Avionics Systems and Architecture
3003
OBJECTIVES:
To introduce the basics of avionics and its need for civil and military aircraft
To impart knowledge about the avionic architecture and various avionics data buses
To gain more knowledge on various avionics subsystems
TOTAL: 45 PERIODS
OUTCOMES:
Ability to build digital avionics architectures
Ability to design navigation systems
Ability to design and perform analysis on air systems
TEXT BOOKS:
1. Helfrick, Albert D., "Principles of Avionics", Avionics Communications Inc., 2004.
2. Collinson, R.P.G., "Introduction to Avionics", Chapman and Hall, 1996.
REFERENCES:
1. Middleton, D.H. (Ed.), "Avionics Systems", Longman Scientific and Technical, Longman Group UK Ltd., England, 1989.
2. Spitzer, C.R., "Digital Avionics Systems", Prentice-Hall, Englewood Cliffs, N.J., U.S.A., 1993.
3. Spitzer, C.R., "The Avionics Handbook", CRC Press, 2000.
4. Pallett, E.H.J., "Aircraft Instruments and Integrated Systems", Longman Scientific.
UNIT I INTRODUCTION TO AVIONICS
Avionics are the electronic systems used on aircraft, artificial satellites,
and spacecraft. Avionic systems include communications, navigation, the display and
management of multiple systems, and the hundreds of systems that are fitted to aircraft to
perform individual functions. These can be as simple as a searchlight for a police
helicopter or as complicated as the tactical system for an airborne early warning platform.
The term avionics is a portmanteau of the words aviation and electronics.
History
Founded in 1957, the Aircraft Electronics Association (AEA) represents more than 1,300
member companies, including government-certified international repair stations specializing
in maintenance, repair and installation of avionics and electronic systems in general aviation
aircraft. The AEA membership also includes manufacturers of avionics equipment,
instrument repair facilities, instrument manufacturers, airframe manufacturers, test equipment
manufacturers, major distributors, engineers and educational institutions.
Aircraft avionics
The cockpit of an aircraft is a typical location for avionic equipment, including
control, monitoring, communication, navigation, weather, and anti-collision systems. The
majority of aircraft power their avionics using 14- or 28-volt DC electrical systems; however,
larger, more sophisticated aircraft (such as airliners or military combat aircraft)
have AC systems operating at 400 Hz, 115 volts AC. There are several major vendors of
flight avionics, including Panasonic Avionics Corporation, Honeywell (which now
owns Bendix/King), Rockwell Collins, Thales Group, GE Aviation Systems, Garmin, Parker
Hannifin, UTC Aerospace Systems and Avidyne Corporation.
Need for Avionics in civil and military aircraft and space systems:
Avionics (defined as a combination of aviation and electronics) are the advanced electronics used in aircraft, spacecraft and satellites. These systems perform various functions including communication, navigation, flight control, display systems, flight management, etc. There is a
great need for advanced avionics in civil, military and space systems.
Civil aircraft
For better flight control, performing computations and increased control over flight
control surfaces.
For navigation, providing information using sensors such as the Attitude and Heading Reference System (AHRS).
Provide air data like altitude, atmospheric pressure, temperature, etc.
Reduce crew workload.
Increased safety for crew and passengers.
Reduction in aircraft weight, which can be translated into an increased number of passengers or longer range.
All weather operation and reduction in aircraft maintenance cost.
Military aircraft
Avionics in fighter aircraft eliminates the need for a second crew member such as a navigator or observer, which helps in reducing training costs.
A single seat fighter is lighter and costs less than an equivalent two seat version.
Improved aircraft performance, control and handling.
Reduction in maintenance cost.
Secure communication.
Space systems
Fly-by-wire control system used for the space vehicle's attitude and translation control.
Sensors used in the spacecraft for obtaining data.
Autopilot redundancy system.
On-board computers used in satellites for processing the data.
An integrated avionics system makes all coordinated information available to the pilot from a single source, gives the software engineer full access to shared data about the situation, mission and systems, and allows the hardware designer to treat the systems as a single unit with ample bandwidth to support processing. The principal subsystems are:
Navigation system
Communication system
Electronic Warfare
Flight Control system
Primary Flight Displays
Navigation System:
Navigation is the determination of position and direction on or above the surface of
the Earth. Avionics can use satellite-based systems (such as GPS and WAAS), ground-based
systems (such as VOR or LORAN), or any combination thereof. Older avionics required a pilot or navigator to plot the intersection of signals on a paper map to determine an aircraft's location; modern systems calculate the position automatically and display it to the flight crew on moving map displays. Navigation information such as aircraft
position, ground speed and track angle (direction of the aircraft relative to true north) is
clearly essential for the aircraft mission whether civil or military. Navigation system is
divided into two types:
1. Dead reckoning navigation systems (DR)
2. Radio Navigation systems
They are: ADF, VOR, DME, OMEGA, GPS, ILS, and TACAN
Communication System:
Communications System connects the flight deck to the ground and the flight deck to
the passengers. On-board communications are provided by public-address systems and
aircraft intercoms. The VHF aviation communication system works on the airband of
118.000 MHz to 136.975 MHz. Each channel is spaced from the adjacent ones by 8.33 kHz
in Europe, 25 kHz elsewhere. VHF is also used for line of sight communication such as
aircraft-to-aircraft and aircraft-to-ATC. Amplitude modulation (AM) is used, and the
conversation is performed in simplex mode. Aircraft communication can also take place
using HF (especially for trans-oceanic flights) or satellite communication.
Electronic Warfare:
Electronic warfare (EW) is any action involving the use of the electromagnetic
spectrum or directed energy to control the spectrum, attack an enemy, or impede enemy
assaults via the spectrum. The purpose of electronic warfare is to deny the opponent the
advantage of, and ensure friendly unimpeded access to, the EM spectrum. EW can be applied
from air, sea, land, and space by manned and unmanned systems, and can target humans,
communications, radar, or other assets.
Subdivisions
Electronic warfare includes three major subdivisions: electronic attack (EA), electronic
protection (EP), and electronic warfare support (ES).
Electronic attack (EA)
Electronic attack (EA) involves the use of EM energy, directed energy, or anti-radiation
weapons to attack personnel, facilities, or equipment with the intent of degrading,
neutralizing, or destroying enemy combat capability. In the case of EM energy, this action is
referred to as jamming and can be performed on communications systems (see Radio
jamming) or radar systems (see Radar jamming and deception).
Electronic Protection (EP)
Electronic protection (EP) involves actions taken to protect personnel, facilities, and equipment from any effects of friendly or enemy use of the electromagnetic spectrum that degrade, neutralize, or destroy friendly combat capability.
Cockpit controls
Primary controls
Generally, the primary cockpit flight controls are arranged as follows: a control yoke or centre stick governs roll and pitch, rudder pedals govern yaw, and the throttle sets engine power.
The control yokes also vary greatly amongst aircraft. There are yokes where roll is controlled
by rotating the yoke clockwise/counter clockwise (like steering a car) and pitch is controlled
by tilting the control column towards you or away from you, but in others the pitch is
controlled by sliding the yoke into and out of the instrument panel (like most Cessnas, such
as the 152 and 172), and in some the roll is controlled by sliding the whole yoke to the left
and right (like the Cessna 162). Centre sticks also vary between aircraft. Some are directly
connected to the control surfaces using cables, others (fly-by-wire airplanes) have a computer
in between which then controls the electrical actuators.
Even when an aircraft uses variant flight control surfaces such as a V-tail
ruddervator, flaperons, or elevons, to avoid pilot confusion the aircraft's flight control system
will still be designed so that the stick or yoke controls pitch and roll conventionally, as will
the rudder pedals for yaw. In some aircraft, the control surfaces are not manipulated with a
linkage. In ultra light aircraft and motorized hang gliders, for example, there is no mechanism
at all. Instead, the pilot just grabs the lifting surface by hand (using a rigid frame that hangs
from its underside) and moves it.
Secondary controls
In addition to the primary flight controls for roll, pitch, and yaw, there are often
secondary controls available to give the pilot finer control over flight or to ease the workload.
The most commonly available control is a wheel or other device to control elevator trim, so
that the pilot does not have to maintain constant backward or forward pressure to hold a
specific pitch attitude (other types of trim, for rudder and ailerons, are common on larger
aircraft but may also appear on smaller ones). Many aircraft have wing flaps, controlled by a
switch or a mechanical lever or in some cases are fully automatic by computer control, which
alter the shape of the wing for improved control at the slower speeds used for takeoff and
landing. Other secondary flight control systems may be available, including slats, spoilers, air
brakes and variable-sweep wings.
Mechanical FCS
The earliest ventures into manned flight were constrained to either tethered balloon rides or
short hops in a glider. The necessity of formal flight control systems was not realized until it
was demonstrated that extended flight times over substantial distances were feasible. The
experiments of Otto Lilienthal in the 1880s through 1890s showed that limited flight control
was possible through a process of weight shifting. Lilienthal discovered that by simply
changing the position of his body relative to the aircraft’s centre of gravity he could affect its
motion in any direction, much like hang gliders do today. Building on the discoveries of
Lilienthal, in the mid 1890s Octave Chanute began development of what we would now call a
mechanical flight control system for use on gliders of his own design. He worked closely
with the Wright brothers and was largely responsible for many of the control systems
innovations present on their 1903 Flyer. Mechanical systems are characterized by a physical
linkage between the pilot and control surfaces, as shown in Figure 1.
The pilot’s control inputs are transferred to the control surfaces via a series of cables
and/or pushrods. This type of FCS proved to be very effective in lightweight and relatively
slow moving aircraft because they were inexpensive to build, simple to maintain, and
provided the best control surface feedback of any FCS. However, mechanical systems tend to
be very sensitive to temperature and are prone to accelerated wear compared to the alternative
methods discussed below. Also, as designers began to build bigger and faster aircraft, they
discovered that the increased aerodynamic forces incident on the control surfaces were
simply too great for pilots to counter. Engineers had to develop a system to augment the
pilot’s commands.
Hydraulic FCS
The first such augmented control system, known as a boosted FCS, appeared in WWII era
aircraft. As Figure 2 depicts, the boosted system retained the physical coupling between the
cockpit and control surfaces with the addition of hydraulic spool valves and ram cylinders
tied in parallel into the input controls and actuators.
Hydraulic flight control systems such as the one shown in Figure were introduced to
cater to aircraft pushing the limits of control surface loading. As the name implies, the
hydraulic system relies primarily on a series of lines and hoses feeding hydraulic actuators
from a pump assembly and fluid reservoir.
In this form of FCS, the pilot is no longer physically connected to the control
surfaces. Rather, the pilot modulates the fluid pressure within the lines via a spool valve
connected to the control yoke or stick. This system has the advantage of being able to
generate massive forces to affect the attitude of any aircraft regardless of size or speed.
Hydraulic systems also afford designers greater flexibility in line routing and actuator
placement because there are no requirements for a “line of sight” coupling between the
cockpit and control surfaces.
Hydraulic Control System
Primary Flight Display (PFD):
Components
While the PFD does not directly use the pitot-static system to physically display flight data, it
still uses the system to make altitude, airspeed, vertical speed, and other measurements
precisely using air pressure and barometric readings. An air data computer analyzes the
information and displays it to the pilot in a readable format. A number of manufacturers
produce PFDs, varying slightly in appearance and functionality, but the information is
displayed to the pilot in a similar fashion.
Layout
The details of the display layout on a primary flight display can vary enormously, depending
on the aircraft, the aircraft's manufacturer, the specific model of PFD, certain settings chosen
by the pilot, and various internal options that are selected by the aircraft's owner (i.e., an
airline, in the case of a large airliner). However, the great majority of PFDs follow a similar
layout convention.
The center of the PFD usually contains an attitude indicator (AI), which gives the pilot
information about the aircraft's pitch and roll characteristics, and the orientation of the
aircraft with respect to the horizon. Unlike a traditional attitude indicator, however, the
mechanical gyroscope is not contained within the panel itself, but is rather a separate device
whose information is simply displayed on the PFD. The attitude indicator is designed to look
very much like traditional mechanical AIs. Other information that may or may not appear on
or about the attitude indicator can include the stall angle, a runway diagram, ILS localizer and
glide-path “needles”, and so on. Unlike mechanical instruments, this information can be
dynamically updated as required; the stall angle, for example, can be adjusted in real time to
reflect the calculated critical angle of attack of the aircraft in its current configuration
(airspeed, etc.). The PFD may also show an indicator of the aircraft's future path (over the
next few seconds), as calculated by onboard computers, making it easier for pilots to
anticipate aircraft movements and reactions.
To the left and right of the attitude indicator are usually the airspeed and altitude indicators,
respectively. The airspeed indicator displays the speed of the aircraft in knots, while the
altitude indicator displays the aircraft's altitude above mean sea level (AMSL). These
measurements are conducted through the aircraft's pitot system, which tracks air pressure
measurements. As in the PFD's attitude indicator, these systems are merely displayed data
from the underlying mechanical systems, and do not contain any mechanical parts (unlike an
aircraft's airspeed indicator and altimeter). Both of these indicators are usually presented as
vertical “tapes”, which scroll up and down as altitude and airspeed change. Both indicators
may often have “bugs”, that is, indicators that show various important speeds and altitudes,
such as V speeds calculated by a flight management system, do-not-exceed speeds for the
current configuration, stall speeds, selected altitudes and airspeeds for the autopilot, and so
on.
The vertical speed indicator, usually next to the altitude indicator, indicates to the pilot how
fast the aircraft is ascending or descending, or the rate at which the altitude changes. This is
usually represented with numbers in "thousands of feet per minute." For example, a
measurement of "+2" indicates an ascent of 2000 feet per minute, while a measurement of "-
1.5" indicates a descent of 1500 feet per minute. There may also be a simulated needle
showing the general direction and magnitude of vertical movement.
At the bottom of the PFD is the heading display, which shows the pilot the magnetic heading
of the aircraft. This functions much like a standard magnetic heading indicator, turning as
required. Often this part of the display shows not only the current heading, but also the
current track (actual path over the ground), current heading setting on the autopilot, and other
indicators.
Other information displayed on the PFD includes navigational marker information, bugs (to
control the autopilot), ILS glideslope indicators, course deviation indicators, altitude
indicator QFE settings, and much more.
Although the layout of a PFD can be very complex, once a pilot is accustomed to it the PFD
can provide an enormous amount of information with a single glance.
Drawbacks
The great variability in the precise details of PFD layout makes it necessary for pilots to study
the specific PFD of the specific aircraft they will be flying in advance, so that they know
exactly how certain data is presented. While the basics of flight parameters tend to be much
the same in all PFDs (speed, attitude, altitude), much of the other useful information
presented on the display is shown in different formats on different PFDs. For example, one
PFD may show the current angle of attack as a tiny dial near the attitude indicator, while
another may actually superimpose this information on the attitude indicator itself. Since the
various graphic features of the PFD are not labelled, the pilot must learn what they all mean
in advance.
A failure of a PFD deprives the pilot of an extremely important source of information. While
backup instruments will still provide the most essential information, they may be spread over
several locations in the cockpit, which must be scanned by the pilot, whereas the PFD
presents all this information on one display. Additionally, some of the less important
information, such as speed and altitude bugs, stall angles, and the like, will simply disappear
if the PFD malfunctions; this may not endanger the flight, but it does increase pilot workload
and diminish situational awareness.
Typical avionics sub systems:
The main avionic sub systems have been grouped into four divisions:
1. Systems, which interface directly with the pilot
2. Aircraft state sensor systems
3. External world sensor systems
4. Task Automation systems
Displays: The display systems provide the visual interface between the pilot and the
aircraft systems and comprise head up display (HUD), Helmet mounted displays (HMD) and
head down display (HDD). Night viewing goggles can also be integrated into the HMD. This
provides the night vision capability enabling the aircraft to operate at night or in conditions of
poor visibility.
Primary flight displays present information such as height, air speed, Mach number, vertical speed, artificial horizon, pitch angle, bank angle, heading and velocity vector.
Navigation displays show aircraft position and track relative to the destination or waypoints, together with navigational information such as distance and time to go. They also present weather radar display information.
Engine data are presented so that the health of the engine can be monitored and any
deviations from the normal can be highlighted.
The aircraft systems such as electrical supply systems, cabin pressurization and fuel
management can be shown easily in line-diagram format on multifunction displays.
Data entry and Control: Data entry and control systems are essential for the crew to
interact with the avionic systems. Such systems range from keyboards to touch panels.
Flight Control: Flight control systems use electronic system technology in two areas
namely auto stabilization systems and fly by wire flight control systems. Most combat and
military aircraft require three-axis auto stabilization (pitch, yaw and roll) systems to achieve acceptable control and handling characteristics across the flight envelope. FBW flight control enables a lighter, higher-performance aircraft compared with an equivalent conventional design, because the natural aerodynamic stability can be reduced or even made negative.
Air data systems: Information on air data quantities such as altitude, calibrated airspeed, vertical speed, true airspeed, Mach number and airstream incidence angle is essential for the control and navigation of the aircraft. The air data computing system calculates these quantities from sensor measurements of static pressure, total pressure, airstream incidence and outside air temperature.
Inertial sensor systems: Attitude and heading information is provided by the inertial sensor systems. These consist of a set of gyros and accelerometers, which measure the aircraft's angular and linear motion about the aircraft axes, together with a computing system which derives the aircraft's attitude and heading from the gyro and accelerometer outputs. These data are utilized in the INS (Inertial Navigation System) to provide aircraft velocity vector information. It is essentially self-contained.
Weather Radar systems: Weather radar is used to detect water droplets and provide warning of storms, cloud turbulence and severe precipitation so that the aircraft can alter course and avoid such turbulent conditions. Otherwise, in severe turbulence, the violence of the vertical gusts can subject the aircraft structure to very high loads and stresses. Modern fighter aircraft use sophisticated multi-mode radars for a ground attack role as well as the prime interception role. In the ground attack or mapping mode, the radar system is also able to generate a map-type display from the radar returns from the ground, enabling specific terrain features to be identified for position fixing and target acquisition.
Task Automation Systems: The main purpose of these systems is to reduce the crew workload by automating and managing as many tasks as appropriate, so that the crew's role becomes a supervisory management one. The types are summarized one by one:
1. Navigation Management Systems: This collects the data of all navigation systems, such as GPS and INS, to provide the best possible estimate of position, ground speed and track.
2. Auto pilot and Flight Management Systems: Modern autopilot systems, in addition to height hold and heading hold, can also provide very precise control of the aircraft flight
path, for example, automatic landing in poor or even zero visibility conditions. In military
applications, auto pilot system in conjunction with a suitable guidance system can provide
automatic terrain following or terrain avoidance. This enables the aircraft to fly at very low
altitudes (100 – 200 ft) so that the aircraft can take advantage of terrain screening and stay
below the radar horizon of enemy radars. The tasks of FMS are:
Flight planning
Navigation Management
Engine control to maintain the planned speed
Control of the aircraft path to follow the optimized planned route
Control of the vertical flight profile
Minimizing the fuel consumption
3. Engine Control Management: Modern jet engines have a Full Authority Digital Engine Control system (FADEC). This automatically controls the flow of fuel to the engine combustion chambers by the fuel control unit so as to provide a closed-loop control of engine thrust in response to the throttle command. It also ensures that engine temperature, speed and acceleration limits are not exceeded. It has a high-integrity, failure-survival control system so that the engine is protected from damage in the event of a failure.
4. House-keeping Management: This covers the automation of background tasks such as:
Fuel Management
Electrical power supply system management
Hydraulic power supply system management
Cabin/cockpit pressurization systems
Warning systems
Environmental control system
Maintenance and monitoring systems
Design approaches:
Design objectives for Avionic Systems:
Systems must be designed with an inverse relationship between the probability of occurrence of a fault and the severity of its effect. The required availability of a function can be achieved by the provision of multiple systems and standby services, together with the capability of detecting failures. The architecture of a system must be designed to ensure sufficient segregation of vital components, so that a single external failure source does not result in multiple system failures.
Physical and environmental failure causes can be mitigated by the use of separate locations for duplicated equipment. Similarly, electrical power supplies, increasingly utilizing digital data buses, must be configured in such a way that an interrupted supply to one bus does not affect the continued operation of systems connected to another bus.
Redundancy: For critical systems, a spare unit can be carried either hot or cold spare, the
former is connected to the data bus, ready to be operational in the event of component failure.
Reliability: Two measures of equipment reliability are generally used; both are related but depend on different factors. The measures are:
Mean Time Between Failures (MTBF): This relates to confirmed component failures, either of the whole device as far as the airline is concerned, or at part level for the manufacturer or maintenance organization.
Mean Time Between Unscheduled Removals (MTBUR): This refers to the number of times that a component is removed from the aircraft on the ground on suspicion of failure, irrespective of whether it is subsequently proved to have failed.
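As a rough numerical illustration of the difference between the two measures (the figures below are invented for the example, not taken from the text), MTBUR is always lower than or equal to MTBF because unscheduled removals include "no fault found" removals as well as confirmed failures:

```python
# Hypothetical fleet figures, chosen only to illustrate the two reliability measures.
operating_hours = 52_000        # total unit operating hours accumulated across the fleet
confirmed_failures = 8          # failures confirmed at part level by the shop
unscheduled_removals = 13       # on-ground removals on suspicion of failure (incl. no fault found)

mtbf = operating_hours / confirmed_failures        # Mean Time Between Failures
mtbur = operating_hours / unscheduled_removals     # Mean Time Between Unscheduled Removals
print(f"MTBF = {mtbf:.0f} h, MTBUR = {mtbur:.0f} h")   # MTBUR <= MTBF
```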
Built-In Test Equipment (BITE): This is an integral part of modern avionic design. BITE is designed to provide a continuous, integrated monitoring system, both in flight and on the ground, whenever power is applied to the aircraft.
Most modern aircraft use cockpit displays and avionics bay read-out to provide access
to the BITE generated data which is retained in the non volatile memory of the aircraft
computer system. These facilities provide post-flight confirmation as well as storage of fault
segment of data, which is useful for further analysis after returning to the main engineering
base.
Automatic Test Equipment (ATE): These test fixtures can be considered as "filtering devices", often designed to prevent the unwarranted flagging of a unit as faulty. ATE is designed to perform a number of roles.
Recent Advances:
Advanced avionics systems can automatically perform many tasks that pilots and
navigators previously did by hand. For example, an area navigation (RNAV) or flight
management system (FMS) unit accepts a list of points that define a flight route, and
automatically performs most of the course, distance, time, and fuel calculations. Once en
route, the FMS or RNAV unit can continually track the position of the aircraft with respect to
the flight route, and display the course, time, and distance remaining to each point along the
planned route. An autopilot is capable of automatically steering the aircraft along the route
that has been entered in the FMS or RNAV system.
Advanced avionics perform many functions and replace the navigator and pilot in
most procedures. However, with the possibility of failure in any given system, the pilot must
be able to perform the necessary functions in the event of an equipment failure. Pilot ability
to perform in the event of equipment failure(s) means remaining current and proficient in
accomplishing the manual tasks, maintaining control of the aircraft manually (referring only
to standby or backup instrumentation), and adhering to the air traffic control (ATC) clearance
received or requested. Pilots of modern advanced avionics aircraft must learn and practice
backup procedures to maintain their skills and knowledge.
Risk management principles require the flight crew to always have a backup or
alternative plan, and/or escape route. Advanced avionics aircraft relieve pilots of much of the
minute-to-minute tedium of everyday flights, but demand much more initial and recurrent
training to retain the skills and knowledge necessary to respond adequately to failures and
emergencies. The FMS or RNAV unit and autopilot offer the pilot a variety of methods of
aircraft operation. Pilots can perform the navigational tasks themselves and manually control
the aircraft, or choose to automate both of these tasks and assume a managerial role as the
systems perform their duties. Similarly, information systems now available in the cockpit
provide many options for obtaining data relevant to the flight.
Advanced avionics systems present three important learning challenges as you
develop proficiency:
1. How to operate advanced avionics systems
2. Which advanced avionics systems to use and when
3. How advanced avionics systems affect the pilot and the way the pilot flies
How To Operate Advanced Avionics Systems: The first challenge is to acquire the “how-
to” knowledge needed to operate advanced avionics. These principles and concepts are
illustrated with a range of equipment by different manufacturers. It is very important that the
pilot obtain the manufacturer’s guide for each system to be operated, as only those materials
contain the many details and nuances of those particular systems. Many systems allow
multiple methods of accomplishing a task, such as programming or route selection.
A proficient pilot tries all methods, and chooses the method that works best for that
pilot for the specific situation, environment, and equipment. Not all aircraft are equipped or
connected identically for the navigation system installed. In many instances, two aircraft with
identical navigation units are wired differently. Obvious differences include slaved versus
non-slaved electronic horizontal situation indicators (EHSIs) or primary flight display (PFD)
units. Optional equipment is not always purchased and installed. The pilot should always
check the equipment list to verify what is actually installed in that specific aircraft. It is also
essential for pilots using this handbook to be familiar with, and apply, the pertinent parts of
the regulations and the Aeronautical Information Manual (AIM). Advanced avionics
equipment, especially navigation equipment, is subject to internal and external failure. You
must always be ready to perform manually the equipment functions which are normally
accomplished automatically, and should always have a backup plan with the skills,
knowledge, and training to ensure the flight has a safe ending.
Which Advanced Avionics Systems to Use and When: The second challenge is learning to
manage the many information and automation resources now available to you in the cockpit.
Specifically, you must learn how to choose which advanced cockpit systems to use, and
when. There are no definitive rules. In fact, you will learn how different features of advanced
cockpit avionics systems fall in and out of usefulness depending on the situation. Becoming
proficient with advanced avionics is learning to use the right tool for the right job at the right
time. In many systems, there are multiple methods of accomplishing the same function. The
competent pilot learns all of these methods and chooses the method that works best for the
specific situation, environment, and equipment.
How Advanced Avionics Systems Affect the Pilot: The third challenge is learning how
advanced avionics systems affect the pilot. Because of the limits of human understanding,
together with the quirks present in computerized electronic systems of any kind, you will
learn to expect, and be prepared to cope with, surprises in advanced systems. Avionics
equipment frequently receives software and database updates, so you must continually learn
system functions, capabilities, and limitations.
The Awareness series presents examples of how advanced avionics systems can
enhance pilot awareness of the aircraft systems, position, and surroundings. You will also
learn how (and why) the same systems can sometimes decrease awareness. Many studies
have demonstrated a natural tendency for pilots to sometimes drift out of the loop when
placed in the passive role of supervising an FMS/RNAV and autopilot. Keeping track of
which modes are currently in use and predicting the future behaviour of the systems is
another awareness skill that you must develop to operate these aircraft safely.
The Risk series provides insights on how advanced avionics systems can help you
manage the risk faced in everyday flight situations. Information systems offer the immediate
advantage of providing a more complete picture of any situation, allowing you to make better
informed decisions about potential hazards, such as terrain and weather. Studies have shown
that these same systems can sometimes have a negative effect on pilot risk-taking behaviour.
You will learn about situations in which having more information can tempt you to take more
risk than you might be willing to accept without the information. This series will help you use
advanced information systems to increase safety, not risk. As much as advanced information
systems have improved the information stream to the cockpit, the inherent limitations of the
information sources and timeliness are still present; the systems are not infallible.
When advanced avionics systems were first introduced, it was hoped that those new
systems would eliminate pilot error. Experience has shown that while advanced avionics
systems do help reduce many types of errors, they have also created new kinds of errors. This
handbook takes a practical approach to pilot error by providing two kinds of assistance in the
form of two series: Common Errors and Catching Errors.
The Common Errors series describes errors commonly made by pilots using advanced
avionics systems. These errors have been identified in research studies in which pilots and
flight instructors participated. The Catching Errors series illustrates how you can use the
automation and information resources available in the advanced cockpit to catch and correct
errors when you make them. The Maintaining Proficiency series focuses on pilot skills that
are used less often in advanced avionics. It offers reminders for getting regular practice with
all of the skills you need to maintain in your piloting repertoire.
The "ilities" of avionics requirements:
Capability: The ability of the system to perform its functions within the constraints that are imposed.
Reliability: The system must be as reliable as possible, since higher reliability generally leads to lower maintenance cost.
Maintainability: A good system needs little and easy maintenance, so it must have built-in test, automated troubleshooting and easy equipment access.
Availability: Systems that are reliable and maintainable will yield high availability of
aircraft. Aircraft that have to be repaired often or take too long to repair are not
contributing to the mission since they are not available to fly.
Retrofitability: The capability of a new design of equipment to be successfully installed and operated in place of older, less capable equipment.
Supportability: A design should use parts and support equipment common to other systems, so that support cost can be spread across more than one system.
Survivability: The capability of the system to continue to function in the presence of a non-nuclear threat. It is a function of susceptibility and vulnerability.
Susceptibility: A measure of the probability that an object will be hit by a given
threat.
Vulnerability: It is a measure of the probability that damage will occur due to the hit.
Flexibility & Adaptability: These measures describe how easily the system can be
changed or expanded as improvements become available and additional functions are
added.
Certifiability: Certification is conducted by the regulatory agencies and is based on detailed, expert examination of all facets of the aircraft design and operation.
Number system conversions: binary to decimal, octal to decimal and hexadecimal to decimal, including conversions of fractional values (the worked examples and their answers are not reproduced here).
• Conversion of binary numbers to octal and hex simply requires grouping the bits of the binary number into groups of three bits for conversion to octal and into groups of four bits for conversion to hex.
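As an illustration of these conversions (the numeric values below are arbitrary examples), the following Python sketch converts a binary string to decimal, shows the grouping of bits into octal digits, and evaluates a fractional binary value using negative powers of two:

```python
# Base conversions using Python's built-ins.
n = int("1010011", 2)            # binary -> decimal: 83
print(n, oct(n), hex(n))         # 83 0o123 0x53

# Octal digits correspond to groups of three bits (hex digits to groups of four).
bits = "001010011"               # the binary number padded on the left to a multiple of 3 bits
octal = "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))
print(octal)                     # 123

# Fractional binary -> decimal: bit weights are negative powers of two.
frac_bits = "101"                # bits after the binary point
value = sum(int(b) * 2 ** -(i + 1) for i, b in enumerate(frac_bits))
print(value)                     # 0.625 (0.101 in binary)
```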
Binary arithmetic: addition, subtraction, multiplication and division (the worked model examples are not reproduced here).
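Since the worked models are not reproduced above, the short Python sketch below illustrates the same operations: binary addition performed long-hand with a ripple carry, with subtraction and multiplication checked using Python's built-in integers (the operands are arbitrary example values):

```python
def add_binary(a: str, b: str) -> str:
    """Add two unsigned binary strings the long-hand way, with a ripple carry."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for x, y in zip(reversed(a), reversed(b)):      # work from the least significant bit
        s = int(x) + int(y) + carry
        out.append(str(s % 2))
        carry = s // 2
    if carry:
        out.append("1")
    return "".join(reversed(out))

print(add_binary("1011", "0110"))   # 10001 (11 + 6 = 17)
print(bin(0b1011 - 0b0110))         # 0b101      (subtraction: 11 - 6 = 5)
print(bin(0b1011 * 0b0110))         # 0b1000010  (multiplication: 11 * 6 = 66)
```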
Fundamentals of logic circuits and computer memory: the memory and storage devices used with avionics computers are divided into different types:
Cache Memory: The speed of CPU is extremely high compared to the access time of
main memory. Therefore the performance of CPU decreases due to the slow speed of main
memory. To decrease the mismatch in operating speed, a small memory chip is attached
between CPU and Main memory whose access time is very close to the processing speed of
CPU. It is called CACHE memory. CACHE memories are accessed much faster than
conventional RAM. It is used to store programs or data currently being executed or
temporary data frequently used by the CPU. Thus cache memory makes the main memory appear to be faster and larger than it really is. Since larger cache memory is also very expensive, its size is normally kept small.
Registers: The CPU processes data and instructions with high speed; there is also
movement of data between various units of computer. It is necessary to transfer the processed
data with high speed. So the computer uses a number of special memory units called
registers. They are not part of the main memory but they store data or information
temporarily and pass it on as directed by the control unit.
Magnetic Tape: Magnetic tapes are used for large computers like mainframe
computers where large volume of data is stored for a longer time. In PC also you can use
tapes in the form of cassettes. Storing data on tapes is inexpensive. Tapes consist
of magnetic materials that store data permanently. It can be 12.5 mm to 25 mm wide plastic
film-type and 500 meter to 1200 meter long which is coated with magnetic material. The
deck is connected to the central processor and information is fed into or read from the tape
through the processor. It’s similar to cassette tape recorder.
Magnetic Disk: You might have seen the gramophone record, which is circular like a
disk and coated with magnetic material. Magnetic disks used in computer are made on the
same principle. It rotates with very high speed inside the computer drive. Data is stored on
both the surface of the disk. Magnetic disks are most popular for direct access storage device.
Each disk consists of a number of invisible concentric circles called tracks. Information is
recorded on tracks of a disk surface in the form of tiny magnetic spots. The presence of a
magnetic spot represents one bit and its absence represents zero bit. The information stored in
a disk can be read many times without affecting the stored data. So the reading operation is
non-destructive. But if you want to write a new data, then the existing data is erased from the
disk and new data is recorded. For Example-Floppy Disk. These are small removable disks
that are plastic coated with magnetic recording material. Floppy disks are typically 3.5″ in
size (diameter) and can hold 1.44 MB of data. This portable storage device is a rewritable
media and can be reused a number of times. Floppy disks are commonly used to move files
between different computers. The main disadvantage of floppy disks is that they can be
damaged easily and, therefore, are not very reliable.
Hard Disk: Another form of auxiliary storage is a hard disk. A hard disk consists of
one or more rigid metal plates coated with a metal oxide material that allows data to be
magnetically recorded on the surface of the platters. The hard disk platters spin at
a high rate of speed, typically 5400 to 7200 revolutions per minute (RPM). Storage capacities of hard disks for personal computers range from 10 GB to 120 GB (one billion bytes is called a gigabyte).
Optical Disk: With every new application and software there is greater demand for
memory capacity. It is the necessity to store large volume of data that has led to the
development of optical disk storage medium. Optical disks can be divided into the following
categories:
1. Compact Disk/ Read Only Memory (CD-ROM)
2. Write Once, Read Many (WORM)
3. Erasable Optical Disk
CD: The Compact Disk (CD) is a portable disk having a data storage capacity between 650 and 700 MB. It can hold a large amount of information such as music, full-motion videos, and text
etc. It contains digital information that can be read, but cannot be rewritten. Separate drives
exist for reading and writing CDs. Since it is a very reliable storage media, it is very often
used as a medium for distributing large amount of information to large number of users. In
fact today most of the software is distributed through CDs.
DVD: The Digital Versatile Disk (DVD) is similar to a CD but has a larger storage capacity and greater clarity. Depending upon the disk type it can store several gigabytes of data (as opposed to around 650 MB on a CD). DVDs are primarily used to store music or movies and can be played back on your television or on the computer. The standard DVD-ROM is not a rewritable medium. It is also termed DVD (Digital Video Disk); DVD-ROM offers over 4 GB of storage (varying with format) and is read only, although many recordable formats exist (e.g., DVD-R, DVD-RW). The data are more highly compacted than on a CD, and a special laser is needed to read them.
Blu-ray Technology: The name is derived from the blue-violet laser used to read and
write data. It was developed by the Blu-ray Disc Association with more than 180 members.
Some companies with the technology are Dell, Sony and LG. The data capacity is very large because Blu-ray uses a blue laser (405 nanometres) instead of a red laser (650 nanometres); this allows the data tracks on the disc to be very compact, with pits less than half the size of those on a DVD. Because the data are so compact, Blu-ray can hold almost five times more data than a single-layer DVD, close to 25 GB. Just like a DVD, Blu-ray can also be recorded in a dual-layer format, which allows the disc to hold up to 50 GB.
Avionics system architectures are broadly classified into three types:
1. Centralized architecture
2. Federated architecture
3. Distributed architecture
A. Centralized Architecture:
This is the traditional, older architecture. In it, signal conditioning and computations take place in one or more computers (LRUs) located in the avionics bay, and the signals are transmitted to and from the rest of the aircraft over one-way data buses. The advantages of this architecture are:
All computers are located in a readily accessible avionics bay.
The environment for the computers is relatively benign, which simplifies equipment
qualification.
The software is more easily written and validated since there are only a few processor
types and a few large programs that can be physically integrated.
The disadvantages are:
Many long buses to collect and distribute data and commands.
Increased vulnerability to damage from a single hazardous event if it were to occur in or near the avionics bay.
Partitioning or brick walling is difficult. It is a feature in an architecture that limits a
failure to the subsystem in which it occurred. In addition, the physical or operational
effects of the failure are not allowed to cascade to the rest of the system. Examples: a short in a power lead or a bit hang-up.
B. Federated Architecture:
This architecture came into use in the early 1980s. In it, each major system, such as thrust management, performs its own processing, sharing input and sensor data from a common set of hardware and subsequently sharing its computed results over data buses. This architecture therefore permits the independent design, configuration and optimization of the major systems while ensuring that they use common data buses. Changes in hardware or software are also relatively easy to make.
C. Distributed Architecture:
This architecture has multiple processors throughout the aircraft that are assigned computing tasks on a real-time basis as a function of mission phase and/or system state. Much of the processing is performed in the sensors or actuators themselves. Its advantages are:
Fewer, shorter buses
Faster program execution
Intrinsic partitioning
Reduced vulnerability (because a hazardous event will not destroy a substantial fraction of the total capability).
The disadvantages are:
More processors are needed, together with more software generation and validation effort and more spares stocking.
Each processor contains a large set of software that performs a variety of functions.
Some processors may be placed in more severe, less accessible environments such as
wings and empennages.
MIL STD 1553B:
Some 1553 applications utilize more than one data bus on a vehicle. This is often done, for
example, to isolate a Stores bus from a Communications bus or to construct a bus system capable of
interconnecting more terminals than a single bus could accommodate. When multiple buses are used,
some terminals may connect to both buses, allowing for communication between them.
Multiplexing:
Multiplexing facilitates the transmission of information along the data bus. It permits the transmission of several signal sources through one communications system.
BUS:
The bus is made up of twisted-shielded pairs of wires to maintain message integrity. MIL-
STD-1553 specifies that all devices in the system will connect to a redundant pair of buses. This
provides a second path for bus traffic should one of the buses be damaged. Signals are only allowed to
appear on one of the two buses at a time. If a message cannot be completed on one bus, the bus
controller may switch to the other bus. In some applications more than one 1553 bus may be
implemented on a given vehicle. Some terminals on the bus may actually connect to both buses.
BUS COMPONENTS:
There are only three functional modes of terminals allowed on the data bus: the bus
controller, the bus monitor, and the remote terminal. Devices may be capable of more than one
function. Figure 1 illustrates a typical bus configuration.
Bus Controller - The bus controller (BC) is the terminal that initiates information transfers on
the data bus. It sends commands to the remote terminals which reply with a response. The bus will
support multiple controllers, but only one may be active at a time. Other requirements, according to
1553, are: (1) it is "the key part of the data bus system," and (2) "the sole control of information
transmission on the bus shall reside with the bus controller."
Bus Monitor - 1553 defines the bus monitor as "the terminal assigned the task of receiving
bus traffic and extracting selected information to be used at a later time." Bus monitors are frequently
used for instrumentation.
Remote Terminal - Any terminal not operating in either the bus controller or bus monitor
mode is operating in the remote terminal (RT) mode. Remote terminals are the largest group of bus
components.
MODULATION:
The signal is transferred over the data bus using serial digital pulse code modulation.
DATA ENCODING:
The type of data encoding used by 1553 is Manchester II biphase. A logic one (1) is
transmitted as a bipolar coded signal 1/0 (in other words, a positive pulse followed by a negative
pulse). A logic zero (0) is a bipolar coded signal 0/1 (i.e., a negative pulse followed by a positive
pulse).
A transition through zero occurs at the midpoint of each bit, whether the rate is a logic one or a logic
zero. Figure 2 compares a commonly used Non Return to Zero (NRZ) code with the Manchester II
biphase level code, in conjunction with a 1 MHz clock.
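The following Python sketch illustrates the Manchester II biphase-level rule described above: each logic 1 becomes a positive half-bit followed by a negative half-bit, each logic 0 the reverse, so every bit has a transition through zero at its midpoint (the symbolic levels ±1 are not the actual bus voltages):

```python
def manchester_ii_encode(bits):
    """Encode a sequence of logic bits as Manchester II biphase-level half-bit symbols.
    Logic 1 -> (+1, -1): positive pulse followed by a negative pulse.
    Logic 0 -> (-1, +1): negative pulse followed by a positive pulse."""
    symbols = []
    for bit in bits:
        symbols.extend([+1, -1] if bit else [-1, +1])
    return symbols

# At the 1 Mbps 1553 bit rate each bit time is 1 us, so each half-bit symbol spans 0.5 us.
print(manchester_ii_encode([1, 0, 1, 1]))   # [1, -1, -1, 1, 1, -1, 1, -1]
```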
WORD FORMATS:
Bus traffic or communications travels along the bus in words. A word in MIL-STD-1553 is a
sequence of 20 bit times consisting of a 3 bit-time sync wave form, 16 bits of data, and 1 parity check
bit. This is the word as it is transmitted on the bus; 1553 terminals add the sync and parity before
transmission and remove them during reception. Therefore, the nominal word size is 16 bits, with the
most significant bit (MSB) first. There are three types of words: command, status, and data. A packet is defined to have no intermessage gaps. The time between the last word of a controller message and the return of the terminal's status word is 4-12 microseconds. The time between the status word and the next controller message is undefined. Figure 3 illustrates these three formats.
COMMAND WORD:
Command words are transmitted only by the bus controller and always consist of:
3 bit-time sync pattern
5 bit RT address field
1 bit Transmit/Receive (T/R) field
5 bit subaddress/mode field
5 bit word count/mode code field
1 parity check bit.
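As a sketch of how the 16 information bits of a command word fit together (field order taken from the list above; the 3 bit-time sync pattern and the parity bit are appended by the terminal hardware and are not modelled here, and the function name and example values are illustrative only):

```python
def pack_1553_command_word(rt_address: int, transmit: bool, subaddress: int, word_count: int) -> int:
    """Pack the 16 information bits of a MIL-STD-1553 command word, MSB first:
    5-bit RT address | 1-bit Transmit/Receive | 5-bit subaddress/mode | 5-bit word count/mode code."""
    assert 0 <= rt_address < 32 and 0 <= subaddress < 32 and 1 <= word_count <= 32
    wc_field = word_count % 32          # a count of 32 is conventionally encoded as 00000
    return (rt_address << 11) | (int(transmit) << 10) | (subaddress << 5) | wc_field

# Example: command RT 5 to receive 4 data words at subaddress 2.
word = pack_1553_command_word(rt_address=5, transmit=False, subaddress=2, word_count=4)
print(f"{word:016b}")                   # 0010100001000100
```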
DATA WORD:
Data words are transmitted either by the BC or by the RT in response to a BC request. The
standard allows a maximum of 32 data words to be sent in a packet with a command word before a
status response must be returned.
Data words always consist of:
3 bit-time sync pattern (opposite in polarity from command and status words)
16 bit data field
1 parity check bit.
STATUS WORD:
Status words are transmitted by the RT in response to command messages from the BC and consist of:
3 bit-time sync pattern (same as for a command word)
5 bit address of the responding RT
11 bit status field
1 parity check bit.
The 11 bits in the status field are used to notify the BC of the operating condition of the RT
and subsystem.
INFORMATION TRANSFERS:
Three basic types of information transfers are defined by 1553:
Bus Controller to Remote Terminal transfers
Remote Terminal to Bus Controller transfers
Remote Terminal to Remote Terminal transfers
These transfers are related to the data flow and are referred to as messages. The basic formats of these
messages are shown in Figure 4.
The normal command/response operation involves the transmission of a command from the BC to a
selected RT address. The RT either accepts or transmits data depending on the type (receive/transmit)
of command issued by the BC. A status word is transmitted by the RT in response to the BC
command if the transmission is received without error and is not illegal.
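As a minimal sketch of the command/response sequence for the first transfer type (BC to RT), the following Python snippet simply lists the words that appear on the bus in order; the word contents are symbolic placeholders rather than packed 1553 words, and the helper name is invented for the example:

```python
from typing import List

def bc_to_rt_transfer(rt_address: int, data_words: List[int]) -> List[str]:
    """List, in order, the words on the bus for a BC-to-RT (receive) transfer:
    a receive command word, up to 32 data words, then the RT's status word."""
    assert 1 <= len(data_words) <= 32
    traffic = [f"CMD  rt={rt_address} receive wc={len(data_words)}"]
    traffic += [f"DATA 0x{w:04X}" for w in data_words]
    traffic.append(f"STAT rt={rt_address}  (returned by the RT after a 4-12 us response gap)")
    return traffic

for word in bc_to_rt_transfer(5, [0x1234, 0xABCD]):
    print(word)
```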
ARINC 429:
The ARINC 429 Specification defines the standard requirements for the transfer of
digital data between avionics systems on commercial aircraft. ARINC 429 is also known as
the Mark 33 DITS Specification. Signal levels, timing and protocol characteristics are defined
for ease of design implementation and data communications on the Mark 33 Digital
Information Transfer System (DITS) bus. ARINC 429 is a privately copyrighted specification developed to provide interchangeability and interoperability of line replaceable units (LRUs) in commercial aircraft. Manufacturers of avionics equipment are under no requirement to comply with the ARINC 429 Specification, but designing avionics systems to
meet the design guidelines provides cross-manufacturer interoperability between functional
units.
The ARINC 429 Specification establishes how avionics equipment and systems
communicate on commercial aircraft. The specification defines electrical characteristics,
word structures and protocol necessary to establish bus communication. ARINC 429 utilizes
the simplex, twisted shielded pair data bus standard Mark 33 Digital Information Transfer
System bus. ARINC 429 defines both the hardware and data formats required for bus
transmission. Hardware consists of a single transmitter – or source – connected to from 1-20
receivers – or sinks – on one twisted wire pair. Data can be transmitted in one direction only
– simplex communication – with bi-directional transmission requiring two channels or buses.
The devices, line replaceable units or LRUs, are most commonly configured in a star or bus-
drop topology. Each LRU may contain multiple transmitters and receivers communicating on
different buses. This simple architecture, almost point-to-point wiring, provides a highly
reliable transfer of data.
A transmitter may ‘talk only’ to a number of receivers on the bus, up to 20 on one wire pair,
with each receiver continually monitoring for its applicable data, but does not acknowledge
receipt of the data. A transmitter may require acknowledgement from a receiver when large
amounts of data have been transferred. This handshaking is performed using a particular
word style, as opposed to a hard wired handshake. When this two way communication format
is required, two twisted pairs constituting two channels are necessary to carry information
back and forth, one for each direction.
Transmission from the source LRU is comprised of 32 bit words containing a 24 bit
data portion containing the actual information, and an 8 bit label describing the data itself.
LRUs have no address assigned through ARINC 429, but rather have Equipment ID numbers
which allow grouping equipment into systems, which facilitates system management and file
transfers. Sequential words are separated by at least 4 bit times of null or zero voltage. By
utilizing this null gap between words, a separate clock signal is unnecessary. Transmission
rates may be either a low speed – 12.5 kbit/s – or a high speed – 100 kbit/s.
Cable Characteristics:
The transmission bus media uses a 78 Ω shielded twisted pair cable. The shield must
be grounded at each end and at all junctions along the bus.
Transmission Characteristics:
ARINC 429 specifies two speeds for data transmission. Low speed operation is stated at 12.5 kbit/s, with an actual allowable range of 12 to 14.5 kbit/s. High speed operation is 100 kbit/s ± 1%. These two data rates cannot be used on the same transmission bus. Data
is transmitted in a bipolar, Return-to-Zero format. This is a tri-state modulation consisting of
HIGH, NULL and LOW states. Transmission voltages are measured across the output
terminals of the source. Voltages presented across the receiver input will be dependent on
line length, stub configuration and the number of receivers connected. The following voltage
levels indicate the three allowable states:
STATE    TRANSMIT             RECEIVE
HIGH     +10.0 V ± 1.0 V      +6.5 V to +13 V
NULL     0 V ± 0.5 V          +2.5 V to -2.5 V
LOW      -10.0 V ± 1.0 V      -6.5 V to -13 V
Waveform Parameters:
Pulse rise and fall times are controlled by RC circuits built into ARINC 429 transmitters. This
circuitry minimizes the overshoot and ringing common with short rise times. Allowable rise and fall times are defined in the specification for both bit rates, as are the bit and ½ bit times.
Word Formats:
ARINC 429 protocol uses a point-to-point format, transmitting data from a single
source on the bus to up to 20 receivers. The transmitter is always transmitting, either data
words or the NULL state. Most ARINC messages contain only one data word consisting of
Binary (BNR), Binary Coded Decimal (BCD) or alphanumeric data encoded using ISO
Alphabet No. 5. File data transfers that send more than one word are also allowed.
ARINC 429 data words are 32 bit words made up of five primary fields:
Parity – 1 bit
Sign/Status Matrix (SSM) – 2 bits
Data – 19 bits
Source/Destination Identifier (SDI) – 2 bits
Label – 8 bits
The only two fields definitively required are the Label and the Parity bit, leaving up to 23 bits
available for higher resolution data representation. Many non-standard word formats have
been adopted by various manufacturers of avionics equipment. Even with the variations
included, all ARINC data is transmitted in 32 bit words. Any unused bits are padded with
zeros.
Parity:
ARINC 429 defines the Most Significant Bit (MSB) of the data word as the Parity bit.
ARINC uses odd parity as an error check to ensure accurate data reception. The number of
Logic 1s transmitted in each word is an odd number, with bit 32 being set or cleared to obtain
the odd count. ARINC 429 specifies no means of error correction, only error detection.
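To make the field layout and the odd-parity rule concrete, the Python sketch below packs a generic 32-bit ARINC 429 word from the five fields listed earlier and sets bit 32 so that the total number of ones is odd. This is a simplified illustration: the label's on-the-wire bit ordering is not modelled, and the field values are arbitrary examples:

```python
def odd_parity_bit(word_31_bits: int) -> int:
    """Return the value of bit 32 that makes the total number of 1s in the word odd."""
    return 0 if bin(word_31_bits).count("1") % 2 == 1 else 1

def pack_arinc429_word(label: int, sdi: int, data: int, ssm: int) -> int:
    """Pack a generic 32-bit ARINC 429 word (bit 1 treated as the least significant bit here):
    bits 1-8 label, bits 9-10 SDI, bits 11-29 data, bits 30-31 SSM, bit 32 odd parity."""
    assert 0 <= label < 2**8 and 0 <= sdi < 2**2 and 0 <= data < 2**19 and 0 <= ssm < 2**2
    word = label | (sdi << 8) | (data << 10) | (ssm << 29)
    return word | (odd_parity_bit(word) << 31)

w = pack_arinc429_word(label=0o310, sdi=0, data=0x12345, ssm=0b11)
print(f"{w:032b}")
assert bin(w).count("1") % 2 == 1     # every completed ARINC 429 word has odd parity
```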
Sign/Status Matrix:
Bits 31-30 are assigned as the Sign/Status Matrix field or SSM. Depending on the word's Label, which indicates the type of data being transmitted, the SSM field can provide different information. This field can be used to indicate the sign or direction of the word's data.
Unit 3 – FLIGHT DECK AND COCKPITS
Control and display technologies: CRT, LED, LCD, EL and plasma panel – Touch screen –
Direct voice input (DVI) – Civil and Military Cockpits: MFDS, HUD, MFK, HOTAS.
Introduction:
Since the early 1900s, the Cathode-Ray Tube or CRT (sometimes called the Braun
Tube) has played an important part in displaying images, movies, and information. A patent
was filed in 1938 for the CRT; however, this was a very simple implementation. Over time,
CRTs have advanced employing many different techniques to increase image precision and
quality. While the technology has been around for several decades and is quite mature, it still
has much room for improvement. While today's CRT displays are much more advanced than
those of a decade ago, they are much simpler than other display technologies. This gives the
CRT several important advantages: cheap to manufacture and the ability to display high
quality images. Due to the low cost, CRTs have a very high resolution to price ratio
compared to other displays [Sherman, 2000]. Another important trait is the ability to display
colors with high fidelity. For example, on a CRT, to display black, no color is displayed at a
certain point giving the area a physically black color; in LCD displays, the closest is a
washed out dark gray color.
The purpose of this report is to explain the underlying mechanisms behind CRT
displays used in computer monitors and television displays. More specifically, how CRTs are
used and the technologies that work together to make the CRT function. Construction,
materials (for example phosphors used), and safe disposal methods will not be covered.
While there are many new technologies coming out and maturing, the CRT is forcing them to
compete on price and image quality. This results in a wide variety of high quality display
technologies for the consumer to choose from. The first section contains the main
components of the CRT: the electron gun, the electron beam deflector, and the screen. The
second section explains how the components work together to make the CRT work. The third
section contains a future view of the technology. In conclusion, the strengths and weaknesses
of the CRT will be discussed.
The cathode ray tube (CRT) is a display device that uses electrons fired at phosphors
to create images. The CRT takes input from an external source and displays it, making other
devices, such as computers useful. The CRT consists of three main components: the electron
gun, the electron beam deflector, and the screen and phosphors (Figure 1).
The Electron Gun:
The electron gun fires electrons, which eventually strike the phosphors; this causes
them to display colors. The electron gun consists of two main components, the cathode and
the electron beam focuser. These two parts work together to fire an electron and help
determine where it will go (Figure 2).
The Cathode. The cathode is made of a metal conductor, usually nickel. Different
voltages are applied at grids G1 and G2 (Figure 2). The difference in voltage between G2 and
the cathode creates a potential, usually between 100 and 1000 volts, that is large enough to pull
electrons off of the cathode [Sherman, 2000]. As the electrons are pulled off the metal, the
voltage potential accelerates them, until they reach a final velocity. They then travel at a
constant velocity until they reach the screen. The larger the velocity is, the more often
electrons are hitting the screen. This is seen as an increase in the brightness. In order to
increase brightness, the voltage potential is increased, which in turn increases the final
velocity. As electrons are being pulled off of the cathode, the metal causes electrons to give
up. Without any additional mechanisms, the metal would eventually have no more electrons
to be pulled off. The electrons are constantly being replenished so that the metal has more
electrons to be pulled off.
The Electron Beam Deflector:
The electron deflector is positioned at the base of the vacuum tube and controls what
part of the screen the electron strikes (Figure 3).
The electron beam deflector, like the focuser, can rely on either an electromagnetic or
electrostatic mechanism. An electrostatic mechanism operates in the same way as described
earlier, except in this situation it would change the path of the electron rather than focus it.
Most television and computer displays use an electromagnetic coil to control the path of the
electron. The deflector determines the path of the electron. There are really two deflectors,
one that controls the position in the x, or horizontal direction and one in the y, or vertical
direction. A current is run through the coil to control where the electron goes. The
electromagnetic force causes the electron to move towards either the left or the right in the
case of the horizontal deflector. The deflector uses the force to change the path of the electron
to determine what part of the screen the electron hits, but not which phosphor. While the
electron travels through the deflector, it accelerates due to the force applied on the electron.
Acceleration means a change in the velocity over time. In this case, the direction of the
velocity is changing. This new direction determines the path of the electron until it hits the
screen (Figure 4).
The Screen:
The screen, which is at the front of the CRT, is what actually displays the images. In
color CRTs, the screen contains inorganic light-emitting phosphors with three different
colors. When they are struck, they give off energy in the form of photons. When the electron
fired from the gun strikes the screen and hits an atom in the phosphor, it transfers its energy
to an electron in the phosphor. The excited electron then rises to a higher energy level. As the
electron falls, it emits energy as heat and visible light. When the electron rises, it can only
rise to certain positions called orbitals (Figure 5). When it falls, it will always emit the
same frequency of light.
Theory of Operation:
The CRT operates by firing an electron beam at phosphors, which give off light. The
electron beam is generated at the cathode in the electron gun. A potential (voltage) is applied,
which strips off and accelerates the electrons. The electrons then travel to the electron
beam focuser, where an electrostatic mechanism is used to focus the beam. After the beam exits
the electron gun, it travels to the electron beam deflector. The deflector has two mechanisms,
one to change the vertical direction and one to change the horizontal direction of the beam.
This allows the electron beam to sweep over the entire screen. When an electron in the beam
strikes a phosphor, it excites an electron in the phosphor. After being excited, the electron
then releases the energy it gained in the form of visible light, which is always the same for that
phosphor. Phosphors emitting red, blue, and green light form a color image. Figure 6 shows
the overall path of the electron beam. The electron beam is constantly being created and
focused. The refresh rate describes how many times the screen is redrawn per second and is
typically 60 Hz or higher.
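The relationship between refresh rate, line count and scanning frequencies can be illustrated
with a small back-of-the-envelope calculation. The resolution, refresh rate and blanking
allowance below are assumed values chosen only for the example:

visible_cols, visible_rows = 1024, 768    # assumed visible pixel grid
refresh_hz = 75                           # assumed screen redraws per second
overhead = 1.25                           # ~25% extra time assumed for retrace/blanking

horizontal_freq = visible_rows * refresh_hz * overhead    # lines scanned per second
pixel_clock = horizontal_freq * visible_cols * overhead   # pixels addressed per second

print(f"horizontal scan frequency ~ {horizontal_freq / 1e3:.1f} kHz")
print(f"pixel clock ~ {pixel_clock / 1e6:.1f} MHz")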
Plasma Panel:
For the past 75 years, the vast majority of televisions have been built around the same
technology: the cathode ray tube (CRT). In a CRT television, a gun fires a beam of electrons
(negatively-charged particles) inside a large glass tube. The electrons excite phosphor atoms
along the wide end of the tube (the screen), which causes the phosphor atoms to light up. The
television image is produced by lighting up different areas of the phosphor coating with
different colors at different Intensities. Cathode ray tubes produce crisp, vibrant images, but
they do have a serious drawback: They are bulky. In order to increase the screen width in a
CRT set, you also have to increase the length of the tube (to give the scanning electron gun
room to reach all parts of the screen). Consequently, any big-screen CRT television is going
to weigh a ton and take up a sizable chunk of a room. Recently, a new alternative has popped
up on store shelves: the plasma flat panel display. These televisions have wide screens,
comparable to the largest CRT sets, but they are only about 6 inches thick.
But the CRT has significant disadvantages. The display lacks clarity, and the overall
size of the set is large. CRTs become bulkier as the screen width increases, because
the length of the tube has to be increased accordingly; this, in turn, increases the weight.
The plasma display is the best remedy for this: its wide screen, small depth and
high-definition clarity are its greatest advantages.
What is Plasma?
If a voltage is applied across the gas, the number of free electrons increases and
unbalances it. These free electrons hit the atoms, knocking loose other electrons. With an
electron missing, an atom acquires a net positive charge and becomes an ion; this mixture of
free electrons and ions is a plasma.
Plasma displays mostly make use of xenon and neon atoms. When energy is
liberated during these collisions, the atoms emit light photons, which are mostly
ultraviolet. Though ultraviolet photons are not visible to us, they play a very important role in
exciting the phosphors that emit the light we can see.
In an ordinary TV, high beams of electrons are shot from an electron gun. These
electrons hit the screen and cause the pixels to light up. The TV has three types of composite
pixel colours which are distributed throughout the screen in the same manner. They are red,
green and blue. These colours when mixed in different proportions can form the other
colours. Thus the TV produces all the colours needed.
Millions of tiny cells filled with gases such as xenon and neon are sandwiched between
two plates of glass. Electrodes are placed between the glass plates in such a way
that they sit in front of and behind each cell. The rear glass plate carries the
address electrodes, positioned behind the cells. The front glass plate carries the transparent
display electrodes, which are surrounded by a dielectric insulating material and covered by a
magnesium oxide protective layer; these sit in front of the cells.
As described earlier, when a voltage is applied the electrodes become charged and ionize the
gas, creating a plasma. Collisions between the ions and electrons then result in the emission
of light photons.
The working of the pixels has been explained earlier. Each pixel has three composite
coloured sub-pixels. When they are mixed proportionally, the correct colour is obtained.
Thousands of colours can be produced, depending on the brightness and contrast of each
sub-pixel. This brightness is controlled with a pulse-width modulation technique: the control
system varies the pulses of current flowing through each cell thousands of times
per second.
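A highly simplified sketch of that pulse-width idea: each sub-pixel is switched on for a number
of time slices proportional to its target intensity. Real panels use weighted sub-fields rather
than the equal slices assumed here.

def subfield_pulses(rgb, slices_per_frame=255):
    """Return how many of the frame's time slices each red/green/blue cell fires for."""
    return {colour: round(value / 255 * slices_per_frame)
            for colour, value in zip("RGB", rgb)}

# A dim orange pixel: red mostly on, green partly on, blue off.
print(subfield_pulses((200, 120, 0)))     # {'R': 200, 'G': 120, 'B': 0}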
1. Plasma displays can be made up to large sizes like 150 inches diagonal.
2. Very low-luminance “dark-room” black level.
3. Very high contrast.
4. The plasma display panel has a thickness of about 2.5 inches, which makes the total
thickness not more than 4 inches.
5. For a 50 inch display, power consumption varies from roughly 50 to 400 watts
depending on the picture content, with darker images consuming less.
6. Displays are shipped in 'shop' mode, which consumes more power than the figures
above; this can be changed to 'home' mode.
7. The panel has a lifetime of almost 100,000 hours, after which its brightness falls to
half the original value.
Plasma TV Resolutions
The resolution of a plasma display varies from the early enhanced-definition (ED) panels to
the modern high-definition displays. The most common ED resolutions were 840×480 and
853×480.
With the emergence of HDTV the resolution also became higher. Modern plasma TVs
have resolutions of 1,024×1,024, 1,024×768, 1,280×768, 1,366×768, 1,280×1,080, and also
1,920×1,080.
In a conventional CRT television, based on the information in a video signal, the set
lights up thousands of tiny dots (called pixels) with a high-energy beam of electrons. In most
systems, there are three pixel colors -- red, green and blue -- which are evenly distributed on
the screen. By combining these colors in different proportions, the television can produce the
entire color spectrum.
The basic idea of a plasma display is to illuminate tiny colored fluorescent lights to
form an image. Each pixel is made up of three fluorescent lights -- a red light, a green light
and a blue light. Just like a CRT television, the plasma display varies the intensities of the
different lights to produce a full range of colors.
If you introduce many free electrons into the gas by establishing an electrical voltage
across it, the situation changes very quickly. The free electrons collide with the atoms,
knocking loose other electrons. With a missing electron, an atom loses its balance. It has a net
positive charge, making it an ion. In plasma with an electrical current running through it,
negatively charged particles are rushing toward the positively charged area of the plasma, and
positively charged particles are rushing toward the negatively charged area.
In this mad rush, particles are constantly bumping into each other. These collisions
excite the gas atoms in the plasma, causing them to release photons of energy. Xenon and
neon atoms, the atoms used in plasma screens, release light photons when they are excited.
Mostly, these atoms release ultraviolet light photons, which are invisible to the human eye.
But ultraviolet photons can be used to excite visible light photons, as we'll see in the next
section.
Both sets of electrodes extend across the entire screen. The display electrodes are
arranged in horizontal rows along the screen and the address electrodes are arranged in
vertical columns. As you can see in the diagram below, the vertical and horizontal electrodes
form a basic grid.
To ionize the gas in a particular cell, the plasma display's computer charges the
electrodes that intersect at that cell. It does this thousands of times in a small fraction of a
second, charging each cell in turn.
When the intersecting electrodes are charged (with a voltage difference between
them), an electric current flows through the gas in the cell. As we saw in the last section, the
current creates a rapid flow of charged particles, which stimulates the gas atoms to release
ultraviolet photons.
The released ultraviolet photons interact with phosphor material coated on the inside
wall of the cell. Phosphors are substances that give off light when they are exposed to other
light. When an ultraviolet photon hits a phosphor atom in the cell, one of the phosphor's
electrons jumps to a higher energy level and the atom heats up. When the electron falls back
to its normal level, it releases energy in the form of a visible light photon.
The phosphors in a plasma display give off colored light when they are excited. Every
pixel is made up of three separate subpixel cells, each with different colored phosphors. One
subpixel has a red light phosphor, one subpixel has a green light phosphor and one subpixel
has a blue light phosphor. These colors blend together to create the overall color of the pixel.
By varying the pulses of current flowing through the different cells, the control
system can increase or decrease the intensity of each subpixel color to create hundreds of
different combinations of red, green and blue. In this way, the control system can produce
colors across the entire spectrum.
The main advantage of plasma display technology is that you can produce a very wide
screen using extremely thin materials. And because each pixel is lit individually, the image is
very bright and looks good from almost every angle. The image quality isn't quite up to the
standards of the best cathode ray tube sets, but it certainly meets most people's expectations.
The biggest drawback of this technology has to be the price. With prices starting at
$4,000 and going all the way up past $20,000, these sets aren't exactly flying off the shelves.
But as prices fall and technology advances, they may start to edge out the old CRT sets. In
the near future, setting up a new TV might be as easy as hanging a picture!
TOUCHSCREEN
A touchscreen is an electronic visual display that can detect the presence and location
of a touch within the display area. The term generally refers to touching the display of the
device with a finger or hand. Touchscreens can also sense other passive objects, such as a
stylus. Touchscreens are common in devices such as all-in-one computers, tablet computers,
and smartphones. The touchscreen has two main attributes. First, it enables one to interact
directly with what is displayed, rather than indirectly with a cursor controlled by a mouse or
touchpad. Secondly, it lets one do so without requiring any intermediate device that would
need to be held in the hand. Such displays can be attached to computers, or to networks as
terminals. They also play a prominent role in the design of digital appliances such as the
personal digital assistant (PDA), satellite navigation devices, mobile phones, and video
games.
Touchscreens are popular in hospitality, and in heavy industry, as well as kiosks such
as museum displays or room automation, where keyboard and mouse systems do not allow a
suitably intuitive, rapid, or accurate interaction by the user with the display's content.
Technologies
There are a variety of touchscreen technologies.
Resistive
A resistive touchscreen panel is coated with thin, electrically conductive and resistive layers;
pressing the panel brings the layers into contact, causing a change in the electrical current that
is registered as a touch event and sent to the controller for processing.
Surface acoustic wave
Surface acoustic wave (SAW) technology uses ultrasonic waves that pass over the
touchscreen panel. When the panel is touched, a portion of the wave is absorbed. This change
in the ultrasonic waves registers the position of the touch event and sends this information to
the controller for processing. Surface wave touchscreen panels can be damaged by outside
elements. Contaminants on the surface can also interfere with the functionality of the
touchscreen.
Capacitive
Surface capacitance
In this basic technology, only one side of the insulator is coated with a conductive
layer. A small voltage is applied to the layer, resulting in a uniform electrostatic field. When
a conductor, such as a human finger, touches the uncoated surface, a capacitor is dynamically
formed. The sensor's controller can determine the location of the touch indirectly from the
change in the capacitance as measured from the four corners of the panel. As it has no
moving parts, it is moderately durable but has limited resolution, is prone to false signals
from parasitic capacitive coupling, and needs calibration during manufacture. It is therefore
most often used in simple applications such as industrial controls and kiosks.
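A rough sketch of how such a controller can turn the four corner measurements into a position:
the nearer the finger is to a corner, the larger that corner's share of the total current. The
simple linear weighting below is an assumption for illustration; real controllers add calibration
and linearisation.

def surface_cap_position(i_tl, i_tr, i_bl, i_br):
    """Estimate (x, y) in the range 0..1 from the currents at the four corners."""
    total = i_tl + i_tr + i_bl + i_br
    x = (i_tr + i_br) / total        # the right-hand corners pull x towards 1
    y = (i_bl + i_br) / total        # the bottom corners pull y towards 1
    return x, y

# A touch near the top-right corner draws the most current through that corner.
print(surface_cap_position(0.8, 2.4, 0.5, 1.3))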
Projected capacitance
The greater resolution of projected capacitive touch (PCT) allows operation without direct contact, such that the
conducting layers can be coated with further protective insulating layers, and operate even
under screen protectors, or behind weather and vandal-proof glass. Due to the top layer of a
PCT being glass, PCT is a more robust solution versus resistive touch technology. Depending
on the implementation, an active or passive stylus can be used instead of or in addition to a
finger. This is common with point of sale devices that require signature capture. Gloved
fingers may or may not be sensed, depending on the implementation and gain settings.
Conductive smudges and similar interference on the panel surface can interfere with the
performance. Such conductive smudges come mostly from sticky or sweaty finger tips,
especially in high humidity environments. Collected dust, which adheres to the screen due to
the moisture from fingertips can also be a problem. There are two types of PCT: Self
Capacitance and Mutual Capacitance.
Mutual capacitance
In mutual capacitive sensors, there is a capacitor at every intersection of each row and
each column. A 16-by-14 array, for example, would have 224 independent capacitors. A
voltage is applied to the rows or columns. Bringing a finger or conductive stylus close to the
surface of the sensor changes the local electrostatic field which reduces the mutual
capacitance. The capacitance change at every individual point on the grid can be measured to
accurately determine the touch location by measuring the voltage in the other axis. Mutual
capacitance allows multi-touch operation where multiple fingers, palms or styli can be
accurately tracked at the same time.
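The scanning idea can be sketched as follows: every row/column intersection has a baseline
capacitance, a touch reduces the mutual capacitance at nearby intersections, and thresholding
the drop across the whole grid locates each touch independently, which is what makes
multi-touch possible. The grid size and threshold below are assumptions for the example.

def find_touches(baseline, measured, threshold=0.2):
    """Return the (row, col) cells whose capacitance dropped by more than the threshold."""
    touches = []
    for r, (b_row, m_row) in enumerate(zip(baseline, measured)):
        for c, (b, m) in enumerate(zip(b_row, m_row)):
            if b - m > threshold:
                touches.append((r, c))
    return touches

baseline = [[1.0] * 4 for _ in range(4)]     # 4 x 4 grid, all cells at their baseline value
measured = [row[:] for row in baseline]
measured[1][2] = 0.6                          # one finger near row 1, column 2
measured[3][0] = 0.7                          # a second finger near row 3, column 0
print(find_touches(baseline, measured))       # [(1, 2), (3, 0)] - both touches resolved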
Self-capacitance
Self-capacitance sensors can have the same X-Y grid as mutual capacitance sensors,
but the columns and rows operate independently. With self-capacitance, the capacitive load
of a finger is measured on each column or row electrode by a current meter. This method
produces a stronger signal than mutual capacitance, but it is unable to resolve accurately
more than one finger, which results in "ghosting", or misplaced location sensing.
Infrared
An infrared touchscreen uses an array of X-Y infrared LED and photodetector pairs
around the edges of the screen to detect a disruption in the pattern of LED beams. These LED
beams cross each other in vertical and horizontal patterns. This helps the sensors pick up the
exact location of the touch. A major benefit of such a system is that it can detect essentially
any input including a finger, gloved finger, stylus or pen. It is generally used in outdoor
applications and point of sale systems which can't rely on a conductor (such as a bare finger)
to activate the touchscreen. Unlike capacitive touchscreens, infrared touchscreens do not
require any patterning on the glass which increases durability and optical clarity of the overall
system.
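A small sketch of the beam-interruption logic: each horizontal and vertical LED/photodetector
pair reports whether its beam is blocked, and the touch is taken as the intersection of the
blocked beams. The beam indices used are arbitrary example values.

def ir_touch_point(blocked_vertical, blocked_horizontal):
    """Return the (x, y) beam indices at the centre of the blocked beams, or None."""
    if not blocked_vertical or not blocked_horizontal:
        return None                                        # nothing is interrupting the grid
    x = sum(blocked_vertical) / len(blocked_vertical)      # centre of the blocked vertical beams
    y = sum(blocked_horizontal) / len(blocked_horizontal)  # centre of the blocked horizontal beams
    return x, y

# A fingertip interrupting vertical beams 11-13 and horizontal beams 4-5.
print(ir_touch_point([11, 12, 13], [4, 5]))                # (12.0, 4.5)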
Dispersive signal technology
Introduced in 2002 by 3M, this system uses sensors to detect the mechanical energy in
the glass that occurs due to a touch. Complex algorithms then interpret this information and
provide the actual location of the touch.[14] The technology claims to be unaffected by dust
and other outside elements, including scratches. Since there is no need for additional
elements on screen, it also claims to provide excellent optical clarity. Also, since mechanical
vibrations are used to detect a touch event, any object can be used to generate these events,
including fingers and stylus. A downside is that after the initial touch the system cannot
detect a motionless finger.
Construction
There are several principal ways to build a touchscreen. The key goals are to
recognize one or more fingers touching a display, to interpret the command that this
represents, and to communicate the command to the appropriate application.
In the most popular techniques, the capacitive or resistive approach, there are typically
four layers:
1. Top polyester coated with a transparent metallic conductive coating on the bottom
2. Adhesive spacer
3. Glass layer coated with a transparent metallic conductive coating on the top
4. Adhesive layer on the backside of the glass for mounting.
When a user touches the surface, the system records the change in the electrical current
that flows through the display.
In each case, the system determines the intended command based on the controls showing
on the screen at the time and the location of the touch.
Development
Most touchscreen technology patents were filed during the 1970s and 1980s and have
expired. Touchscreen component manufacturing and product design are no longer
encumbered by royalties or legalities with regard to patents and the use of touchscreen-
enabled displays is widespread.
The development of multipoint touchscreens facilitated the tracking of more than one
finger on the screen; thus, operations that require more than one finger are possible. These
devices also allow multiple users to interact with the touchscreen simultaneously.
With the growing use of touchscreens, the marginal cost of touchscreen technology is
routinely absorbed into the products that incorporate it and is nearly eliminated.
Touchscreens now have proven reliability. Thus, touchscreen displays are found today in
airplanes, automobiles, gaming consoles, machine control systems, appliances, and handheld
display devices including the Nintendo DS and the later multi-touch enabled iPhones; the
touchscreen market for mobile devices is projected to produce US$5 billion in 2009.
The ability to accurately point on the screen itself is also advancing with the emerging
graphics tablet/screen hybrids.
Screen protectors
Some touchscreens, primarily those employed in smartphones, use transparent plastic
protectors to prevent any scratches that might be caused by day-to-day use from becoming
permanent.
A touch screen is a computer display screen that is also an input device. The screens are
sensitive to pressure; a user interacts with the computer by touching pictures or words on the
screen.
Resistive: A resistive touch screen panel is coated with a thin metallic electrically
conductive and resistive layer that causes a change in the electrical current which is
registered as a touch event and sent to the controller for processing. Resistive touch
screen panels are generally more affordable but offer only 75% clarity and the layer
can be damaged.
Surface wave: Surface wave technology uses ultrasonic waves that pass over the
touch screen panel. When the panel is touched, a portion of the wave is absorbed. This
change in the ultrasonic waves registers the position of the touch event and sends this
information to the controller for processing. Surface wave touch screen panels are the
most advanced of the three types, but they can be damaged by outside elements.
Capacitive: A capacitive touch screen panel is coated with a material that stores
electrical charges. When the panel is touched, a small amount of charge is drawn to
the point of contact. Circuits located at each corner of the panel measure the charge
and send the information to the controller for processing. Capacitive touch screen
panels must be touched with a finger unlike resistive and surface wave panels that can
use fingers and stylus. Capacitive touch screens are not affected by outside elements
and have high clarity.
THE MULTI FUNCTION KEYBOARD (MFK) is an avionics sub-system through which
the pilot interacts to configure mission related parameters like flight plan, airfield database
and communication equipment during initialization and operation flight phase of mission.
The MFK consists of a MOTOROLA 68000 series processor with ROM, RAM and
EEPROM memory. It is connected to one of the 1553B buses used for data communication.
It is also connected to the Multi Function Rotary switch (MFR) through a RS422 interface.
The MFK has a built-in display unit and a keyboard. The display unit comprises a pair of
LCD-based colour graphical displays, as well as a monochrome head-up display. The real-time
operating specifications are very stringent in such applications because the performance
and safety of the aircraft depend on it. Efficient design of the architecture and code is
required for successful operation.
Technology Highlights:
1. pSOS Real- Time OS,
2. 68000 Processor,
3. C and Assembly code,
4. 1553B Bus Protocol
A multifunction control is a panel made up of several multifunction switches; each
switch is capable of performing more than one function. If the switches are push buttons or
keys, the device is called a multifunction keyboard (MFK). Each switch is capable of
inputting different bits of information due to the implementation of a logic network. Thus, it is
essential that the pilot know the significance of each switch actuation. To accomplish this, the
legend for each switch must be appropriate to the function it is serving at the time. Projection
switch hardware changes a legend on the switch itself. Other mechanizations, e.g., plasma
panels, change a legend on a display surface adjacent to the switch. No matter what the type
of mechanization, the essential features of the MFK remain the same. Dedicated, single
purpose master switches enable the pilot to establish an initial set of capabilities for the
multifunction switches. Then, the multifunction switches allow the pilot to perform specific
operations. For example, a plasma panel version of an MFK is shown in Figure 1.
Across the top of the display surface are nine dedicated master switches. The
multifunction switches are mounted in columns on the left and right portions of the bezel and
have legends on them. Each legend appears on the plasma panel next to the switch. The
number of these legends for each switch is limited only by the memory in the digital computer.
In Figure 1, the master switch labelled COMM (for communications) has been selected.
Therefore, the legends appearing next to the multifunction switches indicate a variety of
communication radios which the pilot may wish to control. The next step would be to select
the specific radio to be operated. This selection would change the legends to appropriate
titles for the multifunction switches and would allow the pilot to turn on the radio, change
frequency and so on. Each change of switch function is called a logic level. The MFK
provides tremendous freedom for the cockpit designer in that he can allocate a number of
functions to a single control panel and, thus, reduce the number of control heads and switches
in the cockpit. This design helps the pilot by providing a single, easily reachable control location.
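The logic-level idea can be sketched as a small menu tree: a dedicated master switch selects a
branch, and the legends beside the multifunction switches change at each level. The master
modes, legends and radio names below are invented purely for illustration and are not taken
from any particular aircraft.

MENU = {
    "COMM": {                                   # master switch: communications
        "legends": ["UHF 1", "UHF 2", "VHF", "HF"],
        "UHF 1": {"legends": ["ON/OFF", "FREQ", "PRESET", "SQUELCH"]},
    },
    "NAV": {                                    # master switch: navigation
        "legends": ["VOR", "TACAN", "ILS", "WPT"],
    },
}

def legends_for(path):
    """Walk the menu tree along the selected switches and return the current legends."""
    node = MENU
    for selection in path:
        node = node[selection]
    return node["legends"]

print(legends_for(["COMM"]))             # logic level 1: choose a radio
print(legends_for(["COMM", "UHF 1"]))    # logic level 2: operate that radio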
2. PURPOSE:
The MFK has been designed to integrate the many dedicated control functions found
in present day cockpits into a more efficient arrangement. The purpose of this study was to
examine pilot performance changes while operating MFKs during simulated flight. The
following specific factors related to MFK operation were investigated.
a. MFK Hardware Type: Three types of keyboards were used. The main thrust of the
investigation was to compare two of these (projection switch MFK with a plasma panel
MFK). For each task, one of the MFKs was mounted on the front panel and the other MFK
was mounted on the right console. Both MFKs were evaluated in both locations (Figures 2).
Each of these two keyboards was used in conjunction with a dedicated third keyboard for
some tasks. This dedicated keyboard included switches for mode selection and for digit entry;
hence, it was referred to as a Digit/Mode Panel. It was always located on the left console (see
Figures 2).
b. Logic Level Arrangement: Each MFK had the capability for all four levels of systems
control and information, whereas the Digit/Mode Panel had only the capability to function for
logic levels 1 and/or 4. This study examined logic level arrangements in terms of whether
operations at the four logic levels should be performed on one keyboard only (one of the
MFKs) or on two keyboards (divided between one of the MFKs and the Digit/Mode Panel).
c. Control Stick Location: Another factor investigated in relation to MFK operation was the
effect of both a center and side control stick location on the operation of the MFKs. The
distinction should be made that it was not the intent of this study to evaluate differences in
stick location, but rather the effects of stick location on MFK operation. The MFK being
evaluated in the front location was designated as "primary" for a task. When failures of the
primary MFK were introduced, the other MFK, mounted on the right console, was used as a
"backup". It was expected that a center stick would tend to interfere with the primary MFK
and that a side stick would tend to interfere with the backup MFK.
d. Degraded Mode Performance between MFKs: One crucial drawback to the MFK is the
loss of capability with keyboard failure. This study dealt specifically with this problem in that
it studied the operation of backup MFKs to be used when the primary keyboard fails.
Operation during normal modes involved either the front instrument panel MFK or the front
instrument panel MFK and the Digit/Mode Panel. Failed modes involved operation of either
the right console MFK or the right console MFK and the Digit/Mode Panel. Failures were
initiated only between task events. During a failed mode, either the front panel MFK or the
Digit/Mode Panel became inoperative. The logic levels that had been on that keyboard then
became operable on the right console backup MFK. (Both the projection switch and plasma
panel type MFK were examined in the right console location during failed conditions.)
HOTAS is a shorthand term which refers to the pattern of controls in the modern
fighter aircraft cockpit. Having all switches on the stick and throttle allows the pilot to keep
his "hands on throttle-and-stick", thus allowing him to remain focused on more important
duties than looking for controls in the cockpit. The goal is to improve the pilot's situational
awareness, his ability to manipulate switch and button controls in turbulence, under stress, or
during high G-force maneuvers, to improve his reaction time, to minimize instances when he
must remove his hands from one or the other of the aircraft's controls to use another aircraft
system, and total time spent doing so.
The concept has also been applied to the steering wheels of modern open-wheel
racecars, like those used in Formula One and the Indy Racing League. HOTAS has been
adapted for game controllers used for flight simulators (most such controllers are based on
the F-16 Fighting Falcon's) and in cars equipped with radio controls on the steering wheel. In
the modern military aircraft cockpit the HOTAS concept is sometimes enhanced by the use of
Direct Voice Input to produce the so-called "V-TAS" concept, and augmented with helmet
mounted display systems such as the "Schlem" used in the MiG-29 and Su-27, which allow
the pilot to control various systems using his line of sight, and to guide missiles by simply
looking at the target.
The A-10's HOTAS control stick, in conjunction with the throttle, are the primary
pilot controls of the aircraft. The control stick's primary function is to provide pitch and roll
commands to the aircraft's flight control system. It also has several pushbuttons and hat
switches used to control various aircraft functions. Many of the control stick's controls have
different functions depending on the aircraft's current master mode and sensor of interest.
Some also have different functions depending on whether they are clicked once or held down.
Controls
The A-10 HOTAS reference PDF contains a summary of all of the A-10's HOTAS
switches and their functions.
Trigger
The trigger is located at the normal position in front of the control stick. It is a two-
stage, gun-style trigger which is used to control the aircraft's cannon and precision attitude
control (PAC) system. Depressing the trigger's first stage activates the PAC system, and
depressing the second stage fires the aircraft's GAU-8 Avenger cannon.
The master mode control button (MMCB) is a small grey pushbutton located on the
right side of the control stick, which is used to set the master mode of the IFFCC and HUD.
Short button presses will cycle the HUD mode between NAV, GUNS, CCIP, and CCRP
modes. A long button press selects A-A mode, regardless of the currently selected mode.
The weapon release button is a red pushbutton located on the back of the control stick,
to the left of the trim switch. It is used to fire the currently selected weapon, based on master
mode and DSMS settings. Some weapons require the button to be held for a second before
the weapon is released.
The trim hat switch is located immediately to the right of the weapon release button. It
is used to adjust the aircraft's pitch and roll trim, relieving control forces on the stick and
reducing the pilot's workload. Pressing the switch forward adjusts the pitch trim nose-down,
and pulling it back adjusts the pitch trim nose-up. Moving the switch to the left or right
adjusts the roll trim in the corresponding direction.
Yaw trim is not controlled by the trim hat switch - it is set using a knob on the SAS control
panel.
The data management switch (DMS) is used to control various functions of the
current sensor of interest. The functions activated by the DMS are dependent on the current
SOI, as well as the length of time that the DMS is held down. The table below summarizes
these functions.
Like the DMS, the target management switch (TMS) controls various functions of the
current SOI. The selected function is dependent on the current SOI, as well as the length of
time that the TMS is held down. The table below summarizes these functions.
Countermeasures switch
The countermeasures switch (CMS) is located on the left side of the control stick grip,
roughly where the pilot's thumb is located. It is used to control the operation of the aircraft's
countermeasure set, allowing the pilot to quickly choose, activate, and deactivate
countermeasure programs. It has the following functions:
In addition to the four directions, the entire switch may be pressed in to toggle the jammer
function of the AN/ALQ-131 electronic countermeasures pod.
The nosewheel steering button is a small grey button located at the front bottom of the
control stick (intended to be pressed by the pilot's pinky). On ground, the NWS button
toggles the nosewheel steering system, allowing the rudder pedals to control the aircraft on
the ground. In air, it is used to trigger the targeting pod's laser, and also to disconnect from
the tanker during in-air refueling.
DIRECT VOICE INPUT (DVI): (sometimes called voice input control (VIC)) is a style of
human–machine interaction "HMI" in which the user makes voice commands to issue
instructions to the machine. It has found some usage in the design of the cockpits of several
modern military aircraft, particularly the Eurofighter Typhoon, the F-35 Lightning II, the
Dassault Rafale and the JAS 39 Gripen, having been trialled on earlier fast jets such as the
Harrier AV-8B and F-16 VISTA. A study has also been undertaken by the Royal Netherlands
Air Force using voice control in a F-16 simulator. The USAF initially wanted DVI for the
Lockheed Martin F-22 Raptor, but it was finally judged too technically risky and was
abandoned. DVI systems may be "user-dependent" or "user-independent". User-dependent
systems require a personal voice template to be created by the pilot which must then be
loaded onto the aircraft before flight. User-independent systems do not require any personal
voice template and will work with the voice of any user.
In 2006 Zon and Roerdink, at the National Aerospace Laboratory in the Netherlands,
examined the use of Direct Voice Input in the "GRACE" simulator, in an experiment in
which twelve pilots participated. Although the hardware performed well, the researchers
discovered that, before installation in a real aircraft their DVI system would need some
improvement, since operation of the DVI took more time than the existing manual method.
They recommended a number of improvements, and suggested that all of these issues were of
a technological nature and thus seemed feasible to solve. They concluded that in cockpits,
especially during emergencies where pilots have to
operate the entire aircraft on their own, a DVI system might be very relevant. During other
situations it seemed to be interesting but not of crucial importance.
UNIT IV INTRODUCTION TO NAVIGATION SYSTEMS
Radio navigation – ADF, DME, VOR, LORAN, DECCA, OMEGA, ILS, MLS – Inertial
Navigation Systems (INS) – Inertial sensors, INS block diagram – Satellite navigation
systems – GPS.
Navigation Systems:
The US began the GPS project in 1973 to overcome the limitations of previous navigation systems,
integrating ideas from several predecessors, including a number of classified engineering design
studies from the 1960s.[2] The U.S. Department of Defense (DoD) developed the system, which
originally used 24 satellites. It became fully operational in 1995. Bradford Parkinson, Roger L.
Easton, and Ivan A. Getting are credited with inventing it.
Advances in technology and new demands on the existing system have now led to efforts to
modernize the GPS system and implement the next generation of GPS Block IIIA satellites and Next
Generation Operational Control System (OCX). Announcements from Vice President Al Gore and
the White House in 1998 initiated these changes. In 2000, the U.S. Congress authorized the
modernization effort, GPS III.
In addition to GPS, other systems are in use or under development. The Russian Global Navigation
Satellite System (GLONASS) was developed contemporaneously with GPS, but suffered from
incomplete coverage of the globe until the mid-2000s. There are also the planned European
Union Galileo positioning system, India's Indian Regional Navigation Satellite System, and the
Chinese BeiDou Navigation Satellite System.
Fundamentals
The GPS concept is based on time. The satellites carry very stable atomic clocks that are synchronized
to each other and to ground clocks. Any drift from true time maintained on the ground is corrected
daily. Likewise, the satellite locations are monitored precisely. GPS receivers have clocks as well—
however, they are not synchronized with true time, and are less stable. GPS satellites continuously
transmit their current time and position. A GPS receiver monitors multiple satellites and solves
equations to determine the exact position of the receiver and its deviation from true time. At a
minimum, four satellites must be in view of the receiver for it to compute four unknown quantities
(three position coordinates and clock deviation from satellite time).
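A hedged sketch of that four-unknown solution: given at least four satellite positions and
pseudoranges, iterate a linearised least-squares fit for the receiver position (x, y, z) and the
clock bias (expressed here in metres). The satellite coordinates and pseudoranges are
textbook-style example numbers, and a real receiver also corrects for satellite clock error,
atmospheric delays and so on.

import numpy as np

def solve_position(sat_pos, pseudoranges, iterations=10):
    est = np.zeros(4)                                # [x, y, z, clock bias] initial guess
    for _ in range(iterations):
        diffs = sat_pos - est[:3]                    # vectors from the estimate to each satellite
        ranges = np.linalg.norm(diffs, axis=1)       # geometric ranges
        residuals = pseudoranges - (ranges + est[3])
        H = np.hstack([-diffs / ranges[:, None],     # partial derivatives w.r.t. x, y, z
                       np.ones((len(ranges), 1))])   # partial derivative w.r.t. clock bias
        est += np.linalg.lstsq(H, residuals, rcond=None)[0]
    return est

# Four example satellites roughly 20,000 km up, with measured pseudoranges in metres.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
pr = np.array([21.11e6, 20.91e6, 21.19e6, 20.95e6])
print(solve_position(sats, pr))                      # position (m) and clock bias (m)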
The typical TACAN onboard user panel has control switches for setting the channel (corresponding to
the desired surface station's assigned frequency), the operation mode for either Transmit/Receive
(T/R, to get both bearing and range) or Receive Only (REC, to get bearing but not range). Capability
was later upgraded to include an Air-to-Air mode (A/A) where two airborne users can get relative
slant-range information. Depending on the installation, Air-to-Air mode may provide range, closure
(relative velocity of the other unit), and bearing, though an air-to-air bearing is noticeably less precise
than a ground-to-air bearing.
Operation
TACAN in general can be described as the military version of the VOR/DME system. It operates in
the frequency band 960-1215 MHz. The bearing unit of TACAN is more accurate than a standard
VOR since it makes use of a two-frequency principle, with 15 Hz and 135 Hz components, and
because UHF transmissions are less prone to signal bending than VHF.
The distance measurement component of TACAN operates with the same specifications as civil
DMEs. Therefore, to reduce the number of required stations, TACAN stations are frequently co-
located with VOR facilities. These co-located stations are known as VORTACs. A VORTAC is a station
composed of a VOR for civil bearing information and a TACAN for military bearing information and
military/civil distance measuring information. The TACAN transponder performs the function of a
DME without the need for a separate, co-located DME. Because the rotation of the antenna creates a
large portion of the azimuth (bearing) signal, if the antenna fails, the azimuth component is no longer
available and the TACAN downgrades to a DME only mode.
Radar
Radar is an object-detection system that uses radio waves to determine the range, altitude, direction,
or speed of objects. It can be used to detect aircraft, ships, spacecraft, guided missiles, motor
vehicles, weather formations, and terrain. The radar dish (or antenna) transmits pulses of radio waves
or microwaves that bounce off any object in their path. The object returns a tiny part of the wave's
energy to a dish or antenna that is usually located at the same site as the transmitter.
Radar was secretly developed by several nations before and during World War II. The
term RADAR was coined in 1940 by the United States Navy as an acronym for Radio
Detection And Ranging. The term radar has since entered English and other languages as a common
noun, losing all capitalization.
The modern uses of radar are highly diverse, including air and terrestrial traffic control, radar
astronomy, air-defense systems, antimissile systems; marine radars to locate landmarks and other
ships; aircraft anticollision systems; ocean surveillance systems, outer space surveillance
and rendezvous systems; meteorological precipitation monitoring; altimetry and flight control
systems; guided missile target locating systems; and ground-penetrating radar for geological
observations. High tech radar systems are associated with digital signal processing and are capable of
extracting useful information from very high noise levels.
Other systems similar to radar make use of other parts of the electromagnetic spectrum. One example
is "lidar", which uses ultraviolet, visible, or near infrared light from lasers rather than radio waves.
Principles
Radar receivers are usually, but not always, in the same location as the transmitter. Although the
reflected radar signals captured by the receiving antenna are usually very weak, they can be
strengthened by electronic amplifiers. More sophisticated methods of signal processing are also used
in order to recover useful radar signals.
The weak absorption of radio waves by the medium through which they pass is what enables radar sets
to detect objects at relatively long ranges—ranges at which other electromagnetic wavelengths, such
as visible light, infrared light, and ultraviolet light, are too strongly attenuated. Such weather
phenomena as fog, clouds, rain, falling snow, and sleet that block visible light are usually transparent
to radio waves. Certain radio frequencies that are absorbed or scattered by water vapour, raindrops, or
atmospheric gases (especially oxygen) are avoided in designing radars, except when their detection is
intended.
Principles of Operation
The Decca Navigator System consisted of a number of land-based radio beacons organised
into chains. Each chain consisted of a master station and three (occasionally two) slave stations,
termed Red, Green and Purple. Ideally, the slaves would be positioned at the vertices of an equilateral
triangle with the master at the centre. The baseline length, that is, the master-slave distance, was
typically 60–120 nautical miles (110–220 km).
Each station transmitted a continuous wave signal that, by comparing the phase difference of the
signals from the Master and one of the Slaves, resulted in a set of hyperbolic lines of position called
a pattern. As there were three Slaves there were three patterns, termed Red, Green and Purple. The
patterns were drawn on nautical charts as a set of hyperbolic lines in the appropriate colour. Receivers
identified which hyperbola they were on, and a position could be plotted at the intersection of
hyperbolae from different patterns, usually by using the pair whose angle of cut was closest to
orthogonal.
Detailed Principles of Operation:
When two stations transmit at the same phase-locked frequency, the difference in phase between
the two signals is constant along a hyperbolic path. Of course, if two stations transmit on the
same frequency, it is practically impossible for the receiver to separate them; so instead of all
stations transmitting at the same frequency, each chain was allocated a nominal frequency, 1f,
and each station in the chain transmitted at a harmonic of this base frequency, as follows:
Station Harmonic Frequency (kHz)
Master 6f 85.000
Purple Slave 5f 70.833
Red Slave 8f 113.333
Green Slave 9f 127.500
The frequencies given are those for Chain 5B, known as the English Chain, but all chains used similar
frequencies between 70 kHz and 129 kHz.
Decca receivers multiplied the signals received from the Master and each Slave by different values to
arrive at a common frequency (least common multiple, LCM) for each Master/Slave pair, as
follows:
Pattern   Slave Harmonic   Slave Multiplier   Master Harmonic   Master Multiplier   Common Frequency
Purple    5f               ×6                 6f                ×5                  30f
Red       8f               ×3                 6f                ×4                  24f
Green     9f               ×2                 6f                ×3                  18f
It was phase comparison at this common frequency that resulted in the hyperbolic lines of position.
The interval between two adjacent hyperbolas on which the signals are in phase was called a lane.
Since the wavelength of the common frequency was small compared with the distance between the
Master and Slave stations there were many possible lines of position for a given phase difference, and
so a unique position could not be arrived at by this method.
Other receivers, typically for aeronautical applications, divided the transmitted frequencies down to
the basic frequency (1f) for phase comparison, rather than multiplying them up to the LCM frequency.
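The frequency relationships above can be checked with a short calculation. For the English
Chain the base frequency 1f is 85 kHz / 6; each Master/Slave pair is multiplied up to the
common comparison frequency, and the lane width on the baseline is half a wavelength at that
frequency (a rounded speed-of-light value is assumed):

C = 3e8                       # propagation speed in m/s (approximation)
f1 = 85_000 / 6               # base frequency 1f in Hz (Master = 6f = 85 kHz)

patterns = {                  # slave harmonic, slave multiplier, master multiplier
    "Purple": (5, 6, 5),      # 5f x 6 = 6f x 5 = 30f
    "Red":    (8, 3, 4),      # 8f x 3 = 6f x 4 = 24f
    "Green":  (9, 2, 3),      # 9f x 2 = 6f x 3 = 18f
}

for name, (slave_h, slave_mult, master_mult) in patterns.items():
    common = slave_h * slave_mult * f1               # comparison frequency in Hz
    assert common == 6 * master_mult * f1            # the same value reached via the Master
    lane_width = C / (2 * common)                    # lane width on the baseline in metres
    print(f"{name}: comparison {common / 1e3:.0f} kHz, lane width ~ {lane_width:.0f} m")

For the Red pattern, for example, this gives a comparison frequency of 340 kHz and a lane
roughly 440 m wide on the baseline.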
An instrument approach procedure chart (or 'approach plate') is published for each ILS approach to
provide the information needed to fly an ILS approach during instrument flight rules (IFR) operations.
A chart includes the radio frequencies used by the ILS components or nav aids and the prescribed
minimum visibility requirements.
Principle of operation
An aircraft approaching a runway is guided by the ILS receivers in the aircraft by performing
modulation depth comparisons. Many aircraft can route signals into the autopilot to fly the approach
automatically. An ILS consists of two independent sub-systems. The localizer provides lateral
guidance; the glide slope provides vertical guidance.
Localizer:
A localizer is an antenna array normally located beyond the approach end of the runway and generally
consists of several pairs of directional antennas. Two signals are transmitted on one of 40 ILS
channels. One is modulated at 90 Hz, the other at 150 Hz. These are transmitted from co-located
antennas. Each antenna transmits a narrow beam, one slightly to the left of the runway centreline, the
other slightly to the right.
The localizer receiver on the aircraft measures the difference in the depth of modulation (DDM) of the
90 Hz and 150 Hz signals. The depth of modulation for each of the modulating frequencies is 20
percent when the receiver is on the centreline. The difference between the two signals varies
depending on the deviation of the approaching aircraft from the centreline.
If there is a predominance of either 90 Hz or 150 Hz modulation, the aircraft is off the centreline. In
the cockpit, the needle on the instrument part of the ILS (the omni-bearing indicator (nav
indicator), horizontal situation indicator (HSI), or course deviation indicator (CDI)) shows that the
aircraft needs to fly left or right to correct the error to fly toward the centre of the runway. If the DDM
is zero, the aircraft is on the LOC centreline coinciding with the physical runway centreline. The pilot
controls the aircraft so that the indicator remains centered on the display (i.e., it provides lateral
guidance). Full-scale deflection of the instrument corresponds to a DDM of 15.5%.
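A minimal sketch of that DDM logic, using the figures quoted above (20 per cent modulation
depth for each tone on the centreline, full-scale deflection at a DDM of 0.155):

FULL_SCALE_DDM = 0.155

def localizer_needle(depth_90, depth_150):
    """Needle deflection as a fraction of full scale; the sign shows which tone predominates."""
    ddm = depth_90 - depth_150                 # difference in depth of modulation
    deflection = ddm / FULL_SCALE_DDM
    return max(-1.0, min(1.0, deflection))     # the needle stops at full-scale deflection

print(localizer_needle(0.20, 0.20))            # on the centreline: 0.0
print(localizer_needle(0.26, 0.14))            # 90 Hz predominates: deflected off centre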
Glide slope, or path (GS, or GP):
A glide slope station uses an antenna array sited to one side of the runway touchdown zone. The GS
signal is transmitted on a carrier frequency using a technique similar to that for the localizer. The
centre of the glide slope signal is arranged to define a glide path of approximately 3° above horizontal
(ground level). The beam is 1.4° deep (0.7° below the glide-path centre and 0.7° above).
The pilot controls the aircraft so that the glide slope indicator remains centered on the display to
ensure the aircraft is following the glide path to remain above obstructions and reach the runway at
the proper touchdown point (i.e., it provides vertical guidance).
Outer marker
The outer marker is normally located 7.2 kilometres (3.9 nmi; 4.5 mi) from the threshold, except that
where this distance is not practical, the outer marker may be located between 6.5 and 11.1 kilometres
(3.5 and 6.0 nmi; 4.0 and 6.9 mi) from the threshold. The modulation is repeated Morse-style dashes
of a 400 Hz tone (--) ("M"). The cockpit indicator is a blue lamp that flashes in unison with the
received audio code. The purpose of this beacon is to provide height, distance, and equipment
functioning checks to aircraft on intermediate and final approach. In the United States, an NDB is often
combined with the outer marker beacon in the ILS approach (called a Locator Outer Marker, or
LOM). In Canada, low-powered NDBs have replaced marker beacons entirely.
Middle marker
The middle marker should be located so as to indicate, in low visibility conditions, the missed
approach point, and the point at which visual contact with the runway is imminent, ideally at a distance
of approximately 3,500 ft (1,100 m) from the threshold. The modulation is repeated alternating Morse-
style dots and dashes of a 1.3 kHz tone at the rate of two per second (·-·-) ("Ä" or "AA"). The cockpit
indicator is an amber lamp that flashes in unison with the received audio code. In the United States,
middle markers are not required so many of them have been decommissioned.
Inner marker
The inner marker, when installed, shall be located so as to indicate in low visibility conditions the
imminence of arrival at the runway threshold. This is typically the position of an aircraft on the ILS as
it reaches Category II minima, ideally at a distance of approximately 1,000 ft (300 m) from the
threshold. The modulation is repeated Morse-style dots at 3 kHz (····) ("H"). The cockpit indicator is a
white lamp that flashes in unison with the received audio code.
DME is similar to secondary radar, except in reverse. The system was a post-war development of the
IFF (identification friend or foe) systems of World War II. To maintain compatibility, DME is
functionally identical to the distance measuring component of TACAN.
Knowledge of the aircraft’s position is a basic requirement for air navigation and one means
of satisfying this requirement is to present the pilot with bearing and distance information.
Bearing information may be derived in a variety of ways, some of which are via VOR or
ADF systems. Distance information may be derived from radar or by DME, which is a form
of radar.
In primary radar a short pulse is transmitted and the time interval from transmission to
reception of the reflected pulse is measured. As the speed of an electromagnetic pulse
through the atmosphere is 300 000 kilometres per second, or one nautical mile in 6.2 micro-
seconds, the distance between the transmitter and the target can be calculated. In the case of
radar sited on the ground an aircraft target may be easily identified and the distance measured
readily due to its relative freedom from other reflecting objects. If radar is installed in an
aircraft, precise identification of specific ground targets, for all practical purposes, is very
difficult to effect due to mass reflection from surrounding objects. Hence primary radar is
supplemented by additional equipment at the target to enable distance to be reliably measured
to the necessary degree of accuracy. When primary radar is supplemented to accomplish this
task it then becomes a form of secondary radar.
In secondary radar, pulses known as interrogation pulses are transmitted and when received at
the target they are passed through a ‘gate’ and then trigger transmission of reply pulses back
to the initial source where the time interval may be measured and displayed as distance. The
‘gate’ in the target receiver is an electronic device which is preset to receive only matching
pulses.
In the DME system the interrogating equipment, known as the ‘Interrogator’, is installed in
the aircraft and the target, located on the ground, is referred to as the ‘Transponder’ or
‘Ground Beacon’.
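The timing arithmetic can be sketched as follows: the interrogator measures the total time from
transmitting a pulse pair to receiving the reply, subtracts the fixed delay deliberately inserted
by the transponder (50 microseconds is the usual figure), and converts the remaining round-trip
time to distance at roughly 12.4 microseconds per nautical mile out and back (twice the 6.2
microseconds per nautical mile quoted above):

REPLY_DELAY_US = 50.0            # transponder's fixed internal delay, in microseconds
US_PER_NM_ROUND_TRIP = 12.4      # ~6.2 us per nautical mile in each direction

def dme_slant_range_nm(elapsed_us):
    """Slant range in nautical miles from the total interrogation-to-reply time."""
    return (elapsed_us - REPLY_DELAY_US) / US_PER_NM_ROUND_TRIP

print(dme_slant_range_nm(546.0))   # an aircraft about 40 NM from the ground beacon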
DME complies with the standards prescribed by the International Civil Aviation Organisation
(ICAO) and is installed at all international airports, at all capital city airports and many
regional airports in Australia and along routes serving international traffic. It was developed
from a composite distance and bearing facility known as ‘Tactical Air Navigation’ (TACAN)
which was designed in the USA as an aid to military aircraft. The VOR fulfills the bearing
requirements for civil aviation navigation; hence this component of the TACAN system is not
used to assist civil air operations. A combined VOR/TACAN installation is commonly
referred to as ‘VORTAC’. Where TACAN is not installed for military purposes then a DME,
manufactured to the same specifications as the DME portion of TACAN, is installed. This is
referred to as VOR/DME.
The DME measures the distance in a straight line to the ground beacon (the slant range), not
the distance from a point on the ground vertically below the aircraft (ground range). The
difference is generally insignificant, except when directly over a beacon, where the distance
shown will be the aircraft's height above the beacon.
Operation
Aircraft use DME to determine their distance from a land-based transponder by sending and receiving
pulse pairs – two pulses of fixed duration and separation. The ground stations are typically co-located
with VORs. A typical DME ground transponder system for en-route or terminal navigation will have
a 1 kW peak pulse output on the assigned UHF channel.
A low-power DME can be co-located with an ILS glide slope antenna installation where it provides
an accurate distance to touchdown function, similar to that otherwise provided by ILS marker
beacons.
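The pulse-pair timing can be turned into distance in much the same way as primary radar, except that the ground transponder waits a fixed reply delay before answering. The sketch below assumes a 50 microsecond reply delay, a typical DME figure that is not stated in the text above, so treat it as an assumption.

C = 299_792_458.0           # speed of light in m/s
METRES_PER_NM = 1852.0
REPLY_DELAY_S = 50e-6       # assumed fixed transponder reply delay (typical value)

def dme_slant_range_nm(total_elapsed_s: float) -> float:
    """Slant range from the interrogation-to-reply time measured in the aircraft."""
    two_way_propagation_s = total_elapsed_s - REPLY_DELAY_S
    return (C * two_way_propagation_s / 2.0) / METRES_PER_NM

# An elapsed time of about 173.6 microseconds corresponds to roughly 10 NM slant range.
print(round(dme_slant_range_nm(173.6e-6), 1))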
A VOR ground station sends out an omnidirectional master signal, and a highly directional second
signal is propagated by a phased antenna array and rotates clockwise in space 30 times a second. This
signal is timed so that its phase (compared to the master) varies as the secondary signal rotates, and
this phase difference is the same as the angular direction of the 'spinning' signal (so that when the
signal is being sent 90 degrees clockwise from north, the signal is 90 degrees out of phase with the
master). By comparing the phase of the secondary signal with the master, the angle (bearing) to the
aircraft from the station can be determined. This bearing is then displayed in the cockpit of
the aircraft, and can be used to take a fix as in earlier ground-based radio direction finding (RDF)
systems. This line of position is called the "radial" from the VOR. The intersection of two radials
from different VOR stations on a chart gives the position of the aircraft. VOR stations are fairly short
range: the signals are useful for up to 200 miles.
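Plotting the intersection of two radials can also be done numerically. The sketch below uses a flat-earth chart and made-up station positions purely for illustration:

import math

def fix_from_two_radials(sta1, radial1_deg, sta2, radial2_deg):
    """Intersect two VOR radials on a flat (x east, y north) chart.
    Stations are (x, y) positions in NM; radials are bearings FROM each station,
    measured clockwise from north. Returns the fix, or None if the radials are parallel."""
    d1 = (math.sin(math.radians(radial1_deg)), math.cos(math.radians(radial1_deg)))
    d2 = (math.sin(math.radians(radial2_deg)), math.cos(math.radians(radial2_deg)))
    # Solve sta1 + t1*d1 = sta2 + t2*d2 for t1 (Cramer's rule).
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(denom) < 1e-9:
        return None
    dx, dy = sta2[0] - sta1[0], sta2[1] - sta1[1]
    t1 = (dx * (-d2[1]) - dy * (-d2[0])) / denom
    return (sta1[0] + t1 * d1[0], sta1[1] + t1 * d1[1])

# Hypothetical geometry: station A at the origin, station B 60 NM due east.
# The 045 radial from A and the 315 radial from B cross at (30, 30).
print(fix_from_two_radials((0, 0), 45, (60, 0), 315))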
VOR stations broadcast a VHF radio composite signal including the navigation signal, station's
identifier and voice, if so equipped. The navigation signal allows the airborne receiving equipment to
determine a bearing from the station to the aircraft (direction from the VOR station in relation to
Magnetic North). The station's identifier is typically a three-letter string in Morse code. The voice
signal, if used, is usually the station name, in-flight recorded advisories, or live flight service
broadcasts. At some locations, this voice signal is a continuous recorded broadcast of Hazardous
Inflight Weather Advisory Service or HIWAS.
Operation
VORs are assigned radio channels between 108.0 MHz and 117.95 MHz (with 50 kHz spacing); this
is in the Very High Frequency (VHF) range. The first 4 MHz is shared with the Instrument landing
system (ILS) band. To leave channels for ILS, in the range 108.0 to 111.95 MHz, the 100 kHz digit is
always even, so 108.00, 108.05, 108.20, 108.25, and so on are VOR frequencies but 108.10, 108.15,
108.30, 108.35 and so on, are reserved for ILS in the US. The VOR encodes azimuth (direction from
the station) as the phase relationship between a reference signal and a variable signal. The omni-
directional signal contains a modulated continuous wave (MCW) 7 wpm Morse code station
identifier, and usually contains an amplitude modulated (AM) voice channel. The conventional 30 Hz reference signal is frequency modulated (FM) on a 9,960 Hz subcarrier. The variable amplitude
modulated (AM) signal is conventionally derived from the lighthouse-like rotation of a directional
antenna array 30 times per second. Although older antennas were mechanically rotated, current
installations scan electronically to achieve an equivalent result with no moving parts. When the signal
is received in the aircraft, the two 30 Hz signals are detected and then compared to determine the
phase angle between them. The phase angle by which the AM signal lags the FM subcarrier signal is
equal to the direction from the station to the aircraft, in degrees from local magnetic north at the time
of installation, and is called the radial. The Magnetic Variation changes over time so the radial may
be a few degrees off from the present magnetic variation. VOR stations have to be flight inspected
and the azimuth is adjusted to account for magnetic variation.
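The 30 Hz phase comparison just described can be sketched directly. The example below is illustrative signal processing under simplified assumptions, not any particular receiver's implementation: it measures the angle by which a simulated variable tone lags the reference tone.

import numpy as np

def radial_from_30hz_tones(reference, variable, sample_rate_hz):
    """Radial in degrees: the angle by which the variable (AM) 30 Hz tone lags the
    reference (FM-subcarrier) 30 Hz tone, both already demodulated to baseband."""
    t = np.arange(len(reference)) / sample_rate_hz

    def lag_of(tone):
        # Correlate against a 30 Hz quadrature pair to recover the tone's phase lag.
        i = np.sum(tone * np.cos(2 * np.pi * 30 * t))
        q = np.sum(tone * np.sin(2 * np.pi * 30 * t))
        return np.arctan2(q, i)

    return np.degrees(lag_of(variable) - lag_of(reference)) % 360

# A variable tone lagging the reference by 135 degrees corresponds to the 135 radial.
fs = 9600
t = np.arange(fs) / fs
ref = np.cos(2 * np.pi * 30 * t)
var = np.cos(2 * np.pi * 30 * t - np.radians(135))
print(round(radial_from_30hz_tones(ref, var, fs), 1))   # ~135.0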
This information is then fed over an analog or digital interface to one of four common types of
indicator: an omni-bearing indicator (OBI), a horizontal situation indicator (HSI), a radio
magnetic indicator (RMI), or an area navigation (RNAV) computer.
Radio Direction Finding (RDF)
RDF systems can be used with any radio source, although the size of the receiver antenna is a
function of the wavelength of the signal; very long wavelengths (low frequencies) require very large
antennas, and are generally used only on ground-based systems. These wavelengths are nevertheless
very useful for marine navigation as they can travel very long distances and "over the horizon", which
is valuable for ships when the line-of-sight may be only a few tens of kilometres. For aerial use,
where the horizon may extend to hundreds of kilometres, higher frequencies can be used, allowing the
use of much smaller antennas. An automatic direction finder, often tuned to commercial AM
radio broadcasters, is a feature of almost all modern aircraft.
For the military, RDF systems are a key component of signals intelligence systems and
methodologies. The ability to locate the position of an enemy broadcaster has been invaluable
since World War I, and played a key role in World War II's Battle of the Atlantic. It is estimated that the
UK's advanced "huff-duff" systems were directly or indirectly responsible for 24% of all U-
Boats sunk during the war. Modern systems often use phased array antennas to allow rapid beam
forming for highly accurate results. These are generally integrated into a wider electronic
warfare suite.
Several distinct generations of RDF systems have been used over time, following the development of
new electronics. Early systems used mechanically rotated antennas that compared signal strengths in
different directions, and several electronic versions of the same concept followed. Modern systems
use phase comparison or Doppler techniques, which are generally simpler to automate.
Modern pseudo-Doppler direction finder systems consist of a number of small antennas fixed to a
circular card, with all of the processing occurring in software.
Early British radar sets were also referred to as RDF, which was a deception tactic. However, the
terminology was not inaccurate; the Chain Home systems used separate omni-directional broadcasters
and large RDF receivers to determine the location of the targets.
Operation
Radio Direction Finding works by comparing the signal strength of a directional antenna pointing in
different directions. At first, this system was used by land and marine-based radio operators, using a
simple rotatable loop antenna linked to a degree indicator. This system was later adopted for both
ships and aircraft, and was widely used in the 1930s and 1940s. On pre-World War II aircraft, RDF
antennas are easy to identify as the circular loops mounted above or below the fuselage. Later loop
antenna designs were enclosed in an aerodynamic, teardrop-shaped fairing. In ships and small boats,
RDF receivers first employed large metal loop antennas, similar to aircraft, but usually mounted atop
a portable battery-powered receiver.
In use, the RDF operator would first tune the receiver to the correct frequency, then manually turn the
loop, either listening or watching an S meter to determine the direction of the null (the direction at
which a given signal is weakest) of a long wave (LW) or medium wave (AM) broadcast beacon or
station (listening for the null is easier than listening for a peak signal, and normally produces a more
accurate result). This null was symmetrical, and thus identified both the correct degree heading
marked on the radio's compass rose as well as its 180-degree opposite. While this information
provided a baseline from the station to the ship or aircraft, the navigator still needed to know
beforehand if he was to the east or west of the station in order to avoid plotting a course 180-degrees
in the wrong direction. By taking bearings to two or more broadcast stations and plotting the
intersecting bearings, the navigator could locate the relative position of his ship or aircraft.
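The figure-of-eight response of a simple loop, and the 180-degree ambiguity that results from it, can be modelled in a few lines. The sketch below is a simplified illustrative model in which the indicator is assumed to read the bearing directly at the null:

import math

def loop_response(indicated_heading_deg, station_bearing_deg):
    """Relative signal strength of an idealized loop antenna (figure-of-eight pattern),
    modelled so that the null falls when the indicated heading equals the bearing."""
    return abs(math.sin(math.radians(indicated_heading_deg - station_bearing_deg)))

def null_headings(station_bearing_deg):
    """Scan the loop through 360 degrees and return the headings at which the signal nulls."""
    readings = {h: loop_response(h, station_bearing_deg) for h in range(360)}
    floor = min(readings.values())
    return sorted(h for h, v in readings.items() if v - floor < 1e-9)

# A station bearing 070 gives nulls at 070 and 250: the loop alone cannot say which is
# correct, which is why the sense antenna described below is needed.
print(null_headings(70))   # [70, 250]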
Later, RDF sets were equipped with rotatable ferrite loopstick antennas, which made the sets more
portable and less bulky. Some were later partially automated by means of a motorized antenna (ADF).
A key breakthrough was the introduction of a secondary vertical whip or 'sense' antenna that
substantiated the correct bearing and allowed the navigator to avoid plotting a bearing 180 degrees
opposite the actual heading. The U.S. Navy RDF model SE 995 which used a sense antenna was in
use during World War I. After World War II, there were many small and large firms making
direction finding equipment for mariners, including Apelco, Aqua Guide, Bendix, Gladding (and its
marine division, Pearce-Simpson), Ray Jefferson, Raytheon, and Sperry. By the 1960s, many of these
radios were actually made by Japanese electronics manufacturers, such as Panasonic, Fuji Onkyo,
and Koden Electronics Co., Ltd. In aircraft equipment, Bendix and Sperry-Rand were two of the
larger manufacturers of RDF radios and navigation instruments.
ADF Receiver : allows the pilot to tune the desired station and to select the mode of operation. The
signal is received, amplified, and converted to an audible voice or Morse code transmission, and it
powers the bearing indicator.
Control Box (Digital Readout Type) : Most modern aircraft have this type of control in the cockpit.
In this equipment the tuned frequency is displayed as a digital readout. The ADF automatically
determines the bearing to the selected station and displays it on the RMI.
Antenna : The aircraft carries two antennas, called the LOOP antenna and the SENSE antenna. The
ADF receives signals on both the loop and sense antennas. The loop antenna in common use today is
a small flat antenna without moving parts. Within the antenna are several coils spaced at various
angles. The loop antenna senses the direction of the station by the strength of the signal on each coil
but cannot determine whether the bearing is TO or FROM the station. The sense antenna provides
this latter information.
Bearing Indicator : displays the bearing to station relative to the nose of the aircraft.
Relative Bearing is the angle formed by the line drawn through the center line of the aircraft and a line
drawn from the aircraft to the radio station.
Magnetic Bearing is the angle formed by a line drawn from aircraft to the radio station and a line
drawn from the aircraft to magnetic north (Bearing to station).
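The two bearings are related through the aircraft's magnetic heading: the magnetic bearing to the station equals the magnetic heading plus the relative bearing, wrapped to 0-360 degrees. This formula is standard ADF practice rather than something stated above, so treat the sketch as illustrative:

def magnetic_bearing_to_station(magnetic_heading_deg, relative_bearing_deg):
    """Magnetic bearing = magnetic heading + relative bearing, modulo 360 degrees."""
    return (magnetic_heading_deg + relative_bearing_deg) % 360

# Heading 330 magnetic with the ADF needle showing 070 relative puts the station
# on a magnetic bearing of 040.
print(magnetic_bearing_to_station(330, 70))   # 40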
Omega Navigation System
Omega could determine position to a precision of ±2.2 km (1.4 mi). Later radio navigation
systems were more accurate.
Previous systems
Taking a "fix" in any navigation system requires the determination of two measurements.
Typically these are taken in relation to fixed objects like prominent landmarks or the known
location of radio transmission towers. By measuring the angle to two such locations, the
position of the navigator can be determined. Alternately, one can measure the angle and
distance to a single object, or the distance to two objects.
The introduction of radio systems during the 20th century dramatically increased the
distances over which measurements could be taken. Such a system also demanded much
greater accuracies in the measurements – an error of one degree in angle might be acceptable
when taking a fix on a lighthouse a few miles away, but would be of limited use when used
on a radio station 300 miles (480 km) away. A variety of methods were developed to take
fixes with relatively small angle inaccuracies, but even these were generally useful only for
short-range systems.
The same electronics that made basic radio systems work introduced the possibility of
making very accurate time delay measurements. This enabled accurate measurement of the
delay between the transmission and reception of the signal. The delay measurement could be
used to determine the distance between the transmitter and the receiver. The problem was knowing
when the transmission was initiated. With radar, this was simple, as the transmitter and receiver were
usually at the same location. Measuring the delay between sending the signal and receiving
the echo allowed accurate range measurement.
For other uses, air navigation for instance, the receiver would have to know the precise time
the signal was transmitted. This was not generally possible using electronics of the day.
Instead, two stations were synchronized by using one of the two transmitted signals as the
trigger for the second signal. By comparing the measured delay between the two signals, and
comparing that with the known delay, the aircraft's position was revealed to lie along a
curved line in space. By making two such measurements against widely separated stations,
the resulting lines would intersect at two locations. These locations were normally far enough
apart to allow conventional navigation systems, like dead reckoning, to eliminate the
incorrect position solution.
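Each such time-difference measurement pins the receiver to a hyperbola: the measured delay fixes the difference in distance to the two stations. A minimal sketch follows, using flat geometry and ignoring the fixed master-to-secondary trigger delay for simplicity:

import math

C_KM_PER_S = 299_792.458   # speed of light in km/s

def range_difference_km(time_difference_s):
    """Difference in distance to the two stations implied by the measured time difference."""
    return C_KM_PER_S * time_difference_s

def on_line_of_position(point, station_a, station_b, time_difference_s, tolerance_km=1.0):
    """True if a candidate (x, y) point lies on the hyperbolic line of position."""
    delta = math.dist(point, station_a) - math.dist(point, station_b)
    return abs(delta - range_difference_km(time_difference_s)) < tolerance_km

# Stations 600 km apart; a 1 millisecond difference means the receiver is about
# 300 km closer to station B than to station A.
a, b = (0.0, 0.0), (600.0, 0.0)
print(round(range_difference_km(1e-3), 1))             # ~299.8 km
print(on_line_of_position((450.0, 0.0), a, b, 1e-3))   # True (450 km from A, 150 km from B)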
Atomic clocks
Key to the operation of the hyperbolic system was the use of one transmitter to broadcast the
"master" signal, which was used by the "secondaries" as their trigger. This limited the
maximum range over which the system could operate. For very short ranges, tens of
kilometres, the trigger signal could be carried by wires. Over long distances, over-the-air
signalling was more practical, but all such systems had range limits of one sort or another.
Very long distance radio signaling is possible, using longwave techniques (low frequencies),
which enables a planet-wide hyperbolic system. However, at those ranges, radio signals do
not travel in straight lines, but reflect off various regions above the Earth known collectively
as the ionosphere. At medium frequencies, this appears to "bend" or refract the signal beyond
the horizon. At lower frequencies, VLF and ELF, the signal will reflect off the ionosphere
and ground, allowing the signal to travel great distances in multiple "hops". However, it is
very difficult to synchronize multiple stations using these signals, as they might be received
multiple times from different directions at the end of different hops.
The problem of synchronizing very distant stations was solved with the introduction of
the atomic clock in the 1950s, which became commercially available in portable form by the
1960s. Depending upon type, e.g. rubidium, cesium, hydrogen, the clocks had an accuracy on
the order of 1 part in 10^10 to better than 1 part in 10^12, or a drift of roughly 1 second in 30,000
years. This is more accurate than the timing system used by the master/secondary stations.
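As a quick back-of-the-envelope check of those figures (simple arithmetic, not taken from the text):

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def drift_seconds(fractional_accuracy, years):
    """Worst-case accumulated timing error for a clock of the given fractional accuracy."""
    return fractional_accuracy * years * SECONDS_PER_YEAR

# 1 part in 1e10 drifts a few milliseconds per year; 1 part in 1e12 needs roughly
# thirty thousand years to accumulate a full second of error.
print(round(drift_seconds(1e-10, 1), 4))           # ~0.0032 s per year
print(round(1 / (1e-12 * SECONDS_PER_YEAR)))       # ~31688 years per second of drift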
However, the United States Navy had a distinct need for just such a system, as they were in
the process of introducing the TRANSIT satellite navigation system. TRANSIT was designed
to allow measurements of location at any point on the planet, with enough accuracy to act as
a reference for an inertial navigation system (INS). Periodic fixes re-set the INS, which could
then be used for navigation over longer periods of time and distances.
TRANSIT had the distinct disadvantage that it generated two possible locations for any given
measurement. This is true for hyperbolic systems like Loran as well, but the separation
between the two locations is a function of the accuracy of the system, and in the case of
TRANSIT the two solutions were close enough together that other navigation systems could not
provide the accuracy needed to resolve which was correct. Loran offered enough accuracy to resolve
the fix, but did not have the global coverage of TRANSIT. This produced the need for a new system
with global coverage and accuracy on the order of a few kilometres. The combination of
TRANSIT and the new OMEGA produced a highly accurate global navigation system.
Omega was approved for development in 1968 with eight transmitters and the ability to
achieve a four mile (6 km) accuracy when fixing a position. Each Omega station transmitted
a sequence of three very low frequency (VLF) signals (10.2 kHz, 13.6 kHz, 11.333... kHz in
that order) plus a fourth frequency which was unique to each of the eight stations. The
duration of each pulse (ranging from 0.9 to 1.2 seconds, with 0.2 second blank intervals
between each pulse) differed in a fixed pattern, and repeated every ten seconds; the 10-
second pattern was common to all 8 stations and synchronized with the carrier phase angle,
which itself was synchronized with the local master atomic clock. The pulses within each 10-
second group were identified by the first 8 letters of the alphabet within Omega publications
of the time.
The envelope of the individual pulses could be used to establish a receiver's internal timing
within the 10-second pattern. However, it was the phase of the received signals within each
pulse that was used to determine the transit time from transmitter to receiver. Using
hyperbolic geometry and radionavigation principles, a position fix with an accuracy on the
order of 5–10 kilometres (3.1–6.2 mi) was realizable over the entire globe at any time of the
day. Omega employed hyperbolic radionavigation techniques and the chain operated in the
VLF portion of the spectrum between 10 and 14 kHz. Near the end of its service life of 26
years, Omega evolved into a system used primarily by the civil community. By receiving
signals from three stations, an Omega receiver could locate a position to within 4 nautical
miles (7.4 km) using the principle of phase comparison of signals. Omega stations used very
extensive antennas to transmit at their very low frequencies. This is because wavelength is
inversely proportional to frequency (wavelength in meters = 299,792,458 / frequency in Hz),
and transmitter efficiency is severely degraded if the length of the antenna is shorter than 1/4
wavelength. They used grounded or insulated guyed masts with umbrella antennas, or wire-
spans across both valleys and fjords. Some Omega antennas were the tallest constructions on
the continent where they stood or still stand.
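The antenna problem follows directly from the wavelength formula quoted above; the sketch below works out the quarter-wave length at the 10.2 kHz common frequency:

C = 299_792_458.0   # speed of light in m/s

def wavelength_m(frequency_hz):
    """Wavelength in metres = 299,792,458 / frequency in Hz, as quoted above."""
    return C / frequency_hz

# The 10.2 kHz Omega signal has a wavelength of about 29.4 km, so even a quarter-wave
# radiator would need to be roughly 7.3 km long - hence the valley spans and
# umbrella antennas described above.
lam = wavelength_m(10.2e3)
print(round(lam / 1000, 1), round(lam / 4 / 1000, 1))   # 29.4 7.3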
When six stations of the eight-station chain became operational in 1971, day-to-day operations were
managed by the United States Coast Guard in partnership with Argentina, Norway, Liberia,
and France. The Japanese and Australian stations became operational several years later.
Coast Guard personnel operated two US stations: one in LaMoure, North Dakota and the
other in Kaneohe, Hawaii on the island of Oahu.
Due to the success of the Global Positioning System, the use of Omega declined during the
1990s, to a point where the cost of operating Omega could no longer be justified. Omega was
shut down permanently on 30 September 1997. Several of the towers were then soon
demolished. Some of the stations, such as the LaMoure station, are now used for submarine
communications.