96 Proc
VOLUME XXXII
1996
INTERNATIONAL TELEMETERING CONFERENCE COMMITTEE
HUGH F. PRUSS
1915-1996
Hugh F. Pruss
Hugh Pruss, engineer, founder, pioneer, professional, associate and friend, passed
from this mortal existence on 4 June 1996.
Hugh was born in Banks, Oregon, 12 November 1915. He and his family relocated
to Berkeley, California in 1919. After some early technical training, he worked in San
Diego for the Consolidated-Vultee Corporation in the electrical systems section of their
B-24 production. The early 1950s found him with Raymond Rosen Products in
Philadelphia, where he began his engineering specialization in telemetry. He was appointed
Vice President and Chief Engineer of the Telemetering Corporation of America in the
mid-1950s. In this capacity Hugh was a major contributor to the critical test instrumentation used
to develop and perfect many of the nation's first guided missile weapons, such as the
Corporal, Titan, and others. He began work with Teledyne Corporation, in Newbury Park,
California, in the early 1970s, again as a telemetry systems development and production
engineer. He retired in 1985.
On 30 April 1964, Hugh, with Robert G. Brown, Robert C. Barto and Arnold E.
Bentz signed the Articles of Incorporation of the International Foundation for Telemetering
and is thus one of the founders of the IFT, which continues today as the sponsor of the
annual International Telemetering Conference and the Telemetering Standards
Coordination Committee. The first ITC, designated ITC/USA-65 by its founders and
organizers, was held about a year later, 18-22 May 1965, in Washington, D.C. Hugh
served as one of the organizers of many of the conferences and was Conference General
Chairman of ITC/USA-74. He was, of course, one of the original members of the IFT
Board of Directors. He served as a Director, Vice President, President and Director
Emeritus until his recent death--some 31 years of selfless, dedicated contribution to our
industry.
Hugh is survived by his wife Pat. They married in 1968, living in California until
Hugh’s retirement. They later lived on Kauai, Hawaiian Islands, for about nine years. In
1992 they relocated to their current home on Hilton Head Island, South Carolina. Hugh is
also survived by four sons, Edward, James, Larry and Timothy.
Other than telemetry, Hugh’s “great passion” was fishing. Thus, those of us who
claim an association with telemetry, join together, with great affection for our departed
associate and friend, in wishing him an eternity of pleasant, restful fishing. He has
earned it.
36th ANNUAL REPORT OF THE
TELEMETERING STANDARDS COORDINATION COMMITTEE
The TSCC, following its standard practice, held two meetings this past reporting period.
One was held concurrent with the International Telemetering Conference; the other was
held on April 30, 1996 at the Hilton Hotel, Pasadena, California, hosted by the Jet
Propulsion Laboratory in conjunction with the 1996 Plenary Workshop of the Consultative
Committee for Space Data Systems (CCSDS).
The TSCC, with its broad representation of members from government, aerospace industry
users and manufacturers of telemetering equipment, continues to serve as a forum for
discussion, review of telemetry standards, and dissemination of information regarding
standards for the telemetering community.
Notable membership activity this year included election of the first European member,
enabled by last year’s decision to strengthen and broaden the committee by increasing the
number of members to 16. A mutual relationship was established between the TSCC and
its corresponding European standards committee. Furthermore, the TSCC has moved into
the computer age in two areas: 1) all members are now accessible by e-mail, and 2) the
Committee has set up its own home page on the World Wide Web, available at:
http://tecnet1.jcte.jcs.mil:8000/TSCC/index.html
During the year, the committee conducted technical reviews of “pink sheets” for Test
Methods for Telemetry Systems (IRIG Standard 118, Volume II) and Test Procedures for
Telemetry Transmitters (IRIG Standard 118, Volume 5, Chapter 5). In addition, status
reports were presented on CCSDS activities and radio frequency encroachment issues.
Respectfully submitted,
M. L. MacMedan
Chairman, TSCC
A MESSAGE FROM THE PRESIDENT OF
THE INTERNATIONAL FOUNDATION FOR TELEMETERING
The ITC/USA/96 staff was focused on spreading the ITC message in the most efficient
and effective way possible. With the addition of our own Home Page on the World Wide
Web (www.telemetry.org), the reality of telemetry is reaching further around the world.
My thanks to Chuck Buchheit, who serves on both the ITC and the IFT, for establishing our
IFT/ITC Home Page. Now you may access current information on the conference and our
exhibitors whenever you desire. My special thanks for all the work provided by Jean
Buchheit producing another CD-ROM version of our proceedings that includes
proceedings from prior years. You may have also noticed some innovative changes in our
advertising program introduced by Scott Wood. Thanks to all of you for making effective
communication the cornerstone of this 1996 conference.
It is my honor to assume the presidency of the International Foundation for Telemetering
(IFT) this year from Dr. Jim Means. The IFT Board and I wish to thank Jim for 6 years of service as IFT
President. He has been instrumental in adding value to our conference every year and has
encouraged two new universities to open their curriculums to telemetering. The IFT has
been busy this year maintaining itself as a non-profit organization by awarding more than
$240,000 to the three universities we support. The Board is also considering a new ITC
spring conference in a more easterly location that will focus on new telemetry applications.
More details will be provided as we firm up the program, and if you are interested in
supporting this activity, please get in touch with Warren Price or me.
Next year’s ITC will be our academia year in Las Vegas. Our theme will be “The Role of
Academia in Telemetering - A Future Built on a Rich Tradition.” ITC/USA/97 will
offer each of you added insight into how your ITC dollars are being used to further the
telemetering profession. Mr. Randy Dobbs from Computer Sciences Corporation (CSC)
and Dr. Steve Horan from New Mexico State University will serve as General and
Technical Chairs, respectively. We look forward to bringing you this important conference
and hope you find it as informative as ever!
FOREWORD TO THE 1996 PROCEEDINGS
The ITC/USA '96 Conference Committee and the International Foundation for
Telemetering are pleased to present the Proceedings of the thirty-second annual
International Telemetering Conference. Last year we made CD-ROMs of the conference
proceedings available on a limited basis. This year both hard copy proceedings and
CD-ROMs are available. The papers in these publications are invaluable references for the
solution of real problems encountered in the field of Telemetering and will continue to be
so for years to come.
Over twenty-five technical sessions with over 100 papers have been arranged consistent
with our conference theme, “Telemetry -- Linking Reality to the Virtual World”. This
theme associates the measuring and transmitting of data with the burgeoning capabilities in
computer networking, communications and information transfer impacting our professional
as well as private lives. Visualization of reality, or truth, via the “as good as”, or virtual
world, is a timely association with the art and science of Telemetering. And, we are
continuing to expand our Telemetering horizons in the medical and automotive arenas.
Over ten papers are presented by representatives of five other countries testifying to the
international influence of the conference.
We are delighted to present prominent speakers for this conference from both customer
and contractor sides of the U. S. Army acquisition process. Lieutenant General Ronald V.
Hite, Military Deputy to the Secretary of the Army, will speak during the opening/awards
session. Mr. Ed Larson, Senior Vice President of Engineering and Advanced Programs,
Lockheed Martin Aeronutronic, will speak during the Keynote Luncheon. Both these
gentlemen will speak from extensive experience with the U. S. Army acquisition process.
The conference also provides other opportunities for professional progression through short
courses on Telemetering and related subjects. This year four short courses are offered
which will enrich the experienced and aspiring Telemetering professional. Over the years,
many of us have benefited professionally by the excellent presentation of new equipment
and techniques by our exhibitors as well as their sponsorship of the ITC planning staff.
The surplus revenue from the conference operations makes possible the continuation of
academic scholarships and advancement of the Telemetering community.
Lastly, we would like to thank the ITC/USA '96 committee for their commitment to this
outstanding conference. The staff continues to improve the quality of the conference year
after year. Our thanks, too, to the International Foundation for Telemetering Board of
Directors for their unwavering confidence, support and help. It is our hope that you are
enriched by your experience at this conference and the other attractions of San Diego.
Lieutenant General Ronald V. Hite, as Military Deputy to the Assistant Secretary of the
Army for Research, Development and Acquisition, is the senior military advisor to the
Army Acquisition Executive and the Army Chief of Staff on all research, development and
acquisition programs and related issues. He testifies as the principal military witness for
Research, Development and Acquisition appropriations with the Congress, supervises the
Program Executive Officer system, and serves as the Director, Army Acquisition Corps.
Prior to his Pentagon tour, General Hite had successive command tours as the
Commanding General, White Sands Missile Range, New Mexico; and the United States
Army Test and Evaluation Command, Aberdeen Proving Ground, Maryland.
General Hite’s awards and decorations include the Distinguished Service Medal, three
awards of the Meritorious Service Medal, Army Commendation Medal, Meritorious Unit
Citation, Expert Infantryman’s Badge, the Parachutist Badge, Ranger Tab, Army General
Staff Badge and various other decorations for foreign service.
General Hite and his wife Millie, have two daughters, Mia and Tifani, and one
granddaughter, Kyia.
KEYNOTE LUNCHEON SPEAKER
E. D. (Ed) LARSON
LOCKHEED MARTIN AERONUTRONIC
E.D. (Ed) Larson is Senior Vice President of Engineering and Advanced Programs,
Lockheed Martin Aeronutronic, Rancho Santa Margarita, CA. The Division is part of the
Lockheed Martin Missile Group which includes Lockheed Martin Aeronutronic and
Lockheed Martin Vought Systems. Mr. Larson was appointed Sr. Vice President in August
1991.
Before joining Lockheed Martin, Larson was with Hughes Aircraft Company’s
Electro-Optical and Data Systems Group in El Segundo, CA. During his ten years at
Hughes, he served as a project manager, major program manager, and Manager of
Advanced Programs. As a project and program manager, Larson was principally involved
with the development of electro-optical targeting and pilotage systems for helicopters. His
experience includes the capture and management of numerous foreign and domestic
helicopter EO programs. Larson’s responsibilities included the structuring and
management of a major joint venture company established by Hughes and Texas
Instruments for the pursuit of the Army's Comanche Helicopter Program. In the role of
Manager, Advanced Programs, he was responsible for all airborne advanced EO programs.
Larson served for six years in the U.S. Army, including combat duty as a Scout
helicopter pilot and Unit Commander. His decorations include the Distinguished Flying
Cross. He is a 1968 graduate of the U.S. Military Academy, West Point, NY, and holds an
MS Degree in applied physics from North Dakota State University. His graduate work was
in the field of non-linear optics.
ITC '96 BLUE RIBBON PANEL
The basic purpose of the International Foundation for Telemetering is the promotion and
stimulation of technical growth in the field of telemetering and its allied arts and sciences.
All financial proceeds from the annual ITC are used to further this goal through education.
In addition to sponsoring formal programs and scholarships at several universities, the ITC
is pleased to offer a number of short courses during the conference. These courses are
described below.
BASIC CONCEPTS
Instructor: Norm Lantz, The Aerospace Corporation
Four 3-hour Sessions
This course is designed for the not so experienced engineer, technician, programmer or
manager. Topics covered include basic concepts associated with telemetering and the
signal flow from sensor to user display.
INTERMEDIATE CONCEPTS
Instructor: Jud Strock, Consultant
Four 3-hour Sessions
This course is designed for the somewhat experienced user. It includes a discussion of
technology covering the entire system including signal conditioner, encoder, radio link,
recorder, preprocessor, computer, workstations, and software. Specific topics include
1553, CCSDS Packet, the Rotary Head Recorder techniques, Open System Architectures,
and Range Communications.
ADVANCED CONCEPTS
Instructor: Dr. Frank Carden, Consultant
Three 3-hour Sessions
This course is designed for the more experienced user. Topics covered include all aspects
of the design of PCM/FM and PCM/PM telemetry systems and convolutional coding.
The basic purpose of the IFT is the promotion and stimulation of technical growth in
telemetering and its allied arts and sciences. This is accomplished through sponsorship of
technical forums, educational activities, and technical publications. The Foundation
endeavors to promote unity in the “Telemetering Community” it serves, as well as ethical
conduct and more effective effort among practicing professionals in the field.
All activities of the IFT are governed by a Board of Directors selected from industry,
science, and government. Board members are elected on the basis of their interest and
recognition in the technical or management aspects of the use or supply of telemetering
equipment and services. All are volunteers who serve with the support of their parent
companies or agencies and receive no financial reward of any nature from the IFT.
The IFT Board meets twice annually--once in conjunction with the annual ITC and, again,
approximately six months from the ITC. The Board functions as a senior executive body,
hearing committee and special assignment reports and reviewing, adjusting, and deriving
new policy as conditions dictate. A major Board function is that of fiscal management,
including the allocation of funds within the scope of the Foundation's legal purposes.
The IFT sponsors the annual International Telemetering Conference (ITC). Each annual
ITC is initially provided working funds by the IFT. The ITC management, however, plans
and budgets to make each annual conference a self-sustaining financial success. This
includes returning the initial IFT subsidy as well as a modest profit--the source of funds for
IFT activities such as its education support program. The IFT Board of Directors also
sponsors the Telemetering Standards Coordinating Committee.
In addition, a notable educational support program is carried out by the IFT. The IFT has
sponsored numerous scholarships and fellowships in Telemetry related subjects at a
number of colleges and universities since 1971. Student participation in the ITC is
promoted by the solicitation of technical papers from students with a monetary award
given for best paper at each conference.
The Foundation maintains a master mailing list of personnel active in the field of telemetry
for its own purposes. This listing includes personnel from throughout the United States, as
well as from many other countries since international participation in IFT activities is
invited and encouraged. New names and addresses are readily included (or corrected) on
the IFT mailing list by writing to:
The annual International Telemetering Conference (ITC) is the primary forum through
which the purposes of the International Foundation for Telemetering are accomplished.
This conference generally follows an established format primarily including the
presentation of technical papers and the exhibition of equipment, techniques, services, and
advanced concepts provided for the most part by the manufacturer or the supplying
company. A workshop and/or tutorials may also be offered at the conference. To complete
a user-supplier relationship, each ITC often includes displays from major test and training
ranges and other government and industrial elements whose mission needs serve to guide
manufacturers to tomorrow’s products.
Each ITC is normally two and one half days in duration. A joint “opening session” of all
conferees is generally the initial event. A speaker prominent in government, industry,
education, or science sets the keynote for the conference. In addition to the Opening
Session Speaker, the opening session hosts a supporting "Blue Ribbon Panel" of
individuals who are also prominent in their respective fields; the panel addresses the
conferees on a particular theme and is available for questions from the audience.
The purpose of this discussion is to highlight and further communicate future concepts and
equipment needs to developers and suppliers. From that point, papers are presented in a
series of individual, concurrent Technical Sessions, which are organized to allow the
attendee to choose the topic of primary interest. The Technical Sessions are created and
conducted by voluntary Technical Session Chairmen.
Each annual ITC is organized and conducted by a General Chairman and a Technical
Program Chairman selected and appointed by the IFT Board of Directors. Both chairmen
are prominent in the organizations they represent (government, industry, or academic);
they are generally well-known and command technical and managerial respect. Both have
most likely served the previous year’s conference as Vice or Deputy Chairman. In this
way, continuity between conferences is achieved and the responsible individual can
proceed with his chairman duties with increased confidence. The chairmen are supported
by a standing Conference Committee of volunteers who are essential to conference
organizational effort. Both chairmen, and for that matter all who serve in the organization
and management of each annual ITC, do so without any form of salary or financial reward.
The organizational affiliate of each individual who serves not only agrees to the
commitment of his/her time to the ITC, but also assumes the obligation of that individual’s
ITC-related expenses. This, of course, is in recognition of the technical service rendered
by the conferences.
Those companies and agencies that exhibit at the ITC do so at the cost of the floorspace
rental fee. These exhibitors thus provide the major financial support for each conference.
Although the annual chairmen are credited for successful ITCs, the exhibitors also deserve
high praise for their faithful and generous support over the years.
A major feature of each annual ITC is the proceedings (including all technical papers) of
the conference. The proceedings are available in hard-bound book or on cd-rom at the
conference registration desk.
ABOUT THE
TELEMETERING STANDARDS COORDINATION COMMITTEE
The tasks of the TSCC include the determination of which standards are in existence and
published, the review of the technical adequacy of planned and existing standards, the
consideration of the need for new standards and revisions, and the coordination of the
derivation of new standards. In all of these tasks, the TSCC’s role is to assist the agencies
whose function it is to create, issue, and maintain the standards, and to assure that a
representative viewpoint of the telemetering community is involved in the standards
process.
The TSCC was organized in 1960, under the sponsorship of the National Telemetering
Conference. For 20 years, from 1967 to 1987, it was co-sponsored by the Instrument
Society of America (ISA); it is currently under the sole sponsorship of the International
Foundation for Telemetering (IFT), sponsor of the ITC. Two meetings are held each year,
one of which is held concurrently with the ITC. The Annual Reports of the TSCC are
published in the ITC Proceedings.
The membership of the TSCC is limited to 14 full members, each of whom has an
alternate. Membership on technical subcommittees of the TSCC is open to any person in
the industry, knowledgeable and willing to contribute to the committee’s work. The 14 full
members are drawn from government activities, user organizations, and equipment vendors
in approximately equal numbers. To further ensure a representative viewpoint, all official
recommendations of the TSCC must be approved by 10 of the 14 members.
Since its beginning, a prime activity of the TSCC has been the review of standards
promulgated by the Range Commanders’ Council (RCC)--primarily those of its Inter-
Range Instrumentation Group (IRIG) and, later, those of the Telemetry Group (TG). These
standards, used within the Department of Defense, have been the major forces influencing
the development of telemetry hardware and technology during the past 30 years. In this
association, the TSCC has made a significant contribution to RCC documents in the fields
of Radio Frequency (RF) telemetry, Time Division (TD) telemetry, Frequency Modulation
(FM) telemetry, tape recording, and standard test procedures.
As the use of telemetering has become more widespread, the TSCC has assisted
international standards organizations, predominantly the Consultative Committee for Space
Data Systems (CCSDS). In this relationship, the TSCC has reviewed standards for
telemetry channel coding, packet telemetry, and telecommand.
TABLE OF CONTENTS
96-01-1 Telemetry Challenges for Ballistic Missile Testing in the Central Pacific
Jack H. Markwardt, Steve LaPoint
96-02-2 CAIS Ground Support Equipment using a Low Cost PC-Based Platform
Robert Knoebel, Albert Berdugo
96-03-4 Designing an Antenna/Pedestal for Tracking LEO and MEO Imaging Satellites
W. C. Turner
96-04-2 Design and Performance of Card Level Telemetry Receivers and Combiners
Douglas O’Cull
96-06-4 Compact Airborne Real Time Data Monitor System - Production Monitor
Grant H. Tolleth
96-07-1 Next Generation Antenna Controllers for NASA Dryden Flight Research
Center
Gaetan C. Richard, Ph. D., Laszlo Kiss
96-07-3 Low Cost, Highly Transportable, Telemetry Tracking System Featuring The
Augustin/Sullivan Distribution and Polarization, Frequency, and Space
Diversity
Peter Harwood, Christopher Wilson, Arthur Sullivan, Eugene Augustin
96-09-3 “Data Digitizing Unit” Eliminates the Need for Analog Recorders
Timothy B. Bougan
96-09-4 SAINT-EX (Systeme d'Analyse INteractif de Trace et d'EXploitation): A Test
Data Analysis Tool Based on FX+
Michel Pureur
96-12-6 Ultraviolet and Visible Imaging and Spectrographic Imaging (UVISI) Data
Processing Center (DPC)
James J. Eichert, James F. Carbary, Priscilla L. McKerracher, Lora L. Suther
96-13-4 The Challenge of Programmed Tracking Low Orbit Satellites from Mobile
Ground Stations
Dietrich Hoecht
96-16-1 National Guard Data Relay and The LAV Sensor System
June Defibaugh, Norman Anderson
96-16-2 The Family of Interoperable Range System Transceivers (FIRST)
Alan Cameron, Tony Cirineo, Karl Eggertsen
96-21-3 Desktop GPS Analyst Standardized GPS Data Processing and Analysis on a
Personal Computer
Dennis L. Hart, Johnny J. Pappas, John E. Lindegren
96-23-1 Flexible Intercom System Design for Telemetry Sites and Other Test
Environments
Timothy B. Bougan
96-23-2 Virtual Cables at the Nevada Test Site
N. S. Khalsa
96-24-3 Predicting Failures and Estimating Duration of Remaining Service Life from
Satellite Telemetry
Len Losik, Sheila Wahl, Lewis Owen
96-24-4 Getting the Telemetry Home: How Do You Get Data Back from Titan?
B. J. Mitchell
96-25-1 Doppler Extraction for a Demand Assignment Multiple Access Service for
NASA’s Space Network
Monica M. Sanchez
96-27-1 EUVE Telemetry Processing and Filtering for Autonomous Satellite Instrument
Monitoring
M. Eckert, C. Smith, F. Kronberg, F. Girouard, A. Hopkins, L. Wong,
P. Ringrose, B. Stroozas, R. F. Malina
ABSTRACT
The Ballistic Missile Defense Organization (BMDO) is developing new Theater Missile
Defense (TMD) and National Missile Defense (NMD) weapon systems to defend against
the expanding ballistic missile threat. In the arms control arena, theater ballistic missile
threats have been defined to include systems with reentry velocities up to five kilometers
per second, while strategic ballistic missile threats have reentry velocities that exceed five
kilometers per second. The development and testing of TMD systems such as the Army
Theater High Altitude Area Defense (THAAD) and the Navy Area Theater Ballistic
Missile Defense (TBMD) Lower Tier, and NMD systems such as the Army
Exoatmospheric Kill Vehicle and the Army Ground-Based Radar, pose exceptional
challenges that stem from extreme acquisition range and high telemetry data transfer rates.
Potential Central Pacific range locations include U.S. Army Kwajalein Atoll/Kwajalein
Missile Range (USAKA/KMR) and the Pacific Missile Range Facility (PMRF) with target
launches from Vandenberg Air Force Base, Wake Island, Aur Atoll, Johnston Island, and,
possibly, an airborne platform. Safety considerations for remote target launches dictate
utilization of high-data-rate, on-board instrumentation; technical performance measurement
dictates transmission of focal plane array data; and operational requirements dictate
intercepts at exoatmospheric altitudes and long slant ranges. The high gain, high data rate,
telemetry acquisition requirements, coupled with loss of the upper S-band spectrum, may
require innovative approaches to minimize electronic noise, maximize telemetry system
gain, and fully utilize the limited S-band telemetry spectrum. The paper will address the
emerging requirements and will explore the telemetry design trade space.
KEYWORDS
The BMDO is developing a family of BMD weapon systems to defend against the existing
and expanding ballistic missile threat. The tactical ballistic missile threat includes systems
with range capabilities less than 5000 kilometers. Central Pacific testing for the PATRIOT
Advanced Capability 3 (PAC-3), Theater High Altitude Area Defense (THAAD), Navy
Area Theater Ballistic Missile Defense (TBMD) (Lower Tier), and NMD systems requires
development of improved telemetry acquisition, tracking, recording, and data processing
systems. The design requirements and trade space include increased link margins,
high-bandwidth receivers, high-data-rate recorders, receiver cooling, data compression, binary
and M-ary phase shift keying, and acquisition of transportable, airborne, or additional
telemetry acquisition systems.
Projected Pacific BMD flight tests include testing of the PAC-3, THAAD, Navy Area
TBMD (Lower Tier), TMD System Integration Tests, and NMD programs. The PAC-3
program will conduct a limited number of tests, including support for TMD System
Integration Tests, at USAKA/KMR. During its Engineering and Manufacturing
Development (EMD) phase, the THAAD program plans to conduct the majority of its
testing at USAKA/KMR. The Navy plans to conduct Area TBMD (Lower Tier) testing in
the restricted areas northwest of the PMRF (on the island of Kauai, Hawaii). The NMD
Deployment Readiness Program will launch targets from Western Space and Missile
Center in California to USAKA/KMR and will launch interceptors from USAKA/KMR.
While the maximum intercept altitudes and slant ranges for BMD intercepts are sensitive
and may be classified, prudent telemetry design must consider antennas placed at set-back
locations and plan for initial telemetry reception long before intercept. The designer should
expect initial telemetry reception to occur at or before the target vehicle reaches apogee.
Consequently, initial telemetry reception from targets for the Army PAC-3 and Navy Area
TBMD Lower Tier systems (the latter will utilize the Standard Missile 2 Block IVA
interceptor - SM 2 Blk IVA) may be required at slant ranges up to 400 kilometers. The
Army THAAD intercept zone will extend considerably further and higher. Telemetry
acquisition and tracking systems to support THAAD testing should be able to collect target
telemetry prior to target apogee (at altitudes near 400 kilometers) and at slant ranges up to
1500 kilometers. NMD target apogees will exceed 1000 kilometers, acquisition ranges of
2500 kilometers are anticipated, and the intercept may occur above 500 kilometers altitude
at slant ranges exceeding 1300 kilometers.
BMD PROGRAM INSTRUMENTATION REQUIREMENTS
Many TMD targets will carry two independent inertial measurement units (IMUs) to
provide cooperative time-space-position-information data to range safety systems and a
fiber-optic-mesh impact detection system called the Photonic Hit Indicator (PHI). The
IMU and PHI data will be relayed via S-band telemetry. Some TMD targets will also carry
a RF miss distance indicator (MDI) and a Global Positioning System (GPS) translator to
receive and relay GPS signals via S-band telemetry to ground-based translator processors.
The NMD targets will carry a GPS translator and may also carry the PHI and MDI. NMD
target vehicles will deploy reentry vehicles and may also deploy reentry vehicle replicas,
penaids, and other target decoys. Instrumentation for the replicas, penaids, and decoys
may include an instrumentation and S-band telemetry system called the Light Weight
Instrumentation System (LWIS).
The interceptor vehicles will transmit IMU and vehicle health/status (H&S) telemetry
signals. The Navy SM-2 Blk IVA, THAAD, and NMD interceptors will also transmit focal
plane array (FPA) data. The NMD interceptor will carry and the THAAD EMD
interceptor may carry a GPS translator.
Table 1 describes the anticipated target and interceptor instrumentation and provides
estimates for the associated S-band telemetry data rates and bandwidth requirements.
Table 2 describes the most stressing telemetry situation for potential BMD instrumentation:
Dual frequency (L1/L2 - Precision Code) translators with ground-based carrier cycle and
ambiguity processing have been proposed to provide interceptor-to-target relative range
with 2-3 centimeter accuracy. The higher focal plane array data rate has been cited as an
upper limit for SM-2 BLK IVA telemetry.
Carlos 9 meter telemetry tracking antenna, 43.6 dB gain, noise temp 350 K
Carlos 7 meter telemetry tracking antenna, 42.3 dB gain, noise temp 350 K
Roi-Namur 5.5 meter telemetry tracking antenna, 39.6 dB gain, noise temp 350 K
4 Carlos 3 meter telemetry tracking antennas, 34.6 dB gain, noise temp 400 K
Gagan 3 meter telemetry tracking antenna, 34.6 dB gain, noise temp 400 K
Roi-Namur 3 meter telemetry tracking antenna, 34.6 dB gain, noise temp 400 K
Notes: 1. The 8 Carlos analog recorders can be linked to 4 high density digital
formatters to record up to 32 Mb/s (for 15 seconds)
2. Of the 4 analog recorders at Roi, 2 can be converted to High Density
Digital Recorders (HDDR) to record up to 11.7 Mbps.
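As a quick sanity check on these assets, the standard receive figure of merit G/T (antenna gain minus 10 log of system noise temperature, in dB/K) can be computed from the gains and noise temperatures listed above. A minimal Python sketch, assuming the listed temperatures are total system noise temperatures:

```python
import math

def g_over_t(gain_db: float, system_noise_temp_k: float) -> float:
    """Receive figure of merit G/T in dB/K: antenna gain minus 10*log10(T_sys)."""
    return gain_db - 10.0 * math.log10(system_noise_temp_k)

# Gains (dB) and noise temperatures (K) from the asset list above
assets = {
    "Carlos 9 m": (43.6, 350.0),
    "Carlos 7 m": (42.3, 350.0),
    "3 m dishes": (34.6, 400.0),
}

for name, (gain, temp) in assets.items():
    print(f"{name}: G/T = {g_over_t(gain, temp):5.2f} dB/K")
```

The 9 m system works out to roughly 18 dB/K, which is why the large fixed dishes are the high-gain assets relied upon for long-range acquisition.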
PMRF: The most capable PMRF telemetry systems can reliably acquire telemetry signals
at TMD acquisition ranges. The existing receivers and recorders, however, are relatively
narrow-band, lower-data-rate equipment. PMRF telemetry acquisition assets are
summarized below:
The telemetry loads of ballistic missile defense testing will tax the remaining S-band
telemetry spectrum. With the sale of the upper S-band spectrum, S-band telemetry is now
limited to the band between 2200 and 2290 MHz. Practical target and interceptor
transmitters are limited to 10 watts, imposing a transmit power limit on telemetry design.
Atmospheric losses, albeit small, can be expected for long range telemetry acquisition.
Connector and cable losses must be anticipated at the transmitter and receiver. Free space
losses, FSL, are given by Eq 1:
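The formula itself did not survive in this copy; the standard free-space loss relation (frequency in MHz, range in km) presumably intended can be sketched as:

```python
import math

# Standard free-space path loss relation, with frequency in MHz and
# range in km; the example numbers below are illustrative, not from
# the paper's tables.
def free_space_loss_db(freq_mhz: float, range_km: float) -> float:
    """Free space loss (FSL) in dB."""
    return 32.45 + 20.0 * math.log10(freq_mhz) + 20.0 * math.log10(range_km)

# S-band telemetry near 2250 MHz over a 1,000 km acquisition range
loss = free_space_loss_db(2250.0, 1000.0)  # roughly 159.5 dB
```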
One-on-One TMD Intercept Scenarios: Given the high gain telemetry reception
capabilities of USAKA/KMR and PMRF, no serious telemetry acquisition problems arise
for scenarios involving the launch of a single TMD target and interceptor. However, the
existing receiver equipment is bandwidth-limited and data record rates are insufficient.
Investments will be required to augment the existing receiver and recorder capabilities at
USAKA/KMR and PMRF.
NOTE: Requirements in italics exceed the best PMRF capabilities and those in bold
print exceed the best USAKA/KMR capabilities.
NMD One-On-One Scenario Design Trades: Reliable FPA telemetry reception cannot be
assured from the fixed nine meter system. A mobile, 9 meter, telemetry acquisition
platform, stationed beneath the intercept point, would suffice. Receiver cooling for the
current fixed systems appears to be one solution to the NMD gain requirements. The
design trade space reveals that receivers with liquid nitrogen cooling (at 177 K) could
easily provide acceptable gain margins for the long range NMD intercept scenario.
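The cooling trade can be sketched as follows; this is an idealized check, assuming the quoted temperatures are system noise temperatures and ignoring other noise contributions:

```python
import math

# Lowering the receive system noise temperature from the 350 K of the
# fixed 9 meter systems to the cited 177 K improves G/T, and hence link
# margin, by 10*log10(T_warm / T_cold) dB (idealized sketch).
def cooling_gain_db(t_warm_k: float, t_cold_k: float) -> float:
    return 10.0 * math.log10(t_warm_k / t_cold_k)

gain = cooling_gain_db(350.0, 177.0)  # close to 3 dB
```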
Data compression offers another alternative to address the high acquisition gain of the
NMD FPA data channels. A 256 x 256 pixel FPA, sampled at 40 Hz, with 4 bits per pixel
produces the cited 10 MB/s. Intelligent data compression would transmit only active
pixels. Compression that utilized a 32 bit pixel address (16 across by 16 down), sampling
at 40 Hz, with 4 data bits per pixel, could relay data on as many as 360 active pixels while
requiring no more telemetry gain than the IMU data channel (a 2 MB/s channel). Data
compression of GPS translator channels would be a more difficult proposition. Data
compression would shift the demand from range telemetry assets to the BMD interceptor
vehicle. An application specific integrated circuit (ASIC) would be required to provide
interceptor FPA data compression.
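The rate arithmetic in the paragraph above can be checked directly; all figures come from the text itself:

```python
# A full 256 x 256 frame at 4 bits/pixel sampled at 40 Hz, versus an
# address-plus-data compression scheme that sends only active pixels
# (32 address bits plus 4 data bits per pixel, 360 active pixels).
FRAME_RATE_HZ = 40
RAW_RATE_BPS = 256 * 256 * 4 * FRAME_RATE_HZ           # about 10.5 Mb/s
ACTIVE_PIXELS = 360
COMPRESSED_RATE_BPS = ACTIVE_PIXELS * (32 + 4) * FRAME_RATE_HZ
# The compressed stream stays well under the 2 Mb/s IMU channel rate.
```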
Error detection and correction offers a third alternative for NMD FPA data channels.
Current digital error detection and correction coding techniques add error correction
coding to the transmitted message format at the expense of a slightly increased data
transmission rate. Insertion of automatic error detection and correction circuitry at the
receiver will increase the effective signal-to-noise ratio, with an effect equivalent to an
additional acquisition gain of 3 to 6 dB, thereby reducing the total required
acquisition gain.
Multiple Shot TMD Intercept Scenarios: Congress has mandated multiple shot
engagements for TMD systems during their EMD phase. These scenarios would involve
more than two interceptors and more than two targets. The scenarios do not generate
overly demanding gain margins at TMD telemetry acquisition ranges; however, all existing
USAKA/KMR telemetry assets would be required to provide one-on-one tracking without
backup. Existing PMRF assets would not be able to provide one-on-one coverage for a
three-on-three scenario. Transportable, airborne, or additional range telemetry systems will
be required to backup coverage at USAKA/KMR and one-on-one coverage at PMRF.
TMD Multiple Shot Scenario Design Trades: TMD scenarios are not faced with the
telemetry link margins of concern for NMD testing, as telemetry acquisition will occur at
much reduced ranges. As a result, an additional trade space is available for TMD telemetry
acquisition. The pulse code modulation that is prevalent in S-band telemetry could be
replaced with binary or multiple (m-ary) phase shift keying. A shift to binary phase shift
keying, with the loss of 3 dB signal margin, would double the number of receivable
channels within the available S-band spectrum. Each further doubling would incur another
3 dB signal margin loss. The original signals would be recovered via matched filters
inserted between the existing antennas and receivers.
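Taking the stated flat 3 dB cost per channel doubling at face value (an approximation, not a full link analysis), the trade can be sketched as:

```python
# Channel count versus cumulative signal-margin loss for the doubling
# trade described above: each doubling of channels within the fixed
# 2200-2290 MHz allocation costs about 3 dB of margin.
def channels_and_margin(doublings: int, base_channels: int = 1):
    channels = base_channels * 2 ** doublings
    margin_loss_db = 3.0 * doublings
    return channels, margin_loss_db

# one doubling: twice the channels for 3 dB; three doublings: 8x for 9 dB
```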
CONCLUSION
The BMDO, the U.S. Army, and the U.S. Navy will jointly fund the development of
telemetry systems to meet critical telemetry instrumentation requirements for testing
ballistic missile defenses in the Central Pacific. Ballistic missile defense testing demands
will spur procurement of high-gain telemetry systems with broad bandwidth receivers and
high data rate recording systems. The design solution space includes the following options:
Receiver Cooling
Data Compression
Automatic error detection and correction coding/receivers
ACKNOWLEDGMENTS
The authors would like to acknowledge the assistance from Messrs. Dave
Nekomoto and Jack McCreary. The views expressed in this article are those of the authors
and do not reflect the official policy or position of the Department of Defense or the U.S.
Government.
REFERENCES
U.S. Army Space and Strategic Defense Command, "Range Users Manual, Kwajalein
Missile Range," U.S. Army Kwajalein Atoll, May 1992.
NOMENCLATURE
Barbara Lam
(818)306-6321 [email protected]
ABSTRACT
This paper presents a new architecture of the end-to-end ground system to reduce overall
mission support costs. The present ground system of the Jet Propulsion Laboratory (JPL)
is costly to operate, maintain, deploy, reproduce, and document. In the present climate of
shrinking NASA budgets, this proposed architecture takes on added importance as it will
dramatically reduce all of the above costs. Currently, the ground support functions (i.e.,
receiver, tracking, ranging, telemetry, command, monitor and control) are distributed
among several subsystems that are housed in individual rack-mounted chassis. These
subsystems can be integrated into one portable laptop system using established
MultiChip Module (MCM) packaging technology. The large scale integration of
subsystems into a small portable system will greatly reduce operations, maintenance and
reproduction costs. Several of the subsystems can be implemented using Commercial Off-
The-Shelf (COTS) products further decreasing non-recurring engineering costs. The
inherent portability of the system will open up new ways for using the ground system at
the “point-of-use” site as opposed to maintaining several large centralized stations. This
eliminates the propagation delay of the data to the Principal Investigator (PI), enabling
the capture of data in real-time and performing multiple tasks concurrently from any
location in the world. Sample applications are to use the portable ground system in
remote areas or mobile vessels for real-time correlation of satellite data with earth-
bound instruments; thus, allowing near real-time feedback and control of scientific
instruments. This end-to-end portable ground system will undoubtedly create
opportunities for better scientific observation and data acquisition.
KEY WORDS
Receiver, tracking, ranging, telemetry, command, monitor and control, real time, multiple
tasks, MultiChip Module (MCM), portable laptop system, and PCMCIA.
INTRODUCTION
Presently, the end-to-end ground functions (i.e., receiver, tracking, ranging, telemetry,
command, monitor and control) of the Jet Propulsion Laboratory’s Deep Space Network
(DSN) are distributed among several subsystems that are housed in individual rack-
mounted chassis as shown in Figure 1. Many of the subsystems have high operational (i.e.,
labor) costs and maintenance overhead mainly due to the support of outdated technologies.
Some of the ground functions, such as telemetry processing, are duplicated by two
independent systems: the Deep Space Communication Complex (DSCC) [2] and the
Advanced Multi-Mission Operations System (AMMOS) [4]. This leads to unnecessary
duplication and hence higher costs.
[Figure 1. Present end-to-end DSN ground system. Left, the Deep Space Communication
Complex (DSCC): antenna, antenna control, receivers, microwave, antenna mechanics,
tracking, ranging, telemetry processing, and command subsystems in rack-mounted
chassis. Right, the Advanced Multi-Mission Operations System (AMMOS): project
database, Planetary Data System, telemetry, network operation control center, telemetry
simulation, and external system interfaces, serving the PI, scientist, and university. Net
result: high operations, maintenance, deployment, reproduction, and documentation
costs.]
AMMOS has the telemetry processing, telemetry simulation, and external interfaces
integrated into a laptop. However, all of these functions are implemented in software;
therefore, the maximum data rates are on the order of 300 kbits/sec [4], which is not high
enough to support most Earth Orbiter missions. AMMOS also lacks the other ground
functions (e.g., tracking, ranging, command, network operation control, project operation
control, and central processing) that would make it an end-to-end ground system. The
present AMMOS system also does not have a Viterbi decoder. Due to the computational
complexity of the Viterbi decoding algorithm, software cannot implement such a function
at the data rates many missions require.
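The cost argument can be illustrated with rough figures; the constraint length and operation counts below are illustrative assumptions, not numbers from the paper:

```python
# A constraint-length-K Viterbi decoder tracks 2**(K-1) trellis states
# and performs roughly two add-compare-select updates per state for
# every received bit, so the workload scales directly with data rate.
def viterbi_ops_per_sec(constraint_length: int, bit_rate_bps: float) -> float:
    states = 2 ** (constraint_length - 1)
    return 2 * states * bit_rate_bps

# Even at the 300 kbits/sec software ceiling cited above, a common K=7
# code already demands tens of millions of operations per second.
ops = viterbi_ops_per_sec(7, 300e3)
```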
In order to reduce cost, JPL is presently automating the operation of the ground system,
using software to reduce the number of operators, through a project known as the Network
Control Project (NCP) [2]. However, the maintenance, deployment, reproduction, and
documentation costs have not been addressed and remain very high because NCP retains
most of the old proprietary subsystems at the DSCC.
The AMMOS concept can be expanded to incorporate more of the ground functions into
the laptop by taking advantage of current hardware technologies to increase the
performance of the laptop. This architectural concept will be discussed in the next section.
After that, the key enabling off-the-shelf technologies will be reviewed. Finally,
conclusions and future work will be presented.
The proposed architecture of a portable ground system is described in this section. This
concept extends the present AMMOS architecture, which only performs telemetry
simulation and processing. The plan is to integrate other functions of the end-to-end
ground system that are presently not available in the AMMOS system into a portable
laptop. Hence, the new system will have the following capabilities: tracking, ranging,
command, monitor and control, central processing, network operation control, telemetry
simulation and processing, and project operation control in one compact system as shown
in Figure 2. Thus, the ground communication facility portion of the DSCC as shown in the
DSN ground system of today (Figure 1) will no longer be necessary. Also, the duplication
of the telemetry simulation and processing functions in the DSCC and AMMOS will be
eliminated.
In order to achieve the required data rates for Earth Orbiter missions and provide better
integrated support for Deep Space missions, the new system will be implemented in
hardware. To promote modularity, each of the subsystems' functions (i.e., telemetry,
tracking, ranging, command, etc.) will be implemented on separate cards, taking
advantage of MCM packaging technology to reduce the size of the integrated circuitry
even further. The cards will use the Personal Computer Memory Card International
Association (PCMCIA) standard to promote interoperability.
[Figure 2. Proposed end-to-end ground architecture. The DSCC (or any central processing
center) retains the antenna, antenna control, receivers (i.e., BVR), transmitters, microwave
antenna mechanics, tracking, ranging, command, and data storage. The portable laptop,
reachable over a LAN, WAN, or the WWW from JPL, integrates telemetry processing,
telemetry simulation, monitor and control, network operation control, project operation
control, data systems operation, and external system interfaces for the project operation
control center, PI, scientist, and university, with access to the project database and
Planetary Data System. Net result: dramatic reduction in operations, maintenance,
deployment, reproduction, and documentation costs; elimination of the functions
duplicated between the DSCC and AMMOS; and simplification of hardware and
software.]
For Deep Space and High Earth Orbiter mission support, the future plan of the proposed
concept is to keep the antennas, antenna controllers, receivers, transmitters, microwave
antenna mechanics, and storage or buffers for the data at the DSCC as shown in Figure 2.
Thus, the DSCC will still be required for the support of Deep Space and High Earth
Orbiter missions, and can concentrate on the unique functions of these missions. All of
the other ground functions will be integrated into a
portable laptop. Due to the portability of the new system, the proposed ground stations can
be located anywhere in the world (i.e., DSCC, JPL, University, PI’s office, remote site, or
mobile vessel) as shown in Figure 3. New low-cost missions (e.g., Millennium and
Discovery) can take the portable ground station and hire several graduate students to
operate and maintain the portable system, thus, reducing cost even further.
For Low Earth Orbiter support, this portable ground system architecture can be further
extended to include a built-in receiver with proper shielding to avoid interference between
the analog and digital signals (Figure 3). Such a system can retrieve data from a small
antenna (i.e., less than 11 m). Thus, the complete end-to-end ground system is highly
portable. The need to have a permanent site for Low Earth Orbiter ground system support
will not be necessary in the future. Nevertheless, data can still be retrieved using the large
70 m or 34 m antennas located at the station which can then be stored into a local
database, and subsequently be accessed via wireless technology (Figure 3).
[Figure 3. Overall Configuration for the Proposed End-to-End Ground Architecture for
Deep Space, High Earth Orbiter, and Low Earth Orbiter Mission Support. For Deep Space
and High Earth Orbiter mission support, the station delivers symbols/sec or bits/sec to the
proposed ground system, connected over a LAN or WAN to the PI, scientist, and
university; network support covers Millennium and Discovery missions. For Low Earth
Orbiter mission support, a small antenna with built-in receiver and a commercial wireless
database provide global coverage via the WWW, enabling the PI to perform multiple
tasks concurrently from any location.]
There are several advantages to the proposed architecture over and above creating a
unified ground system and reducing costs. These benefits include: multiple tasking,
closed-loop control, and arraying.
Such a system will allow the PI or scientist to perform multiple tasks concurrently from
any location in the world. For example, the PI or scientist can be performing a Seafloor
Geodesy experiment using the Global Positioning System (GPS) while tracking a
spacecraft and processing telemetry data of a particular mission at the same time. Thus,
cost is reduced even further since only one operator is necessary to perform multiple
functions.
Closed-loop control of the various ground functions has the advantage of enabling the PI
or scientist to process the telemetry data in real-time and correlate the data with earth-
bound instruments such as GPS and allow near real-time feedback and control of scientific
instruments on the spacecraft.
Many small “umbrella-like” antennas can be arrayed together to achieve the performance
of a large antenna (e.g., 70 m or 34 m). For instance, Very Long Baseline Interferometry
(VLBI) can be performed by the new portable system.
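A back-of-envelope arraying sketch, assuming gain scales with aperture area and the combining is lossless (an illustration, not the paper's analysis):

```python
import math

# N identical small dishes of diameter d, combined coherently, match
# one large dish of diameter D when their total aperture area is at
# least as large: N * d**2 >= D**2.
def dishes_needed(large_diameter_m: float, small_diameter_m: float) -> int:
    return math.ceil((large_diameter_m / small_diameter_m) ** 2)

# e.g., matching a 34 m antenna with hypothetical 3 m umbrella-like dishes
n = dishes_needed(34.0, 3.0)
```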
MCM provides a structure for repackaging two or more Integrated Circuits (ICs) together
into one chip. For example, off-the-shelf ICs such as Qualcomm's Viterbi decoder,
Advanced Hardware Architectures' Reed-Solomon decoder, and several Field
Programmable Gate Arrays (FPGAs) to control the hardware can be integrated into one
MCM package, enclosed in a single card, to perform the telemetry processing functions.
A new architecture for ground systems for space mission support is proposed. The
proposal expands on the present AMMOS architecture by moving key functions into
hardware. Industry standard interfaces will be used to promote interoperability using the
PCMCIA bus. Various subsystems can be combined into a small form factor to promote
portability and modularity by using MCM packaging. Aside from achieving the primary
goal of creating a unified ground system to lower costs, the proposed architecture will
open new vistas in the areas of multitasking, closed-loop command and control, and
arraying various antennas together.
In the present climate of shrinking NASA budgets, this proposed architecture of integrating
most of the ground functions into a portable laptop takes on added importance as it will
dramatically reduce all of the above costs by satisfying the following objectives:
2. Combine some of the common ground functions to simplify the software and
hardware.
6. Open new ways for using the ground system at the “point-of-use” site as
opposed to maintaining several large centralized stations.
9. Perform multiple tasks concurrently from any location in the world via
wireless technology.
10. Create opportunities for better scientific observation and data acquisition.
The present technology infrastructure will allow many PIs or scientists to work together
analyzing their data and findings concurrently in near real-time. Thus, the extracted
information from the different channels can be correlated together and used more
effectively than data from a single channel.
The inherent portability of the system will open up new ways for using the ground system
at the “point-of-use” site as opposed to maintaining several large centralized stations. This
eliminates the propagation delay of the data to the PI or scientist, enabling the capture of
data in real-time and performing multiple tasks concurrently from any location in the
world. Sample applications are to use the portable ground system in remote areas or
mobile vessels for real-time correlation of satellite data with earth-bound instruments; thus,
allowing near real-time feedback and control of scientific instruments. This end-to-end
portable ground system will undoubtedly create opportunities for better scientific
observation and data acquisition.
The ideas presented in this paper are only in a conceptual phase. A feasibility study of the
end-to-end ground system is necessary to develop the ideas further.
ACKNOWLEDGMENTS
The preliminary research described in this paper was funded by Robert J. Cesarone and
Richard C. Coffin through the Telecommunications and Mission Operations Directorate.
Fruitful technical discussions with Wallace S. Tai, Richard P. Mathison, Michael J.
Rodrigues, John C. Peterson, George P. Textor, Joseph R. Kahr, and the late Ek Davis
are gratefully acknowledged.
REFERENCES
Ed Gladney
Lockheed Martin Telemetry & Instrumentation
ABSTRACT
NASA and Lockheed Martin Telemetry & Instrumentation have jointly developed a new
data acquisition system for the Space Shuttle program. The system incorporates new
technologies which will greatly reduce manpower requirements by automating many of the
functions necessary to prepare the data acquisition system for each shuttle launch. This
new system, the Automated Data Acquisition System (ADAS), is capable of configuring
itself for each measurement without operator intervention. The key components of the
ADAS are the Universal Signal Conditioning Amplifier (USCA), the Transducer
Electronic Data Sheet (TEDS), and the Data Acquisition System (DAS 450). The ADAS is
currently being delivered and installed at Kennedy Space Center. NASA and Telemetry &
Instrumentation are actively pursuing commercialization of the ADAS and its associated
products which will be available during 1996.
KEYWORDS
INTRODUCTION
Over the last few years, NASA has been challenged with dramatic budget cuts. Of primary
concern were the mounting maintenance costs of the Shuttle program. NASA management
thus called for innovative ideas that would reduce costs while retaining performance and
safety. The challenge was answered by NASA’s Transducers and Data Acquisition
Section engineers who developed the concept of a Universal Signal Conditioning
Amplifier (USCA) and its associated Transducer Electronic Data Sheet (TEDS). This idea
was expanded to become the Automated Data Acquisition System (ADAS).
The ADAS gives NASA shuttle flight test and safety engineers reliable and accurate real-
time data at greatly reduced maintenance costs. NASA studies put the savings at 20,000
man-hours per year [1]. To understand how this remarkable savings is achieved, some
history is necessary.
BACKGROUND
While most of the system is fixed, changes and upgrades are always being made between
launches. This spawns many changes in the measurement system. In addition, normal
calibration and maintenance of sensors and transducers is performed on a daily basis. All
this contributes to a dynamic system which requires a lot of attention and maintenance. To
conceptualize the problem, one need only look at the launch platform with the shuttle
installed (see Figure 1). Transducers and sensors are distributed throughout, requiring long
cables to the multiplexing equipment housed at the bottom of the launch pad. Each
transducer requires specialized and matched signal conditioning. Subsequently, a great
deal of maintenance time is spent matching signal conditioners to transducers and tracing
down failures in the system. As might be expected, manually tracing problems and their
source is a very labor-intensive effort, which is precisely why NASA engineers focused
their efforts on reducing maintenance labor.
Careful analysis proved that many hours were spent calibrating transducers, certifying
signal conditioners, matching transducers to signal conditioners, and identifying and
replacing channels. NASA engineers thus developed the concept of a “universal” signal
conditioner that would handle all transducer types. Such a device would eliminate the
matching of signal conditioners to transducers. Also, if the channel and signal could
automatically be identified, the device would eliminate more costs.
Figure 1. The Automated Data Acquisition System (ADAS)
The challenge was to design a signal conditioner that could read the information in the
TEDS and automatically configure its excitation, gain, linearity, etc. to each type of
transducer. The resulting design became known as the Universal Signal Conditioning
Amplifier (USCA). It soon became clear, however, that in addition to being universally
adaptable to all types of transducers, the amplifier had to withstand the open salt air and
fluctuating temperatures at the launch pad. NASA engineers therefore developed a
ruggedized version of the USCA as shown in Figure 3.
A third required component of NASA’s new approach came in the form of an advanced
data acquisition system. The system was conceived to provide multiplexing, control and
setup, database management, archiving, display, and processing of information in real
time. As the approach unfolded, this new Data Acquisition System, the DAS 450, would
Figure 2A. TEDS Circuit Card
TEDS
Lockheed Martin Telemetry & Instrumentation has been working with NASA on the
development of the ADAS since 1994. The final version of the USCA, TEDS, and ADAS
will be delivered by Telemetry & Instrumentation during 1996. The development has been
jointly funded by NASA, Lockheed Martin Telemetry & Instrumentation, and the State of
Florida Technological Research and Development Authority as part of a NASA technology
transfer initiative. Telemetry & Instrumentation is currently designing a commercial
version of the ADAS in addition to supplying the ruggedized USCA and ADAS to NASA.
Telemetry & Instrumentation plans to make commercial versions of the products available
during 1996.
ADAS OVERVIEW
The main elements of the ADAS are shown in Figure 4. An explanation of each is given in
Table 1.
THE UNIVERSAL SIGNAL CONDITIONING AMPLIFIER
A number of new technologies have been incorporated into the Automated Data
Acquisition System, but of them all the star is the Universal Signal Conditioning Amplifier
(USCA). The USCA is capable of automatically programming itself to match any
transducer in the system without operator intervention. It can also be programmed
remotely through the ADAS. This key capability reduces setup and configuration time
from hours to seconds.
[Figure 4. Main elements of the ADAS: measurement ID, time of arrival, auto setup
based on transducer characterization, setup override, transducer status, calibration
laboratory, system database, data processing chain, data archiving chain, and data
display definitions.]
The USCA, shown in Figure 5, contains multiple programmable circuits which are selected
according to the contents of the Transducer Electronic Data Sheet (TEDS). It consists of:
• Multiple programmable gain amplifiers (0.25 to 2,000)
• Multiple programmable anti-alias filters
• Multiple programmable D/A converters for excitation
• A programmable 16-bit A/D converter
• A Digital Signal Processor for programmable filtering, linearization, and
decimation
• DC to DC converters for power and isolation
• Isolated outputs (analog and digital)
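The TEDS-driven selection of these programmable circuits can be sketched as follows; the field names and mapping rule are illustrative assumptions, not the actual TEDS layout or USCA firmware logic:

```python
# Hypothetical TEDS record and a toy auto-configuration routine.
teds = {
    "transducer_type": "strain_gauge",
    "excitation_v": 10.0,        # required excitation voltage
    "full_scale_mv": 30.0,       # transducer full-scale output
    "calibration_date": "1996-01-15",
    "serial_number": "SN-0042",
}

def configure_usca(teds_record: dict) -> dict:
    # Pick a programmable gain (0.25 to 2,000, per the list above) that
    # maps the transducer's full-scale output onto a 10 V A/D range.
    target_full_scale_v = 10.0
    gain = min(2000.0,
               target_full_scale_v / (teds_record["full_scale_mv"] / 1000.0))
    return {
        "gain": gain,
        "excitation_v": teds_record["excitation_v"],
        "anti_alias_filter": "on",
    }

settings = configure_usca(teds)
```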
Table 1. Data Acquisition Elements of the ADAS
[Figure 5. USCA block diagram: TEDS switch, excitation switches (EXC+/–), sense
switch, PGA / PLPF / PGA signal chain, 16-bit A/D converter and DSP, PIC controller,
excitation and analog-output DACs, analog test outputs, and high-speed data,
command & status, and analog interfaces to the UIM.]
Obviously, the TEDS is another key component of the ADAS. In addition to setup data,
the TEDS contains other information important to the system, including calibration dates,
serial numbers, measurement types, and any other information the user deems important.
Once the USCA detects the transducer and configures itself, it notifies the DAS 450. The
DAS 450 then reads the USCA for transducer information and measurement ID and passes
this information to the database in the application software. The application software in
turn updates the database with new parameter identifiers. These identifier tags are returned
to the DAS 450 chassis. The USCA is then enabled to read measurement data. This
operation is diagrammed in Figure 6.
[Figure 6. ADAS auto-configuration sequence: transducer with TEDS, USCA, DAS 450
chassis, data analysis/processing/display, ACA, and system database on a Sun
workstation with RAID disk.]
1. Transducer with TEDS detected by USCA
2. USCA configures based on TEDS data
3. DAS 450 detects USCA change
4. DAS 450 reads USCA for measurement number and transducer info
5. DAS 450 status reflects USCA's presence to ACA
6. ACA processing tag IDs requested and assigned to measurement number; database
updated
7. Tag IDs returned to DAS 450/UIM/channel
8. High-speed data enabled
9. Identified high-speed digital data routed to archival, processing, and data display
NASA and Lockheed Martin Telemetry & Instrumentation have developed an Automated
Data Acquisition System that reduces maintenance requirements on the Space Shuttle
launch complex at Kennedy Space Center. This solution was achieved through technology
improvements in signal conditioning which resulted in a self-configuring and universal
signal conditioner capable of automatically recognizing any type of transducer and of
programming itself to match individual transducer specifications. This technology is being
commercialized and will be available to the industry during 1996.
REFERENCES
1. Larson, William E., “Automated Data Acquisition System for Kennedy Space Center’s
Launch Complex 39,” 1995.
2. Palumbo, Rob and Payne, David, “Automatic Data Acquisition Setup and Control,”
ISA Symposium, San Diego, CA, May 1996.
3. Becker, Dean; Cecil, Jim; Halberg, Carl; Medelius, Pedro Dr., “The Universal Signal
Conditioning Amplifier,” ITC Proceedings, Volume XXX, pp. 626-633, 1994.
EASTERN RANGE TITAN IV/CENTAUR-TDRSS
OPERATIONAL COMPATIBILITY TESTING
Chris Bocchino
William Hamilton
ABSTRACT
The future of range operations in the area of expendable launch vehicle (ELV) support is
unquestionably headed in the direction of space-based rather than land- or air-based assets
for such functions as metric tracking or telemetry data collection. To this end, an effort
was recently completed by the Air Force’s Eastern Range (ER) to certify NASA’s
Tracking and Data Relay Satellite System (TDRSS) as a viable and operational asset to be
used for telemetry coverage during future Titan IV/Centaur launches. The test plan
developed to demonstrate this capability consisted of three parts: 1) a bit error rate test; 2)
a bit-by-bit compare of data recorded via conventional means vice the TDRSS network
while the vehicle was radiating in a fixed position from the pad; and 3) an in-flight
demonstration to ensure positive radio frequency (RF) link and usable data during critical
periods of telemetry collection. The subsequent approval by the Air Force of this approach
allows future launch vehicle contractors a relatively inexpensive and reliable means of
telemetry data collection even when launch trajectories are out of sight of land-based
assets or when land- or aircraft-based assets are not available for support.
KEY WORDS
Telemetry, RF link analysis, TDRSS, Range operations, Bit Error Rate (BER) analysis
INTRODUCTION
Traditional Eastern Range telemetry coverage has been provided by a network of ground
stations stretching from Florida to the mid-Atlantic, as well as several ground- and
air-based off-range assets. Advanced Range Instrumentation Aircraft (ARIA) are staged to
provide telemetry data collection when vehicle trajectories carry the target out of sight of
land-based telemetry antennas. Due to the rising cost of ARIA support and maintenance, in
addition to concerns as to limited availability due to weather conditions, staging bases,
restricted air space, and limited bandwidth for real-time data relay, several ELV customers
professed the desire to find a less expensive and less restrictive data collection source
while maintaining their current launch trajectories. In early 1994 a series of informal tests
were initiated by the 45th Range Squadron (45 RANS), the 5th Space Launch Squadron
(5 SLS), NASA, and Martin Marietta (now Lockheed Martin), at the ER’s Cape Canaveral
Air Station (CCAS), to test the electrical compatibility between Titan IV/Centaur upper
stage telemetry and NASA’s TDRS system. The results indicated that utilization of
TDRSS for the reception and retransmission of vehicle S-band signal, subsequent
decoding by the ground receiving station at White Sands, NM, transmission of the data
through the associated NASA communication network, and reception by the customer at
the ER was feasible and achievable. In October 1994 the ER technical contractor,
Computer Sciences Raytheon (CSR), began to develop a formal operational test plan
which would ensure that data received at the ER would meet all program requirements for
bit error quality, and that tracking functions provided by TDRSS would be adequate to
ensure reception of data during desired periods. Note that these periods were
well after Range Safety requirements had been fulfilled, so no Eastern Range tracking
accuracy requirements applied to this situation. The test plan was approved by the Air
Force/CSR/Range customer community in December 1994 and formal tests began shortly
thereafter.
The formal Titan IV/Centaur-TDRSS compatibility test plan consisted of three individual
tests: 1) a bit error rate test performed to examine potential degradation in signal-to-noise
ratio due to the complex data path associated with the TDRSS network; 2) a bit-for-bit and
word-for-word compare between Centaur data received via direct RF at Tel-4, the ER
primary telemetry station, and data received over Tel-4 circuits via the TDRSS network;
and 3) an in-flight test to demonstrate TDRSS tracking ability and link margin closure
during critical data collection periods of the flight. The ER customer was specifically
interested in TDRSS performance during second Centaur engine burn, as this was an event
which had traditionally been supported by multiple ARIA.
The implementation of the resultant test plan was performed by operations personnel on
the ER, White Sands Ground Terminal (WSGT), and Goddard Space Flight Center
(GSFC). For the purposes of this paper, only the real-time and post-test evaluations by
ER personnel are included, although it needs to be stressed that all members of the diverse
test and evaluation group contributed to the final ER readiness recommendation. Data
analysis was performed by representatives from CSR Operations Control and CSR
Evaluation and Analysis groups but all findings were discussed within the larger Titan
IV/Centaur-TDRSS working group and a consensus reached before any final conclusions
were drawn.
PROGRAM REQUIREMENTS
Titan IV/Centaur telemetry characteristics and requirements for the generation of ER data
items for post-mission evaluations are provided in the Program Requirements Document
(PRD). Titan IV/Centaur telemetry consists of a 128 kbps data stream which PCM/PM
modulates a 2272.5 MHz carrier. The data is convolutionally-encoded for forward error
correction and encrypted. The resultant data stream is 256 kilosymbols per second
transmitted via RF in an NRZ-M format. Telemetry station data requirements include the
recording of Centaur data from second Main Engine Start (MES-2) minus 650 seconds
through second Main Engine Cutoff (MECO-2) plus 80 seconds, with the period between
MES-2 minus 80 seconds through MECO-2 plus 35 seconds considered “mandatory.”
The data accuracy required for these recordings is a bit error rate (BER) of no greater than
10^-6, i.e., one bit error for every one million bits received [1].
The purpose of the bit error rate test was twofold. First, potential degradation of signal-to-
noise ratio due to the increased path loss associated with the initial S-band transmission
from the vehicle to the receiving TDRS antenna and implementation losses at the receiving
ground station at White Sands needed to be characterized in order to verify that no
modifications to the Centaur transmitter were necessary. Secondly, it was of interest to
know if BER performance for TDRSS data would approach theoretical curves for rate-1/2
convolutional encoding. Both checks would ensure the reception of data on the ER which
met the criteria documented by the ER customer in the PRD. A Decom System Inc (DSI)
Model 7192 Link Analyzer at Tel-4 was used to generate a 128 kbps, 2047-bit
pseudorandom test pattern. This data was convolutionally encoded at Tel-4 with an Aydin Model
335 Bit Synchronizer and then flowed to the Merritt Island Launch Area (MILA) complex
where it was modulated onto a 2272.5 MHz carrier and uplinked to a TDRSS satellite.
The data was subsequently downlinked to White Sands Ground Terminal (WSGT) and
transmitted via the NASA Communications (NASCOM) system back to MILA and Tel-4.
Data received at Tel-4 were monitored for bit errors using the link analyzer. The uplink
power was varied and corresponding BER, C/No (carrier-to-noise density ratio in IF
bandwidth), and Eb/No (energy per bit-to-noise density ratio in IF bandwidth) were
recorded.
The Effective Isotropic Radiated Power (EIRP) of a transmitting antenna is given by:
EIRP(dB) = Pt - Pl + Gt (1)
where Pt is the transmitter power, Pl is the cabling loss between the transmitter and the
antenna, and Gt is the gain of the transmitting antenna (all quantities in dB) [2]. The EIRP
measurement made at MILA during this test could therefore be modeled as the Centaur
transmit power at those times in the flight when the effective Centaur antenna gain as
viewed by TDRSS was 0 dB. From customer documentation, it was noted that nominal
transmitter output power was 13 W and RF losses between the transmitter and the Centaur
antennas were 2.9 dB [1]. Converting the transmitter power output to dBm (decibels
referenced to one milliwatt) and applying Eqn. 1, it was determined that nominal Centaur
EIRP was 38.2 dBm. The desired outcome of the test, then, was to observe “threshold”
(i.e., a BER of 10^-6) reached at a reported EIRP equal to or less than this value. This would
indicate that existing Centaur power output would be sufficient to ensure a transmission
which met or exceeded data requirements.
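The EIRP arithmetic above can be checked with a short script; the function names here are illustrative only, not part of any Range tool:

```python
import math

def watts_to_dbm(p_watts):
    """Convert transmitter output power from watts to dBm."""
    return 10 * math.log10(p_watts * 1000.0)

def eirp_dbm(p_watts, cable_loss_db, antenna_gain_db):
    """Eqn. 1: EIRP(dB) = Pt - Pl + Gt, all quantities in dB(m)."""
    return watts_to_dbm(p_watts) - cable_loss_db + antenna_gain_db

# Nominal Centaur values from the PRD: 13 W transmitter output,
# 2.9 dB RF losses, 0 dB effective antenna gain as viewed by TDRSS.
print(round(eirp_dbm(13.0, 2.9, 0.0), 1))  # → 38.2
```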
The theoretical required Eb/No for a BER of 10^-6 using k=7, rate-1/2 convolutional
encoding and Viterbi soft-decision decoding is approximately 5.0 dB [3]. The BER
performance for such encoding is shown in Figures I-II. Any “shifting” of the curve to the
right of the theoretical could be interpreted as “unexplained” losses in the RF link. Note
that this would include WSGT implementation losses, which were not well understood at
the time.
The first attempt to run Test #1 occurred in late December 1994, but the results were
considered invalid due to the presence of RF interference near the carrier frequency. This
interference had been noted before during the initial compatibility testing and had degraded
the link by as much as 2.25 dB when combining left-hand and right-hand circularly
polarized signals.
The first valid Test #1 runs were made on 23 January 1995. The data recorded at this test
session is presented in Table I. The “error rate” is easily calculated using the formula
BER = E / (128,000 × T)
where T is the duration of the recording interval in seconds and E is the number of bit
errors received. Prior to the onset of testing it was decided that a minimum of three
minutes of test at each power level would be required to provide a high confidence level in
the data. The rationale was to examine at least 10k+1 bits to verify the desired BER of 10-k;
at a rate of 128 kbps, it would therefore take approximately 78 seconds to analyze 107 bits.
Statistically, this would provide a confidence level of 90%. Further, it was noted that
increasing the observation time an additional 102 seconds would increase confidence by an
additional 5%; test team members were therefore assured that observed BERs would be
within 5% of actual BERs [4].
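The sizing rationale above can be sketched in a few lines (the helper names are mine, not from the test plan):

```python
def bit_error_rate(errors, duration_s, rate_bps=128_000):
    """Observed BER: E errors over T seconds at the given bit rate."""
    return errors / (rate_bps * duration_s)

def seconds_to_observe(k, rate_bps=128_000):
    """Time needed to pass 10**(k+1) bits, to verify a BER of 10**-k."""
    return 10 ** (k + 1) / rate_bps

print(round(seconds_to_observe(6)))  # → 78 (seconds for 1e7 bits at 128 kbps)
print(bit_error_rate(13, 180))       # e.g. 13 errors over a 3-minute interval
```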
TABLE I.
POWER LEVEL VS. BER TEST, 1/23/95
DURATION (MIN.) | GMT TIME FROM | GMT TIME TO | POWER LEVEL (dBm) | C/No (dB-Hz) | Eb/No (dB) | ERRORS AT TEL-4 | BER AT TEL-4
Note 1 - WSGT did not report any received bit errors.
Note 2 - BER questionable as time duration examined was under 3 min.
The Test #1 scenario was repeated on 3 April 1995 and 4 May 1995. Due to the brevity of
these sessions as compared to the first session, the generated data are not included in this
report. The BER curve resulting from the 3 April test session was nearly identical to that
from the 23 January test session; thus, Figure I essentially depicts the data from both test
sessions. The 4 May test data are interesting in that a BER of 10^-6 was reached at an EIRP
nearly 2 dBm lower than during the other two test sessions. The BER curve for this third
session is shown in Figure II. BER vs. Eb/No data reveal that threshold values were
reached on the 23 January and 3 April tests at an EIRP between 38-39 dBm. On the 4 May
test this was improved to approximately 37 dBm. Figures I and II depict bit error rates vs.
corresponding Eb/No values; the improvement in bit error probability can be clearly
discerned.
The threshold values recorded during Test #1 sessions were very close to the critical 38.2
dBm nominal Centaur EIRP value. In addition, adjustments were made by the ER
customer after Test #1 to several link parameters which further narrowed the margin. The
User agreed to relax Centaur recording requirements to a BER of 10^-5, and increased its
estimate of nominal transmitter output to 15.5 W, but this 1.3 dB improvement in the link
analysis was offset by a new estimation of 5.2 dB for passive transmitter losses. The net
1.0 dB loss in the RF link meant that at best (4 May results) the threshold EIRP and the
real Centaur values were nearly identical. The test team was encouraged by the fact that
TDRSS BER vs. Eb/No values were within 1 dB of theoretical predictions during the 23
January and 3 April sessions, and within tenths of a dB (typical for soft-
quantized Viterbi decoding [5]) during the 4 May test session.
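The net 1.0 dB figure can be reproduced from the quoted adjustments; note the credit attributed to relaxing the BER requirement is inferred here as the remainder of the stated 1.3 dB total, not given explicitly in the source:

```python
import math

# Revised 15.5 W vs. nominal 13 W transmitter output, in dB.
power_gain_db = 10 * math.log10(15.5 / 13.0)
# Remainder of the quoted 1.3 dB improvement, attributed to the
# relaxed BER requirement (10^-6 -> 10^-5) -- an inference.
ber_relax_db = 1.3 - power_gain_db
# New 5.2 dB passive-loss estimate vs. the earlier 2.9 dB.
loss_delta_db = 5.2 - 2.9
net_db = power_gain_db + ber_relax_db - loss_delta_db
print(round(power_gain_db, 2))  # ≈ 0.76
print(round(net_db, 1))         # → -1.0
```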
The bit-for-bit data compare test would allow the Range to observe actual data threshold
limitations on the Centaur-TDRSS link. The configuration for Test #2 was very similar to
that of Test #1. The major difference was that 128 kbps data were provided by a live
launch vehicle rather than simulated. The Centaur upper stage for the Titan IV/Centaur
K23 mission was radiating for a Terminal Countdown Demonstration (TCD), which is an
ER checkout operation used to emulate an actual launch configuration. Tel-4 received data
directly from RF via a TAA-3C (33-foot) parabolic telemetry antenna and this was
recorded on digital tape. RF was also received by the TDRSS S-band Single Access (SSA)
antenna and transmitted via the TDRSS/WSGT/NASA network to Tel-4. This data stream
was likewise recorded. Due to the 1.008-second difference in transmission time, the
Universal Time Code (UTC) recorded on tape could not be used to time-
correlate the data. Therefore the Mission Elapsed Time (MET) or Inertial Navigation Unit
(INU) Time, a measurement embedded within the Centaur telemetry and incremented at a
0.02-second rate, was used to synchronize the data streams post-test. Data were then
compared bit by bit and the differences were accumulated and printed.
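The post-test correlation step might be sketched as follows; the data layout and function names are assumptions for illustration, not the actual CSR software:

```python
def align_by_met(direct, relayed):
    """
    Align two recorded streams on the embedded Mission Elapsed Time
    rather than wall-clock UTC, since the TDRSS path arrives ~1.008 s
    late. Each stream maps MET (in 0.02 s ticks) -> frame bytes.
    Returns the sorted METs present in both streams.
    """
    return sorted(direct.keys() & relayed.keys())

def count_bit_errors(a, b):
    """Bit-for-bit compare of two equal-length frames."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Toy frames: MET tick -> one telemetry byte.
direct = {100: b"\x0f", 101: b"\xff"}
relayed = {100: b"\x0f", 101: b"\xfe", 102: b"\x00"}
for met in align_by_met(direct, relayed):
    print(met, count_bit_errors(direct[met], relayed[met]))  # 100 0 / 101 1
```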
Test #2 was run on 3 April 1995, but varying signal strengths at WSGT did not permit the
reception of valid data until nearly two and one-half hours into the testing period. An initial
concern had been the integrity of the Centaur-TDRSS link during this test, as pre-mission
estimates had shown that due to the particular orientation of the launch vehicle on the pad
the transmitting antenna gain as viewed by the TDRSS-East satellite would be on the order
of -2.5 dB. The corresponding link analysis (see Table II) had indicated a final Eb/No of -
2.9 dB. As a result, the inability to successfully receive valid data on the ER was not
entirely unexpected. At 1803 Z, WSGT attained receiver lock on the downlinked data and
this data were passed successfully to the ER. Seventeen minutes of data were successfully
recorded from both data streams. Strip charts at Tel-4 were run in real-time to record
decommutator status (in sync or out of sync) on the incoming data to identify periods when
it was expected that data would be invalid. Post-test data analysis showed that only one
such occurrence was indicated on the strip charts; however, corruption of data in the form
of 8-word (128-bit) errors was noted in the hard compares at four instances during other
times in the run.
TABLE II.
TEST #2 PRE-TEST RF LINK ANALYSIS
The test was re-run on 4 May and this time the data source was a playback of Centaur data
at Tel-4. This allowed MILA to manipulate the EIRP as was done in Test #1. Ten minutes
of mission tape data were transmitted to TDRSS at power levels of 38.5 dBm and 37.5
dBm followed by a 12-minute run of data transmitted at 36.5 dBm. Post-test printouts of
the data compare show no unexpected errors at 38.5 dBm; however, the other runs showed
a combination of 1-bit, 1-word, and 8-word errors. It was later determined that the 8-word
errors were due to a combination of marginal signal levels and inherent design factors of
the associated decryption equipment, and were not due to link degradation. The actual
number of bits in error during these intervals could not be determined since any bit error
resulted in resetting of the decryption device, in turn creating a block of errors.
The first target of opportunity for an actual in-flight test of the TDRSS S-band antennas
was the launch of the Titan IV/Centaur K23 vehicle on 14 May 1995. Antenna switching
algorithms on the Centaur vehicle had been modified to point the active Centaur telemetry
antenna toward TDRSS (zenith) rather than ground stations (nadir) during a period
between major mark events. Prior to the mission, ER representatives modeled predictions
of TDRSS support using the PFACES (Pre-Flight Automatic Calculation of Signal
Strength) program. This is a link analysis tool certified for production of pre-launch
Estimated Coverage Plans (ECPs). The basic RF link equation which is the cornerstone of
the ACES software is:
Pr = EIRP - Ls + Gr (2)
where Pr is the received signal power, EIRP is as given in Eqn. 1 (and so includes the
instantaneous Centaur antenna gain), Ls = 20 log10(4πR/λ) is the free-space path loss
over slant range R at wavelength λ, and Gr is the gain of the receiving antenna (all
quantities in dB).
The dynamic elements in the above equation, given the pre-mission nominal trajectory and
the Centaur transmitting antenna pattern, are the instantaneous antenna gain and the slant
range. The former is determined by mapping the aspect angles of the Centaur relative to
the target TDRS against the Centaur antenna pattern. The latter is determined by the
position of the target TDRS relative to the vehicle trajectory. The resultant signal strengths
are those received at the TDRS SSA antenna. To carry this analysis further, the ACES
data were imported into a Microsoft Excel spreadsheet and subjected to further
manipulation to arrive at a final estimate of the available Eb/No. The periods of quality
data could thus be predicted prior to the actual flight.
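A minimal sketch of this style of link prediction, assuming the standard free-space loss model; the slant range and SSA receive-gain values below are illustrative assumptions, and the certified ACES/PFACES tool is of course far more elaborate:

```python
import math

C = 299_792_458.0   # speed of light, m/s
FREQ_HZ = 2272.5e6  # Centaur S-band carrier frequency

def free_space_loss_db(slant_range_m):
    """Free-space path loss: 20*log10(4*pi*R/lambda)."""
    lam = C / FREQ_HZ
    return 20 * math.log10(4 * math.pi * slant_range_m / lam)

def received_power_dbm(eirp_dbm, slant_range_m, rx_gain_db):
    """Basic link equation: Pr = EIRP - Ls + Gr (all in dB/dBm)."""
    return eirp_dbm - free_space_loss_db(slant_range_m) + rx_gain_db

# Illustrative only: nominal 38.2 dBm EIRP, TDRS slant range ~40,000 km,
# assumed SSA receive antenna gain of 36 dB.
print(round(received_power_dbm(38.2, 4.0e7, 36.0), 1))  # ≈ -117.4 dBm
```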
During the K23 countdown, it was noted that approximately one hour prior to launch, ER
lock on the incoming Centaur data via TDRSS was lost. WSGT reported that received
signals had degraded and Eb/No values were now below a usable level. Interestingly,
reception of TDRSS data was recovered at T-Zero as soon as the vehicle began to clear
the launch area. These varying signal strengths had been observed during Test #2 as well.
From recordings made at Tel-4, it was determined that the TDRS-4 (East) satellite
successfully covered approximately 9700 seconds of the first 23000 seconds of the
mission. Actual received signal levels exceeded ACES predictions by several dB
throughout the mission. Most disappointing was the fact that no valid telemetry data were
received at the ER during the optimized period (MECO-2 + 80 seconds through Indian
Ocean Station acquisition).
It was determined post-mission that this latter problem was the result of limited receiver
bandwidths at the Secondary TDRSS Ground Terminal (STGT), which was the supporting
station for the launch. The downlink frequency from TDRS to the ground terminal is
affected by the frequency stability of the signal received by TDRS from the vehicle. The
received frequency variation was initially limited to 700 Hz for acquisition of signal. The
frequency offsets observed during K23 were much greater in magnitude than this.
Subsequent firmware modifications allowed the ground receiver to acquire signal with a
variation up to 20 kHz; this new capability was formally tested on 13 June 1995 by
flowing data through the TDRS system, purposely offsetting the frequency, and observing
the ability of the STGT receiver to maintain signal reception.
The next target of opportunity for testing was the K19 Titan IV/Centaur launch which took
place on 10 July 1995. For this launch, both TDRS-4 (East) and TDRS-5 (West) satellites
were configured to supply telemetry data to the ER. During this operation, the Centaur
antennas were not optimized for TDRSS pointing as they had been during K23. Despite
this, data collection from both satellites was very favorable, with TDRS-5 able to
successfully support Centaur second burn. In all, nearly 4700 seconds of data of
intermittent quality or better were re-transmitted to the ER during the first 7150 seconds
of the mission. Prior to T-Zero, degradation in signal strength and Eb/No was again noted
at White Sands, and again the signals recovered shortly after launch.
After K19, the ER recommended formal closure to the operational testing; it was felt that
nominal performance during the mission proved that second burn coverage by TDRSS
was feasible with proper pre-mission planning even without optimized pointing to the
satellites. During both flights, actual performance agreed with or slightly exceeded pre-
flight estimates. It was felt that, with predictions modeling actual performance so closely,
it would now be valid to commit TDRSS as an operational Range asset during future
operations.
CONCLUSIONS
In the opinion of ER personnel, all tests were completed successfully and indicated that
TDRSS was a viable asset to receive Centaur telemetry data during future Titan
IV/Centaur launches. Test #1 indicated that the existing transmitter/antenna system on the
Centaur could be utilized without enhancements, and that the performance of TDRSS and
its associated communications links was nominal and approached theoretical performance.
Test #2 supported the data threshold values determined during Test #1 and proved that no
unusual degradation of the telemetry data was occurring prior to reception on the ER. Test
#3, specifically the K19 flight, was the operational demonstration of TDRSS abilities to
track the vehicle in-flight and proved that RF link closure could be obtained during the
Centaur second burn period.
A common element throughout all the tests was that during static pad testing (Test #2 and
pre-launch during K19 and K23), the link appeared to be time-varying and the signal
strengths and Eb/No values recorded at White Sands would periodically degrade and then
return to usable levels. ER personnel were concerned with this phenomenon as it indicated
that an element in the RF link was somewhat unpredictable. Several theories were
advanced for this phenomenon, including atmospheric scintillation or variations in the
receiving SSA antenna figure of merit (gain over temperature), both of which could be
dependent on the time of day. Aerospace Corporation presented a hypothesis which
correlated periods of decreased signal strength with the diurnal movements of the satellites
in their orbital slots; due to their slightly eccentric and slightly inclined orbits, the TDRSS
satellites perform a “figure-eight” type motion in reference to a point on the Earth’s
surface. It was possible that multipath interference due to reflections around the launch
area would degrade received signals at the satellite and this degradation would vary
depending on the location of the satellite in its “figure-eight.” This hypothesis is worthy of
consideration as it succeeds in explaining why the degradation was only noted when the
vehicle was on the ground. Please refer to Reference 6 for the mathematics behind this
theory.
ACKNOWLEDGMENTS
The operational acceptance testing for TDRSS support of Titan IV/Centaur was
accomplished through the cooperation of a great number of parties from the ER User, Air
Force, and NASA tenant organizations. Principal players included Margery Bacon (NASA
network operations), David Wampler (Stanford Telecommunications TDRSS link
analyses), and Nelson Gomez (GTE ground terminal operations) for TDRSS scheduling
and support; Ed Kachmar (Program Support Manager) and Lt. Jeff Kuzma (5th Space
Launch Squadron) for ER range and launch operations; Marcella Martin, George Jenkins,
and Dave Robinson of Martin Marietta for support on the customer side of the operations;
Bruce Mau and Mike Handelman of Aerospace Corporation for technical analyses; and
Captain Brian Wilchusky of the Titan IV/Centaur System Program Offices (SPO), who
had the daunting task of coordinating the TDRSS working group’s efforts.
REFERENCES
[1] Department of the Air Force, Space Systems Division, Program Requirements
Document No. 4400, Revision 2, Titan IV/Centaur, 2 April 1992
[2] Griffin, Michael D. and James R. French, Space Vehicle Design, American Institute of
Aeronautics and Astronautics, Washington, DC, 1991
[3] Jacobs, I.M., “Practical Applications of Coding,” IEEE Transactions on Information
Theory, Vol. IT-20, May 1974, p. 306
[5] Heller, Jerrold A. and I.M. Jacobs, “Viterbi Decoding for Satellite and Space
Communication,” IEEE Transactions on Communication Technology, Vol. COM-19,
October 1971, pp 835-848
Michael Norman
ABSTRACT.
This paper covers the development to date of the Telemetry Facilities at the
Defence Test & Evaluation Organisation located at Boscombe Down. The
practices adopted to meet the many varied requirements of trials customers and
some experiences gained in achieving successful implementation will be
addressed.
HISTORICAL BACKGROUND.
Over many years, telemetry systems have been used at Boscombe Down to
retrieve data and display quick look time histories from aircraft and aircraft
equipment trials using frequency multiplexing techniques. Time histories derived
from the transmitted measurands (e.g. velocity from acceleration) are produced
using analogue computing techniques.
Due to the nature of these trials a radio system is used as the data transfer
medium. Initially, in the 1960s, an amplitude-modulated system was in use in the
200 MHz radio frequency band with few data channels. During the 1970s
frequency multiplexing systems came into use, and in 1974/5 a change from the
200 MHz band to the 400 MHz band was effected, altering the baseband spectral
occupancy of the system from 100 kHz to 70 kHz. (During that change the author of
this paper joined the Telemetry Group.) The system remained, with modest
development taking place using analogue computing and increasing the data
channel capability from 6 to 14 in number. Around 1981 the digitisation of
analogue data was incorporated, thereby increasing the capability of the system.
•	The system must provide real-time data retrieval from an aircraft at any altitude
or attitude within its design limits. System range in excess of 80 miles is required
at altitudes above 5 kft for fixed wing aircraft and 30 miles at 1 kft for rotary wing
aircraft. Taking into account the normal radio constraints, any radio signal dropout
must not exceed 0.1 ms duration.
•	Where the aircraft is likely to depart from its planned flight envelope
(e.g. student test pilot spinning sorties), provision must be made for the safety
monitoring officer to be able to advise recovery action, using a half-duplex radio
telephony system for communication purposes.
•	Where the device under test is liable to destruction (e.g. ejection seat
testing, air drop of equipment and other hazardous environments), the telemetry
system is to provide real-time data retrieval and be constructed in a manner such
that it stands a good chance of survival. A mobile station is configured to retrieve
the data from the predicted trajectory of the drop or firing.
•	The system design must incorporate capacity for expansion and uprating to
keep abreast of technological advances and unforeseen changes in
requirements.
In 1988 it was noted that the 400 MHz band for military use was exceptionally
congested. The Frequency Allocation Board then requested the transfer of
Boscombe Down telemetry operations to the 1400 MHz band. The physical location
of Boscombe Down close to a major UK military training area (Salisbury Plain)
emphasised the need to alter the Radio Frequency band of operation.
Pulse Code Modulation (PCM) data was to be incorporated during this frequency
band change, enabling the system to be used for data retrieval from other
transmission systems (e.g. contractors’ aircraft), thus increasing the system
capability and flexibility.
An appraisal of the existing equipment was then carried out with a view to
ascertaining the purchases required to meet the specifications outlined and to
provide the additional features.
COST ESTIMATES & TIME SCALES.
Short term:-
•	Move the radio frequency of operation from the 400 MHz band to the 1400 MHz band.
•	Incorporate PCM data.
•	Telemeter the crew speech.
Longer term:-
A dual-beam 8 ft monopulse (azimuth only) auto-tracking dish aerial system, with a
high-gain beam of 6 deg in azimuth and elevation and a low-gain acquisition aid of
30 deg azimuth by 70 deg elevation beamwidth, was selected as
the most likely to satisfy the requirement.
Equipment procurement activities were then carried out resulting in the last of
the Radio Frequency equipments being on site in late 1989, and the computer
system arrived in early 1993.
Aircraft
The sub-carrier system for the incorporation of PCM in the data format was
chosen due to the reliability, simplicity and proven suitability for safety purposes of
the analogue proportional bandwidth system (e.g. sorties lost due to defects in
the system over 20 years number less than 1%).
Tests were undertaken to assess the optimum equipment set-up required for
effective operation. Design for the telemetering of the intercom/crew speech was
also undertaken and the simplest method found was to use the sub-carrier of
channel 14 and alter the discriminator output filter bandwidth to suit. Tests on the
sub-carrier aspects of the system for PCM recovery showed that this was
sufficient for the low bit rates in use at present. Frequency diversity operation, to
negate the effects of wing blanking, etc., was considered essential, coupled
with the possible reduction in multipath effects during aircraft manoeuvres.
Surveys were then carried out on each aircraft (2 Hawk Jet Trainers & 2 Hunter
Jet Trainers; Lynx, Gazelle & Sea King Rotary Wing Aircraft; and a Tornado
Aircraft) regarding aerial sites, equipment locations, etc., followed by installation
and commissioning tests (including electro-magnetic compatibility).
Aerial systems used are an ‘in-house’ design developed over a long period and
constructed from co-axial cable (expendable & cheap). The modifications required
in changing from 400 MHz to 1400 MHz were minimal.
•	Aerial delivery (Annex A refers) - the equipments are configured to suit each
trial, generally using the frequency multiplexing system due to its inherent
simplicity. Measurements required are normally from a mix of strain gauge
elements, accelerometers and events. The units are capable of standing shock
loads to 100g and one specialist unit remains operational after repeated water
immersion.
•	Ejection seat testing (Annex B refers) - the equipment is similar to that used
on dropping trials to retrieve similar data but housed within a package that fits in
the chest cavity of a dummy man. The aerial is again of local manufacture from
co-axial cable mounted within a fibre glass skull cap and compensated for the
metal head mass. In the earlier trials twin seats were ejected to check separation
of seats and clearances from the aircraft.
The design was governed by the need to ensure that the availability and
serviceability of the existing 400 MHz analogue system was not compromised in
any way.
•	Ground testing of the system was then carried out using data from tape
feeding an aircraft transmission equipment mounted in the mobile station, moving
around the airfield in an attempt to identify any RF dead spots or other anomalies.
Attempts to obtain approximate dish Polar Diagrams were carried out using a horn
located on the Radio Trials Division Tower. A plot of the signal received
(Automatic Gain Control) as the dish was rotated through 360 deg is shown in
Annex C for both antennae.
•	The first sortie was carried out on a Hawk aircraft, with data received out to
about 90 miles at 5 kft in the 1400 MHz band (400 MHz band loss of data at about 110
miles). Level flight from 60 miles to 10 miles at 40 kft failed to identify any holes in
the polar diagram (Annex D, the dish antenna elevation polar diagram from the
supplier, refers). Slightly later a helicopter installation using a single transmitter
system was tested, a range of 40 miles at 1 kft altitude being obtained.
An approximate polar diagram of the dish was obtained in elevation and
azimuth by leaving the dish in the stowed position and arranging for the helicopter
to fly over the dish on 90 deg tracks. As the aircraft speed and altitude were
known from the telemetry data, coupled with the telemetered intercom, an estimate of
the beam pattern cover was obtained (cross-checked with the earlier azimuth PD
result). These results tied in well with the data provided by the antenna system
supplier.
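The overflight geometry described can be approximated with a few lines; the units, speed and altitude values below are illustrative, not from the trial:

```python
import math

def elevation_deg(altitude_ft, ground_speed_kt, t_seconds):
    """
    Elevation angle of an aircraft as seen from the stowed dish, where
    t = 0 is directly overhead; horizontal offset grows as speed * |t|.
    """
    offset_ft = ground_speed_kt * 1.68781 * abs(t_seconds)  # kt -> ft/s
    if offset_ft == 0:
        return 90.0
    return math.degrees(math.atan2(altitude_ft, offset_ft))

# Illustrative: 1,000 ft altitude at 100 kt, 30 s after overhead passage.
print(round(elevation_deg(1000, 100, 30), 1))
```

Tagging each telemetry sample with an angle computed this way, and reading off the received signal level, gives the approximate beam-pattern cut the text describes.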
As a result of the tests, the facility was required at a more centralised location
and was installed in a purpose-modified building.
The technical aspects of the move were then undertaken with the staff working
at both sites in order to maintain a no-break service. The original site remained
operational on the 400 MHz band.
After ground checks/calibrations etc. were carried out, several sorties were
monitored on a ride-along basis to ensure that a sound, viable system was
available. Following this appraisal the equipment within the original building was
de-commissioned and incorporated within the new system to allow operation at
400 MHz and 1400 MHz as required.
In October 1992 a radio coverage survey in conjunction with the Radio Trials
Centre was arranged. A Rotary Wing Aircraft and a Fixed Wing Aircraft were used
and the results obtained for 15dB carrier/noise ratio are summarised at Tables 1
and 2. Typical aircraft polar diagrams for the aircraft used are at Annex D.
Shortly after the completion of the radio site survey the computer system
arrived, was installed, commissioned and in operational use within the given time
scale.
Table 1 - 15 dB > SYSTEM NOISE FLOOR
Dish Aerial
Bearing from ground station 5 kft 8 kft 12 kft 15 kft 30 kft
323 deg 93nm 112nm 132nm 146nm
315 deg 93nm 109nm 134nm 149nm
300 deg 94nm 110nm 136nm 142nm
288 deg 94nm 115nm 138nm 141nm
276 deg 94nm 118nm 140nm 154nm 202nm
264 deg 94nm 115nm 137nm 140nm 220nm
252 deg 92nm 118nm 130nm 150nm
240 deg 88nm 114nm 136nm 147nm
Table 2 - 15 dB > SYSTEM NOISE FLOOR
Dish Aerial
Bearing from ground station 1 kft 2 kft
276 deg 42.5nm 59.5nm
264 deg 43nm 59.5nm
252 deg 44nm 62.5nm
240 deg 42.5nm 61nm
228 deg 40.5nm
216 deg 41.5nm
During the early days of using strain-gauged items it was evident that the
gauge and its wiring were picking up the 400 MHz radio transmission. This
produced false data on the output time history as the device under test gyrated
during deployment. As it was not practical to move the gauge from the vicinity of
the aerial, filtering was employed at the amplifier input. However, this provided only
a partial solution: connecting the body of the gauge assembly to the DC gauge
supply, together with extensive bonding by means of silver-soldering each metal
joint, was required for an effective solution.
Equipments obtained from many sources have been used successfully for
Boscombe Down telemetry work for several years, on some occasions exceeding
100g. The packaging of such items is undertaken ‘in house’ with assistance from
on-site expertise.
In order to overcome the radio constraints due to geographic effects and range
limitations (particularly where ground to ground transmission via the mobile station
is ineffective) an airborne relay feasibility study was commenced in July 1993. An
air test was carried out in Dec 1993 to test the ‘weak link’ target aircraft signal
received in the relay aircraft. Test data recovery in the relay aircraft resulted in
usable data to approximately 32 nautical miles at 4 kft aircraft separation. Two
omni-directional aerials were used (one transmitting on the target aircraft at 10 W
nominal and one receiving on the repeater aircraft). Results were encouraging to the
extent that an installation was designed for a light aircraft to relay all forms of
telemetered data. This is pending the outcome of a business case currently
under assessment.
CURRENT STATUS.
The short term objectives outlined, coupled with the move to a building
modified specifically for this purpose have been achieved. Video development is
‘on-going’ and has been used for on line head up displays and parachute
deployment trials. The range from fixed wing aircraft where colour pictures are
received is over 70 miles. Aerial pointing is very critical; it has been noted that
although colour requires a carrier/noise ratio in the region of 15 dB, when the signals
become weak a usable black-and-white picture often results. As an additional
feature the system is capable of receiving in the 1.4 GHz & 2.3 GHz telemetry bands,
allowing separate data to be recovered simultaneously from each band. This
modification to the dish aerial system is being carried out using ‘on-site’ resources.
•	A mobile station has been used to relay data from areas where the fixed
station is masked, receiving in the 1.4 GHz band and retransmitting to the fixed
station in the 2.3 GHz band.
•	A barge-to-shore telemetry control system has been used on a sea range.
Gary A. Schumacher
Terametrix Systems International, Inc.
ABSTRACT
PC based instrumentation and telemetry processing systems are attractive because of their
ease of use, familiarity, and affordability. The evolution of PC computing power has
resulted in telemetry processing systems easily up to most tasks, even controlling and
processing data from a system as complex as the Common Airborne
Instrumentation System (CAIS) used on the new Lockheed-Martin F-22. A complete
system including decommutators, bit synchronizers, IRIG time code readers, simulators,
DACs, live video, and tape units for logging can be installed in a rackmount, desktop, or
even portable enclosure.
The PC/104 standard represents another step forward in the PC industry evolution towards
the goals of lower power consumption, smaller size, and greater capacity. The advent of
this standard and the availability of processors and peripherals in this form factor has made
possible the development of a new generation of portable low cost test equipment.
This paper will outline the advantages and applications offered by a full-function, stand-
alone, rugged, and portable instrumentation controller. Applications of this small (5.25"H x
8.0"W x 9.5"L) unit could include: flight line instrumentation check-out, onboard aircraft
data monitoring, automotive testing, small craft testing, helicopter testing, and just about
any other application where small-size, affordability, and capability are required.
KEY WORDS
INTRODUCTION
PC/104 modules are small (3.6" x 3.8") boards, designed to be stacked together and
interconnected through pin/socket connectors so that no backplane or card cage is
required. The PC/104 form factor was originally developed by Ampro Computers in
California during the late 1980’s. The specification was first published in 1992 and is now
maintained by the PC/104 Consortium. The PC/104 designation is derived from the fact
that the processors used are the same as those in standard IBM compatible personal
computers and that 104 pins are used to interconnect the modules.
The PC/104 Specification is based on the IEEE-P996 Specification which describes the
mechanical and electrical specifications for standard 8-bit PC and 16-bit PC/AT buses.
The PC/104 bus signals are identical in definition and function to these standard buses.
The PC/104 Specification provides the electrical and mechanical interface for a compact
version of the IEEE-P996 bus, optimized for the unique requirements of embedded system
applications.
As shown in Figure 1, the PC/104 modules are much smaller than standard ISA modules
and, when stacked, form a very compact system. Two module versions are specified, an
8-bit equivalent of the PC bus which utilizes the 64 pin J1 connector, and a 16-bit
equivalent of the PC/AT bus which uses J1 plus a 40 pin J2 connector. The absence of a
backplane allows the drive current requirements for most bus signals to be reduced to 4
milliamps, which reduces power consumption for most modules to around 1-2 watts, with
resulting reductions in heat dissipation.
The mounting holes at each board corner allow the stack assembly to be bolted together
with screws and spacers. The combination of the pin/socket electrical interconnects, the
screws and spacers, and the overall small size provides for a compact package with
excellent mechanical integrity. Reliability is enhanced over a standard PC bus due to the
reduced number of electrical connection points achieved by the absence of the backplane.
Following the publication of the PC/104 standard, an increasing variety of modules began
to appear. Over 150 suppliers now manufacture PC/104 products and it is possible to
configure a highly capable system for applications where the reduced size, weight and
power consumption of PC/104 can be used to advantage. Because of the small size and
rugged design, the embedded processors are finding application in a wide range of
industrial uses. Because of this, many of the PC/104 modules are available in extended
temperature ranges.
With further refinements to the mechanical design, the final unit emerged as a 5.25"H x
8.00"W x 10.5"L package. The package houses a 486DX4 100 MHz processor board
with 16 MB of RAM, IDE and floppy controllers, a TFT display controller, and RS232/422
serial, parallel, and Ethernet I/O ports. A PC/104 card stack of up to six modules mounts
to the processor board. An 850 MB IDE HDD and a 1.44 MB floppy disk, as well as a slot
for Type I or Type II PCMCIA cards, are accommodated. The user interface consists of a
6.5 inch color flat panel display capable of being viewed in direct sunlight and a 17 key
keypad with glidepoint mouse. The hinged keypad folds up over the display screen when
not in use. A universal power supply provides for power from either AC or DC sources.
Recessed, readily customized I/O connectors on the back and side panels as well as front
panel BNCs provide for flexible signal I/O. The unit is configured with a Windows 95
operating system and appropriate application software.
The principal objective sought in the design of this controller was to bring significant
computing power to an easily handled, easily used package which could be applied to
specific instrumentation problems. Limiting the size of the unit to achieve light weight and
ease of handling also limits the size of the user interface which can be provided. The 6.5
inch screen limits the amount of data which can be viewed within a single screen and the
small keypad and glidepoint mouse are well adapted for set-up and control from menus
and lists but are not convenient for entry of large quantities of alphanumeric input.
These constraints define the types of applications for which the unit is well suited:
specifically, those where the power of the processor can be used to diagnose or monitor
large quantities of data and to select specific data or results as output. This fits the unit’s
intended purpose as a diagnostic test instrument for easy viewing of specific data, which
led to the product name - QuikView.
The first application considered for QuikView was for flight line instrumentation and
avionics system check-out. As systems and vehicles continue to employ increasing
numbers of serial buses to transmit data between units, a typical instrumentation system
may output PCM, and interface multiple serial bus types including MIL-STD-1553,
ARINC, and others. Simple functional and operational tests or checks on system input and
output functions may require a rack of test equipment containing PCM decommutators, bus
analyzers, and logic analyzers. QuikView can be configured as a portable test instrument
for performing quick diagnostic and functional tests on systems containing one or more
serial data bus types for use in laboratory, flight line, and factory floor environments. It is
capable of selecting and monitoring PCM data and monitoring and simulating avionics bus
data. The unit may be used in place of complete bus analyzer systems to observe data
values and message contents during functional or operational testing and check-out.
To provide for the basic PCM processing, Terametrix reduced all of the basic telemetry
signal processing functions to a PC/104 board set. This resulted in a stack of three PC/104
cards - bit synchronizer, frame synchronizer, and data decommutator. In addition, a time
code reader card is available and (for users of CAIS) a CVSD Voice Reproduction card is
available. The IRIG PCM interface can accept serial NRZ data from 100 bps to 8 Mbps or
serial data and clock at up to 24 Mbps.
For quick checks on the input data, a frame data capture mode is provided which captures
the entire PCM data frame and performs limit checks against predefined expected values.
A limited set of real time displays for viewing behavior of data channels is provided. These
include scrolling alphanumeric, EU alphanumeric, and time history trace. The time history
data may be logged to disk for later analysis. In addition, a frame map of all channel values
may be viewed and scrolled.
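The frame-capture limit check described above can be sketched in a few lines. The channel names and limit ranges below are hypothetical, used only to illustrate comparing captured values against predefined expected values:

```python
def check_frame(frame, limits):
    """Compare each captured channel value against predefined (lo, hi)
    limits and return any violations.

    frame:  dict of channel name -> captured value
    limits: dict of channel name -> (lo, hi) expected range
    """
    violations = []
    for chan, value in frame.items():
        # Channels without a limit entry are accepted unconditionally.
        lo, hi = limits.get(chan, (float("-inf"), float("inf")))
        if value < lo:
            violations.append((chan, value, "LOW"))
        elif value > hi:
            violations.append((chan, value, "HIGH"))
    return violations

# Hypothetical captured frame and limit table
frame = {"BUS_VOLTS": 28.4, "OIL_TEMP": 131.0, "RPM": 2450}
limits = {"BUS_VOLTS": (26.0, 30.0), "OIL_TEMP": (0.0, 120.0)}
print(check_frame(frame, limits))  # [('OIL_TEMP', 131.0, 'HIGH')]
```

A real implementation would run this against every captured frame and feed the violation list to the alphanumeric displays.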
If MIL-STD-1553 capability is desired a PC/104 1553 interface card may be added. In bus
monitor mode the user may select an individual command, status, or data word from a
message and capture the data to memory. Individual data words may be observed in real
time as a digital or engineering unit value or as a time history trace. The time history data
may be logged to disk for later analysis. A sequential buffer of message data may be
captured and the entire message or individual words within the message may be viewed. In
simulation mode the unit can function as a Bus Controller or a Remote Terminal. To
operate as a BC the user creates data buffers of command messages. A BC message can
then be selected for output and, if desired, can be looped on for repeated output. In RT
operation, messages can be stored in memory tables to simulate the response given by the
simulated RT to Bus Controller commands. Lists of BC and RT messages used for
simulation and monitoring may be imported from floppy disk or via the Ethernet port. The
messages and data values to be output or captured may be selected from these lists.
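The BC message buffers described above are built from 1553 command words, which MIL-STD-1553 defines as a 16-bit field packing a 5-bit RT address, a transmit/receive bit, a 5-bit subaddress, and a 5-bit word count. A minimal packing sketch:

```python
def command_word(rt_addr, tr, subaddr, word_count):
    """Pack a MIL-STD-1553 command word: 5-bit RT address, T/R bit,
    5-bit subaddress/mode, 5-bit word count (a count of 32 is
    encoded as 00000)."""
    assert 0 <= rt_addr < 32 and tr in (0, 1)
    assert 0 <= subaddr < 32 and 1 <= word_count <= 32
    wc = word_count % 32
    return (rt_addr << 11) | (tr << 10) | (subaddr << 5) | wc

# e.g. command RT 5 to receive 2 data words at subaddress 1
cw = command_word(rt_addr=5, tr=0, subaddr=1, word_count=2)
print(f"{cw:04X}")  # prints 2822
```

A bus-monitor filter of the kind described would compare each received command word against values built this way.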
If ARINC 429/575 bus data must be monitored, another PC/104 card may be added. The
ARINC interface provides one transmit and one receive channel and may be selected to
operate at either 12.5 Kbps or 100 Kbps data rates. In receive mode, a label mask is
applied to the data to allow selection of only labels of interest. The value of individual data
words may be observed in real time as a digital or engineering unit value or as a time
history trace. A sequential buffer of data word values for selected labels may be captured
and viewed. The time history data may be logged to disk for later analysis. In transmit
mode the user can create message buffers which may be selected for one time output or
looped on for repeated output. In addition the user may select specific labels and data
values for repeated output at specified time intervals. Lists of labels and data values used
for simulation and monitoring may be imported and the labels to be output or captured may
be selected from these lists.
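The receive-side label mask described above can be sketched as follows. ARINC 429 labels are conventionally written in octal; for simplicity this sketch assumes the label occupies the low-order octet of each 32-bit word as stored (real hardware transmits the label bits in reversed order, a detail omitted here):

```python
def label_filter(words, wanted_labels):
    """Keep only received ARINC 429 words whose 8-bit label
    (assumed here to be the low-order octet) is of interest.

    words:         iterable of 32-bit received words
    wanted_labels: iterable of octal label strings, e.g. "203"
    """
    wanted = {int(label, 8) for label in wanted_labels}
    return [w for w in words if (w & 0xFF) in wanted]

# Hypothetical received words; labels 203 and 310 (octal) are of interest.
rx = [0x60000083, 0x400000C8, 0x200000FF]
kept = label_filter(rx, ["203", "310"])
print([hex(w) for w in kept])
```

The kept words would then be unpacked into data, SSM, and parity fields for display.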
Application software for the RS232/422 port included on the processor board is designed
to provide for capture and simulation of message data in systems utilizing an RS232/422
serial bus for point to point data communications. It may be operated either as a transmitter
or a receiver. The user may define the message formats in terms of identifying headers,
embedded word count values, fixed word count values, word lengths, error check bits, and
message lengths. In monitor mode the user may select messages or specific data words
within the message for capture and display. The value of individual data words may be
observed in real time as a digital or engineering unit value or as a time history trace. The
time history data may be logged to disk for later analysis. A sequential buffer of message
data may be captured and the entire message or individual words within the message may
be viewed. In transmit mode the user can create message buffers which may be selected
for one time output or looped for repeated output.
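The user-defined message formats described above can be sketched with one hypothetical layout: a fixed identifying header, an embedded byte-count field, a payload, and a simple additive check byte. The format itself is invented for illustration; the paper's actual format fields are user-configurable:

```python
HEADER = b"\xAA\x55"  # hypothetical identifying header

def parse_messages(stream):
    """Scan a byte stream for messages of the form
    HEADER | count (1 byte) | payload (count bytes) | checksum (1 byte),
    where checksum is the low byte of the payload sum."""
    msgs, i = [], 0
    while i + 4 <= len(stream):
        if stream[i:i + 2] != HEADER:
            i += 1                     # not a header: slide forward
            continue
        count = stream[i + 2]
        end = i + 3 + count + 1
        if end > len(stream):
            break                      # incomplete message at buffer end
        payload = stream[i + 3:i + 3 + count]
        if sum(payload) & 0xFF == stream[end - 1]:
            msgs.append(bytes(payload))
            i = end
        else:
            i += 1                     # bad check byte: resume search
    return msgs

buf = b"\x00" + HEADER + b"\x03ABC" + bytes([(65 + 66 + 67) & 0xFF]) + b"\xFF"
print(parse_messages(buf))  # [b'ABC']
```

In monitor mode the parsed payloads would feed the word-selection and display logic; in transmit mode the inverse routine would build the buffers.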
The result is a portable test instrument equipped to monitor and display PCM, 1553,
ARINC, and RS232/422 bus data. All of the interface cards mount on the processor card
as a PC/104 module stack approximately 3.5 inches high.
PC/104 modules, each including eight channels of universal signal conditioning per
module, provide the necessary input interfacing for signals from analog sensors. Each
channel has 5 wires available for ±excitation, ±signal, and shield. Screw terminal blocks
are provided on the QuikView I/O panels for connection to sensors. Each channel may be
individually programmed to interface to a thermocouple, RTD, thermistor, strain gauge,
4-20 mA current loop, or voltage input device. Pulsed constant-current or constant-voltage
excitation is provided. Each module has a 16 bit analog to digital converter and can scan the data
channels at 110 channels per second.
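As one concrete example of the per-channel engineering-unit handling, a 4-20 mA current-loop input maps linearly onto the sensor's span. The 0-100 psi range below is an assumed example, not a value from the paper:

```python
def loop_to_eu(milliamps, lo=0.0, hi=100.0):
    """Map a 4-20 mA loop current linearly onto the sensor's
    engineering-unit range [lo, hi] (assumed 0-100 psi here)."""
    return lo + (milliamps - 4.0) / 16.0 * (hi - lo)

print(loop_to_eu(4.0))    # bottom of scale: 0.0
print(loop_to_eu(12.0))   # mid-scale: 50.0
print(loop_to_eu(20.0))   # full scale: 100.0
```

Thermocouple, RTD, and strain-gauge channels would each use their own conversion in the same per-channel fashion.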
Modules may be added to provide programmable counter-timer channels for frequency
conversion or totalization from pulse type sensors like flowmeters or tach sensors.
Additional modules providing parallel or discrete digital inputs or serial inputs may also
be added.
Data can be acquired and viewed in engineering units, in real time or playback, using three
basic display types. An all channel display provides the current value of each active
channel along with a hi/lo limit indicator. Channels can be presented in a scrolling
alphanumeric form with hi/lo limit indicators or as a time history trace with limit
indicators. Archiving the acquired data is done on the internal disk and can be transferred
via floppy disk or over an RS232 or Ethernet port.
The advent of wireless LANs has made available a wide variety of low cost radios
operating in bands at 900 MHz, 2.4 GHz, and 5.8 GHz which can be utilized for low cost
telemetry links in certain applications. These links utilize a variety of modulation
techniques from FSK to spread spectrum and many operate under FCC Part 15 low power
standards and do not require licensing. Communicating over these links is identical to
transmitting over wire using RS232 or Ethernet ports.
As an example, a typical low cost radio currently available operates under FCC Part 15 at
900 MHz and can provide a range of 2000 feet outdoors. A data rate of 64 KBaud can be
utilized with this radio which is quite adequate for many small data acquisition
applications. Its power consumption of only 35 mA at 3 V makes it compatible with a
PC/104 based system.
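A back-of-the-envelope check shows why 64 kbaud is adequate for a small acquisition task, using the 110 channel/s scan rate quoted earlier for one signal-conditioning module and assuming (for illustration) that each 16-bit sample is sent as two 10-bit asynchronous characters:

```python
# Assumed framing: one start bit + 8 data bits + one stop bit per byte.
samples_per_s = 110          # one module's full scan rate
bits_per_sample = 2 * 10     # 16-bit sample sent as two 10-bit characters
link_bps = 64000             # radio link rate

needed = samples_per_s * bits_per_sample
print(needed, "bps needed of", link_bps, "available")
assert needed < link_bps     # ample margin for headers and retries
```

Even several such modules fit comfortably within the link, leaving margin for protocol overhead.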
A small telemetry system for remote data acquisition can be configured by using two
QuikView units and adding a remote access software package. In this configuration, either
QuikView could be operated remotely from the other. Alternately, a version of QuikView
can be provided without a keyboard or front panel and could be used simply as a slave
unit.
This system could be used in a variety of ways. For testing applications requiring limited
transmission ranges such as might be found in testing construction equipment, the slave
unit can be equipped with signal conditioning and transmit the acquired data to the master
unit for storage to disk and quick-look review. Another application would be in an
industrial data logging environment where the slave unit could be set-up and left at the data
collection point to log low rate data over a long time period. Periodically the unit could be
visited and interrogated to dump its data over the telemetry link.
CONCLUSION
The PC/104 standard and the PC/104 modules now being brought to market make it
possible to provide small, lightweight, low-power systems with a level of capability and
ease of programming not previously possible. This capability can be conveniently
packaged, taken to the field, and applied to problems of testing and test support. The
development of PC/104 based test instruments represents one more step in the continuing
drive to achieve better test results in shorter times at less cost.
ACKNOWLEDGMENTS
REFERENCES
ABSTRACT
The Common Airborne Instrumentation System (CAIS) was developed under the auspices
of the Department of Defense to promote standardization, commonality, and
interoperability among flight test instrumentation. The central characteristic of CAIS is a
common suite of equipment used across service boundaries and in many airframe and
weapon systems.
The CAIS system has many advanced capabilities which must be tested during ground
support and system test. There is a need for a common set of low cost, highly capable
ground support hardware and software tools to facilitate these tasks.
The ground support system should combine commonly available PC-based telemetry tools
with unique devices needed for CAIS applications (such as CAIS Bus Emulator, CAIS
Hardware Simulator, etc.). An integrated software suite is imperative to support this
equipment.
A CAIS Ground Support Unit (GSU) has been developed to promote these CAIS goals.
This paper presents the capabilities and features of a PC-based CAIS GSU, emphasizing
those features that are unique to CAIS. Hardware tools developed to provide CAIS Bus
Emulation and CAIS Hardware Simulation are also described.
KEY WORDS
Common Airborne Instrumentation System (CAIS), Ground Support Unit
(GSU), PC Platform, Airborne System Controller (ASC), Data Acquisition Unit (DAU),
Pulse Code Modulation (PCM).
INTRODUCTION
The Department of Defense (DOD), Office of Test and Evaluation (OTE) of the U.S.
Government led the effort to develop the Common Airborne Instrumentation System. This
system is predicated upon designing and building a high speed, advanced suite of data
acquisition equipment which will meet the majority of U.S. Navy, Air Force, and Army test
programs. The CAIS system provides standardization, commonality, and interoperability
among flight test instrumentation.
Operation, test and maintenance of the CAIS system requires specialized hardware and
software tools. Traditionally, a checkout cart or van would be stocked with bit
synchronizers, decommutators, time code readers and generators, MIL-STD-1553
simulators, and possibly strip chart recorders to support the instrumentation system.
“True” system testing could only be done by a few specially trained engineers using
specially designed diagnostic tools. This caused systems to lack flexibility,
maintainability, and serviceability. The goal of the Ground Support Unit was to eliminate
an entire host of telemetry support equipment and provide simple-to-use system
diagnostics and support tools. This set the stage for a better approach using the low-cost,
open-architecture of the Personal Computer (PC).
DEVELOPMENT APPROACH
Hardware Platform
The CAIS GSU has been developed using the latest PC-based technology. The approach
integrates the latest computer platform with low-cost, commercially-available, PC-based
telemetry tools and software. The basic system is then supplemented with unique hardware
and software as required to support unique CAIS applications. The result is an architecture
which can be tailored to specific CAIS applications but avoids the high cost and long
schedules typically encountered with ground support equipment. In addition, the
architecture supports growth as PC-based technology improves.
Several PC-based cards were developed to support unique CAIS requirements. These
cards include such functions as CAIS Bus Emulator, CAIS Master Emulator, and CAIS
DAU Simulator, with the potential for additional cards as the need arises. ISA and PCI
compatibility are used in order to maintain the desired open architecture. These cards can
be configured to meet a wide variety of CAIS requirements.
Integrated Support Software
The GSU was developed for the F-22 Advanced Tactical Fighter. This is a new aircraft
which is required to be heavily instrumented in order to speed up the flight readiness
approval process. Aircraft will be instrumented with over 30 DAUs and 3,000 active
measurements which will require an enormous, high-speed PCM format. There are
multiple instrumented aircraft at a number of flight test facilities. Traditionally, this would
cause significant problems because the database and software resided at only one facility.
This problem has been solved with mobile, highly flexible standalone platforms which
provide complete system support in a user-friendly, quick-to-operate package.
Every member of the F-22 team is able to test and troubleshoot the CAIS system using
easy-to-use tools, without needing to be a CAIS system “expert.” This makes the system
more serviceable and maintainable while accomplishing the task at a greatly reduced cost
to the program.
A GSU configuration was developed to meet the specific needs of the F-22 fighter aircraft.
The system was developed using the concepts presented in this paper. A low-cost,
ruggedized PC chassis was configured with a combination of commercial data acquisition
boards, custom boards for the CAIS requirement, and a combination of commercial and
newly developed software. The F-22 GSU block diagram is shown in figure 1 and figure 2.
The F-22 CAIS GSU is based on the latest Pentium Processor technology and a suite of
commercial data acquisition and processing cards. A mobile, rack-mountable chassis
contains the processor, keyboard, mouse, Active Matrix Color Display, and internal
speaker (for voice playback). Other standard peripheral devices include DRAM memory,
hard disk and floppy disk storage, multiple serial/parallel communication ports, and a
printing port. The processor itself was sized with 11 spare ISA slots and 3 spare PCI slots
to ensure future upgradeability.
One of the key add-on cards is the SBS-9100 [1] bit synchronizer, decommutator, and time
code reader. This bit synchronizer handles data rates up to 16 Mbps, differential or single
ended inputs, and has a fully programmable front-end which can acquire many types of
PCM input coding. The decommutator handles data rates up to 24 Mbps, is compatible
with IRIG-106-93 standards, and is fully programmable. The time code reader accepts
IRIG A, B, or G “AC” inputs and is also fully programmable. Another key card is the
9135 [2] DAC/CVSD 16-channel card. This card provides CVSD voice reproduction, outputs
in either bipolar or unipolar format, multiple output ranges, and is fully programmable. The
third key card is the EXC-1553PC/E3 MIL-STD-1553 Test Simulator. This card provides
BC/RT simulation, RT simulation, BC simulation, and remote monitoring capabilities.
Two special CAIS cards were added to the F-22 GSU to support direct interface with the
CAIS Airborne data acquisition system. The Synchronous Data Link Control (SDLC) card
provides the means of communicating with the CAIS Airborne System Controller (ASC).
The SDLC port is a differential, 125 kbps half duplex serial interface between the GSU
and ASC. This port is used to configure and control all data acquisition aspects of the
CAIS System.
The other special card added to the F-22 GSU is the CBE-850, CAIS Bus Emulator card.
The card supports four key operational roles for the CAIS airborne data acquisition
system:
The card is fully supported with an integrated, Windows-based software program. Each of
the four operational roles is now described in detail as it applies to the F-22 application.
CAIS BUS EMULATION
The main purpose of the CAIS Master Emulator (CME) is to emulate the CAIS Airborne
System Controller (ASC). The majority of the CAIS components are the Data Acquisition
Units (DAUs), which require extensive tests at incoming inspection, in the lab, and on
board the aircraft, down to the channel level. The ASC is too high-priced an item to be
dedicated to DAU channel testing. The CME function of the CBE card is a low cost
solution for testing and verifying some of the CAIS components. In addition, the CME
emulates the complete format structure of the ASC, down to the OP-CODE of every single
instruction. This makes it highly convenient to develop formats for the ASC and run them
on the CME to test and debug the format, without the hassle of wiring up an ASC. In
other words, the CME function turns a PC into a low cost, multi-purpose system controller.
The CME supports programmable bit rate to 5 Mbps maximum, selectable bits per word,
PCM code, CAIS Bus, Format start address, Simultaneous Sample, and Mode Change bit.
Each CAIS Data Acquisition Unit (DAU) contains EEPROM memory which must be
loaded with channel setup and configuration information. The DAUs do not have an
industry standard bus (e.g. RS232) for programming their setup. A DAU can be
programmed in one of two ways: through the ASC, or through the CBE function.
Programming a DAU through the ASC has several drawbacks, namely that it requires an
ASC and that the loading is done through a relatively low speed bus at 125 kbaud. The
CBE allows programming of any DAU directly through the CAIS bus (without an ASC) at
programmable rates up to 5 Mbps. The CBE function uses a Dual Port RAM (DPR)
directly mapped to PC memory over the 16 bit bus. The DPR has 1K x 16 for CAIS
Command and 1K x 16 for CAIS Reply. An internal 10 millisecond “time out” counter is
provided to simplify software when programming EEPROM in Page Mode. The CBE
can also be used to program DAUs through the ASC, at the CAIS bus high rate, using the
ASC’s RS-422 repeater port.
The CBE supports programmable bit rate to 5 Mbps maximum, selectable bits per word,
Burst/continuous mode, CAIS Bus, start/end address, and Interrupt Enable.
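The page-mode programming path through the dual-port RAM, with its 10 millisecond time-out, can be sketched as below. The FakeDPR class and its reply status value are invented stand-ins for illustration; a real driver would access the actual DPR mapped into PC memory:

```python
import time

class FakeDPR:
    """Stand-in for the card's dual-port RAM (1K x 16 command half,
    1K x 16 reply half). Invented for illustration."""
    def __init__(self):
        self.command = [0] * 1024
        self.reply_pending = False
    def write_command(self, addr, word):
        self.command[addr] = word & 0xFFFF
        self.reply_pending = True          # simulated DAU always replies
    def reply_ready(self):
        return self.reply_pending
    def read_reply(self):
        self.reply_pending = False
        return 0x0001                      # simulated "page programmed" status

def program_page(dpr, page_addr, words, timeout_s=0.010):
    """Write one EEPROM page into the command half of the DPR, then
    poll the reply half until the DAU acknowledges or the card's
    10 ms time-out expires."""
    for offset, w in enumerate(words):
        dpr.write_command(page_addr + offset, w)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if dpr.reply_ready():
            return dpr.read_reply()
    raise TimeoutError(f"no DAU reply for page 0x{page_addr:04X}")

status = program_page(FakeDPR(), 0x0100, [0x1234, 0x5678])
print(hex(status))
```

The hardware time-out counter means the host software need not manage its own watchdog for each page.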
CAIS DAU Simulator (CDS)
During system test, if a DAU does not reply to commands from the system controller, or
does not program and verify, the problem can lie in the controller, the wiring, or the
selected DAU. The CDS can be used as a tool to troubleshoot any DAU in the system by
emulating the programming, verification, and proper operation of the DAUs and the ASC
in the system. In addition, verifying proper operation of the ASC would otherwise require
several DAUs. The CDS eases system problem identification by operating as a general
purpose DAU with a programmable DAU address ID. It can be programmed to operate
as the only DAU on the CAIS bus, or as one of many DAUs on the bus.
The CDS has several unique features that allow verification of several modes of operation
of the CAIS bus. The CDS includes an EEPROM which can be interrogated by the ASC
or a program developer to program and verify a DAU. It includes a 2K x 16 Dual Port
RAM (DPR), directly mapped to the PC memory bus. Data written by the PC into the
DPR is retrievable under format control by the CAIS controller. The CDS turns the PC
into an active DAU on the CAIS bus.
PCM Acquisition
The PCM acquisition function of the CBE card allows acquisition of an external PCM
source, or of the local PCM output of the CME function. The PCM function is uniquely
tailored to acquire and filter data compatible with IRIG Chapter 8. Aydin Vector’s ALBUS
and the CAIS AVDAU are two known systems whose 1553 data the PCM acquisition
function can acquire and filter. Most of the data filtering and selection is done at the
hardware level. In the case of IRIG Chapter 8, one can filter out the fill words and select
1553 data based on its Bus ID. Further filtering is done through software to select down
to the 1553 message level and the word level. This function is very useful in testing the
AVDAU unit. This quick-look validation of 1553 data eliminates the need to make a test
tape and have flight test data reduction verify whether the system is operating properly.
Data is selectable from the PCM stream based on an onboard RAM tag. In addition, the
card includes a 12 bit DAC output and a parallel PCM output with tag bits generated by
the RAM tag. The PCM input requires NRZ-L, Bit Clock, and Frame Clock; it operates at
rates up to 16 Mbps and has a programmable internal/external data source.
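The two-stage filtering described above (drop fill words in hardware, then select 1553 traffic by Bus ID) can be sketched as follows. The word layout, fill pattern, and tag field are hypothetical, for illustration only:

```python
FILL = 0xFFFF  # assumed fill-word pattern

def filter_ch8(words, bus_id):
    """First-stage filter on an IRIG-Chapter-8-style stream.

    words:  list of (tag, data) pairs; the tag is assumed to carry
            the 1553 bus ID in its low 3 bits (hypothetical layout)
    bus_id: the bus of interest
    Fill words are dropped, then words from other buses.
    """
    out = []
    for tag, data in words:
        if data == FILL:
            continue                  # hardware would discard fill words
        if tag & 0x7 == bus_id:
            out.append((tag, data))   # keep only the bus of interest
    return out

stream = [(0x1, 0xFFFF), (0x1, 0x2822), (0x2, 0x00AA), (0x1, 0x0003)]
print(filter_ch8(stream, bus_id=1))
```

Software would then apply the further message-level and word-level selection to the surviving words.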
CONCLUSION
All major future U.S. Navy, Air Force, and Army test programs will be required to use the
CAIS Airborne Data System. To operate, test, and maintain this system, unique
CAIS-specific test equipment is essential for efficient use and operation of the system. A
PC-based platform utilizing low cost, commercially available equipment makes the ideal
platform. Special cards which are unique to the CAIS architecture provide direct access to
the inner workings of the CAIS system. An integrated ground support system with
multiple, low-cost cards and software provides an extremely powerful method to setup,
support, and maintain the CAIS instrumentation system. A lightweight, rack-mounted
package allows multiple pieces of support equipment to be integrated into a stand-alone,
user-friendly hardware and software package. A Windows-based, multi-tasking software
package yields an extremely efficient solution to the challenges faced by today’s ground
support engineer.
The CAIS GSU combines advanced Commercial Off-The-Shelf (COTS) equipment with
specialty CAIS equipment to provide a complete CAIS ground support unit.
1. The SBS-9100 is a product from Terametrix Systems International, Inc., Las Cruces, N.M.
2. The 9135 DAC/CVSD is a product from Terametrix Systems International, Inc., Las Cruces, N.M.
ABSTRACT
The front-end system is a gateway that accepts multiple telemetry streams and outputs
time-tagged frame or packet data over a network to workstations in a distributed satellite
control and analysis system. The system also includes a command gateway that accepts
input from a command processor and outputs serial commands to the uplink. The front-
end can be controlled locally or remotely via the network using Simple Network
Management Protocol. Key elements of the front-end system are the Avtec
MONARCH-E PCI-based CCSDS/TDM Telemetry Processor/Simulator board, a
network-based, distributed computing architecture, and the Windows NT operating
system.
The PC-based telemetry and command gateway is useful throughout the lifecycle of a
satellite system. During development, integration, and test, the front-end system can be
used as a test tool in a distributed test environment. During operations, the system is
installed at remote ground stations, with network links back to operations center(s) for
telemetry and command processing and analysis.
KEY WORDS
This paper describes an innovative, PC-based satellite data acquisition and command
front-end which takes advantage of the state-of-the-art in networking, computing, and
software technology. The telemetry front-end is a gateway that accepts multiple telemetry
streams and outputs time-tagged frame or packet data over a network to workstations in a
distributed telemetry analysis system. The system also includes a command gateway that
accepts input from a networked command processor and outputs to the uplink. Key
elements of the front-end system are:
Satellite Control Systems require a front-end component which performs real-time data
acquisition and commanding. These front-end systems have historically been built on
complex, proprietary bus and computing architectures in order to provide real-time
decommutation and parameter processing. These closed architectures eliminate the ability
of these systems to “ride the wave” of continual increases in computing power and make
them difficult to program. As PC-based front-end systems began to evolve, however, they
were limited by outdated bus technology (ISA) and network technology (Ethernet) which
severely limit throughput. With the recent development of the Peripheral Component
Interconnect (PCI) bus and development of telemetry products on that bus, we are now
capable of processing multiple high-speed data streams on the PC. Another enabling
technology is the availability of mainstream, multitasking operating systems for PCs that
are processor independent. This allows us to take advantage of advances in processor
speeds even after the system is developed.
With the reduced cost of high speed (greater than 100 Mbps) networking, the higher speed
of the PCI bus, increases in processor power, and advances in software technology, it is
now possible to create a distributed satellite control system based on COTS technology.
In the front-end system described, application specific hardware is limited to the PCI-
based CCSDS/TDM Telemetry Processor/Simulator board(s), a bit synchronizer (if
required), and IRIG time card. Command processing, decommutation, and parameter
processing are performed in software by workstations which are connected to the front-end
by a high speed network. The system can be controlled locally or remotely via the network
using SNMP. The use of standard network protocols (TCP/IP) and inter-process
communication (IPC) mechanisms (sockets and RPC) facilitate integration with existing
telemetry analysis and command processing systems.
SYSTEM ARCHITECTURE
The front-end system is based on the architecture of a standard PC. A block diagram of
the front-end processor is shown in Figure 1. The PC contains PCI and ISA I/O busses as
well as the standard PC peripherals and I/O ports. The application specific interfaces
include a PCI Frame Synchronizer/Telemetry Simulator with Reed-Solomon
Encoder/Decoder card for each telemetry stream, and a Time Code Processor. All of the
hardware elements in the front-end system are available commercial off the shelf (COTS).
The front-end system acquires the telemetry data streams using PCI-based Frame
Synchronizer/PCM Simulator cards. A block diagram of the PCI card is shown in Figure 2.
Each PCI board receives PCM serial data and clock from a bit synchronizer and outputs
frame data to the PCI bus. The card performs frame synchronization using an adaptive
strategy, serial-to-parallel conversion, and Reed-Solomon decoding at current rates of 25
Mbps (100 Mbps and higher rates will be available in the near future). Frame data with
time-stamp and quality annotation is transferred at 132 MB/sec into the PC memory for
further processing. This board is more like a network interface card than traditional frame
synchronizer / decom cards in that it provides a stream of frame data to the PC rather than
individual raw measurements and tags. Because of this flexible stream architecture, the
card can support both time-division multiplexed (TDM) framed telemetry and
packetized telemetry.
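Because the card delivers whole annotated frames rather than extracted measurements, decommutation becomes ordinary host software. A minimal sketch, with an invented sync pattern, frame length, and word layout (nothing here comes from the actual board interface):

```python
import struct

SYNC = b"\xfa\xf3\x20"  # hypothetical 24-bit sync pattern
FRAME_LEN = 64          # hypothetical minor-frame length in bytes

def decom_minor_frame(frame):
    """Extract illustrative parameters from one minor frame delivered by the card."""
    # Hypothetical layout: 3-byte sync, 1-byte frame counter, then 30 16-bit words.
    assert len(frame) == FRAME_LEN and frame[:3] == SYNC
    counter = frame[3]
    words = struct.unpack(">30H", frame[4:64])
    return {"counter": counter, "bus_voltage_raw": words[0], "temp_raw": words[1]}

# A simulated frame, as the card might DMA it into host memory:
frame = SYNC + bytes([7]) + struct.pack(">30H", *range(30))
print(decom_minor_frame(frame))  # {'counter': 7, 'bus_voltage_raw': 0, 'temp_raw': 1}
```

Because the layout is just data, supporting a new telemetry format means changing a table, not hardware.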
ENABLING TECHNOLOGIES
PCs currently support several processor alternatives in addition to the Intel x86 family,
including the Power PC, DEC Alpha, and MIPS processors. Many PC architectures also
support symmetric multiprocessing. This provides the flexibility to select a CPU subsystem
for the front-end PC which offers the best performance and price for a particular
application.
[Figure 1: Block diagram of the front-end processor. The CPU, cache, and main memory connect through a Host/PCI bridge to the PCI bus, which carries the telemetry cards (clock/data inputs), a SCSI host bus adapter for disk, a LAN adapter (Ethernet or ATM), a graphics adapter with video frame buffer, and an ISA bridge; the ISA expansion bus carries tape, CD-ROM, standard PC I/O, and the time card (IRIG or GPS input).]
The Peripheral Component Interconnect (PCI) bus has been universally accepted as a high
speed I/O bus for both PCs and workstations. The 32-bit, 33 MHz PCI bus provides
transfer rates up to 132 MB/sec, and future 64-bit, 66 MHz PCI implementations will
provide transfer rates up to 528 MB/sec. PCI is a multi-master bus with an arbitration
scheme that is designed to set deterministic limits on bus latency. The high bandwidth and
limited bus latency make PCI an ideal choice for the multi-channel front-end where several
bus masters can be contending for the bus. PCI also offers interrupt sharing and auto-
configuration which alleviate many of the resource conflicts that are present in ISA bus
systems.
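The quoted peak rates follow directly from the bus width and clock, since PCI transfers one bus-width word per clock in burst mode:

```python
# PCI peak transfer rate = bus width (bytes) x clock (transfers per second)
def pci_peak_mb_per_s(bus_bits, clock_mhz):
    return bus_bits // 8 * clock_mhz  # MB/s, one transfer per clock in a burst

print(pci_peak_mb_per_s(32, 33))  # 132 MB/s for 32-bit, 33 MHz PCI
print(pci_peak_mb_per_s(64, 66))  # 528 MB/s for 64-bit, 66 MHz PCI
```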
The overwhelming popularity of PCI has resulted in a wide range of low-cost, high-
performance peripheral adapters for video, networking, and data storage. The front-end
system can include network adapters for Ethernet, Fast Ethernet (100 Mbps), or
Asynchronous Transfer Mode (ATM) depending on the requirements of a particular
application. The front-end system uses a SCSI interface to control the local data storage
media.

[Figure 2: Block diagram of the PCI Frame Synchronizer/Telemetry Simulator card. The serial input channel feeds a correlator and a Reed-Solomon error-correction FPGA through FIFOs into the PCI interface logic; on the serial output channel, a Reed-Solomon encoder FPGA and a frequency synthesizer drive the transmit data and clock.]

The front-end can support logging of multiple telemetry streams at very high rates
by assigning one storage device to each stream. This eliminates the “thrashing” of the disk
that can occur when logging multiple streams to one disk.
The front-end system software uses the Windows NT operating system. Windows NT
supports multitasking with multiple execution threads, symmetric multiprocessing, and
standard IPC mechanisms such as sockets and Distributed Computing Environment
(DCE). Concurrent processing of multiple telemetry streams requires an operating system
that supports fixed priority execution threads. Support for standard IPC mechanisms is
required for seamless communication between tasks on the front-end and tasks on other
platforms. Many COTS applications for software development, data-base programming,
data analysis, and data display are also available for Windows NT.
TELEMETRY GATEWAY
The front-end system can process both TDM framed telemetry and CCSDS
packetized telemetry. For each TDM telemetry stream, the front-end performs minor frame
synchronization, time-stamping, and major frame synchronization. For each CCSDS
packetized telemetry stream, the front-end performs frame synchronization, error detection
and correction, time-stamping, virtual channel sorting, and packet extraction. The front-end
also logs time tagged frame data to disk and multicasts time tagged frame or packet data to
the network.
In the front-end system, frame synchronization, error control, and time tagging are
performed in hardware. The remaining functions such as decommutation and parameter
processing are performed in software by the front-end PC and by other workstations in the
system. This approach differs from traditional front-end systems which perform
decommutation in hardware and parameter processing using proprietary embedded
processors. Processing the telemetry streams in software using a distributed, open
architecture provides the ability to support multiple high rate streams and the flexibility to
support both TDM framed telemetry and packetized telemetry.
The telemetry gateway can distribute entire frames or portions of frames to workstations on
the network. This selective forwarding capability is useful when the front-end is connected
to the processing workstations via a wide-area network connection which may not have
enough bandwidth to transmit the entire telemetry stream in real-time. The remaining frame
data can be transferred from the front-end disk archive to the processing workstations after
the satellite contact is complete.
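The selective-forwarding idea can be sketched as slicing each frame down to the byte ranges of interest before sending it over the constrained link; the slot offsets and UDP transport here are invented for illustration:

```python
import socket

# Hypothetical: forward only selected byte ranges ("slots") of each frame over the WAN.
SELECTED_SLOTS = [(4, 8), (20, 24)]  # (start, end) offsets of parameters of interest

def forward_portion(frame, sock, addr):
    """Send only the selected slots, keeping the WAN link within its bandwidth budget."""
    portion = b"".join(frame[s:e] for s, e in SELECTED_SLOTS)
    sock.sendto(portion, addr)
    return portion

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = bytes(range(64))  # stand-in for one telemetry frame
portion = forward_portion(frame, sock, ("127.0.0.1", 9999))
print(portion.hex())  # 8 of the 64 bytes actually cross the link
```

The unsent bytes remain in the disk archive for post-pass transfer, as described above.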
COMMAND GATEWAY
The front-end system acts as a command gateway in addition to its telemetry gateway
functions. The command gateway receives commands from a networked command
processor. The commands are packetized and encoded as necessary, then sent to the
uplink hardware for transmission to the satellite. The command gateway supports real-
time transfer and store and forward modes of operation. In real time transfer mode, the
front-end system outputs serial commands to the uplink as they are received from the
network. In store and forward mode, the front-end system accepts command script files
from the network. The front-end system outputs serial commands to the uplink based on
the transmit time in the command script file.
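Store-and-forward mode amounts to releasing each command at its scripted transmit time. A minimal sketch, with a made-up script format of transmit-time offset and hex command (the real file format is not specified in the paper):

```python
import time

# Hypothetical script format: "transmit_time_offset_seconds,hex_command" per line.
SCRIPT = """\
0.0,1ACFFC1D
0.2,1ACFFC2E
"""

def run_script(script, uplink, t0=None):
    """Release each command to the uplink at its scripted transmit time."""
    t0 = time.monotonic() if t0 is None else t0
    for line in script.strip().splitlines():
        offset, hexcmd = line.split(",")
        delay = t0 + float(offset) - time.monotonic()
        if delay > 0:
            time.sleep(delay)          # wait for the scripted transmit time
        uplink(bytes.fromhex(hexcmd))  # output the serial command to the uplink

sent = []                  # stand-in uplink: just record what was transmitted
run_script(SCRIPT, sent.append)
print(len(sent), "commands sent")
```

Real-time transfer mode is the degenerate case: every command has an offset of "now".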
The front-end system can be used to support testing as well as telemetry and commanding.
PCM simulation can be used for system self-test and pre-pass checkout. In addition, the
CCSDS/TDM Telemetry Processor/Simulator boards can be software/firmware configured
to support bit error rate testing and data quality monitoring for more rigorous testing.
SYSTEM APPLICATIONS
The PC-based telemetry and command gateway is useful throughout the lifecycle of a
satellite system. During development, integration, and test, the front-end system can be
used as a test tool in a distributed test environment. During operations, the system is
installed at remote ground stations, with network links back to operations center(s) for
telemetry and command processing and analysis.
The use of standard network protocols and IPC mechanisms simplifies the integration of
the front-end system with existing satellite control software and with COTS analysis and
display packages. As long as the analysis and display software can receive data from a
network, it can be used with the front-end system.

[Figure: Distributed deployment. Remote PCs (front-end processor plus disk archive) connect through network hubs and an ATM/Frame Relay WAN to analysis and display workstations.]

This facilitates the use of open-ended
development tools for analysis and display such as LabVIEW, Visual Basic, and
RTWorks. The modular software architecture also decouples the graphical user
interface from the front-end system. The GUI for setup and control can run as a task on the
front-end system or as a task on a remote system.
CONCLUSIONS
A fully networked, remotely operated telemetry and command front-end system provides
flexibility and cost savings to satellite control systems. The front-end system can be
located at a remote ground station, with the satellite operations and telemetry monitoring
and analysis personnel working from the satellite Operations Center. With recent advances
in computing, software, and networking technology, this capability can now be developed
as a PC-based solution. A PCI-based telemetry and command processor provides a solid
foundation for such a system, offering current data rates of 25 Mbps per channel, and
future rates upwards of 100 Mbps. These advances are used to create a PC-based, fully
networked telemetry and command front-end gateway system.
MONARCH-E is a trademark of Avtec Systems, Inc. Windows and Visual Basic are trademarks of
Microsoft Corporation, LabVIEW is a trademark of National Instruments, and RTWorks is a trademark
of Talarian.
Group Telemetry Analysis
Using the
World Wide Web
Jeffrey R. Kalibjian
Lawrence Livermore National Laboratory
Keywords
secure data sharing, world wide web, hypertext transfer protocol, dual
asymmetric key cryptography
Abstract
Today it is not uncommon to have large contractor teams involved in the design and
deployment of even small satellite systems. The larger (and more geographically dispersed)
the team, the more difficult it becomes to efficiently manage the dissemination of
telemetry data for evaluation and analysis. Further complications are introduced if some of
the telemetry data is sensitive. An application is described which can facilitate telemetry
data sharing utilizing the National Information Infrastructure (Internet).
Introduction
The World Wide Web (WWW) has transformed the Internet from a research aid into a
multi-media display case. The WWW is based on the Hypertext Transfer Protocol
(HTTP), an object-oriented, stateless protocol that can be used to build distributed
systems in which data representation can be negotiated. Secure HTTP (S-HTTP) is a
security enhanced version of HTTP that supports application level cryptographic
enhancements (e.g., public key cryptography). Although more commonly used for
multi-media applications, the World Wide Web offers great promise as a “group-ware” or
general data sharing mechanism. Secure HTTP has made it possible to design and
implement Web based applications which can accomplish secure data sharing among a
group of business or research partners.
After reviewing HTTP, Internet security and the World Wide Web, this paper discusses
the design and implementation of a secure Web based data sharing tool. Some design
trade-offs impacting data analysis will also be explored.
HTTP
The Hypertext Transfer Protocol [1] is a very simple communication protocol. A client
makes a TCP/IP (Transmission Control Protocol/Internet Protocol, the underlying
communication protocol used on the Internet) connection to the server. The server will
accept the connection upon which the client will make a document request. The connection
domain and the document requested are contained in a Universal Resource Locator (URL).
The server responds to the request, the client collects the response, and finally the server
terminates the connection to the client. The server then continues to listen for other
requests. A key element of the interaction is that the server will treat each subsequent
request as brand new; that is, it maintains no state. This is contrasted with other Internet
protocols (e.g. File Transfer Protocol, FTP) which do maintain state. Thus, HTTP
interaction amounts to a connection, request, response, and closure.
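The connection, request, response, closure cycle can be demonstrated with raw sockets. The loopback mini-server and canned response below are invented so the exchange runs self-contained:

```python
import socket
import threading

# A tiny canned server so the HTTP exchange can run self-contained on loopback.
def serve_once(srv):
    conn, _ = srv.accept()
    conn.recv(4096)  # read (and ignore) the request
    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<html>hi</html>")
    conn.close()     # server terminates the connection after responding

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

# The client side of the cycle: connection, request, response, closure.
sock = socket.create_connection(srv.getsockname())                    # connection
sock.sendall(b"GET /index.html HTTP/1.0\r\nHost: localhost\r\n\r\n")  # request
response = b""
while chunk := sock.recv(4096):                                       # response
    response += chunk
sock.close()                                                          # closure
print(response.split(b"\r\n", 1)[0])  # the status line
```

Nothing from the first request survives into the next: each request starts the cycle over, which is exactly the statelessness described above.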
A request is basically an action (known more formally as a method) that can be applied to
the entity (an object identified by a Universal Resource Identifier, URI) requested. The
request also identifies the protocol version in use. Common methods include GET
(retrieve the data identified by the URI) and POST (create a new object linked to the
object specified).
A response consists of a status line, which contains a protocol version, a status code, and
its associated text phrase. It is on this line that the server confirms the client request to
“speak” in the requested protocol (HTTP). Optional header fields follow including date,
and originating location. The message section is next in which the Multipurpose Internet
Mail Extension (MIME) [2] content type of the returned data is indicated (typically
text/HTML), as well as the number of characters in the message, followed by a blank line,
and then the message itself.
Typically, HTTP servers and clients return messages making use of the Hyper Text
Markup Language (HTML). HTML [3] can be used to represent formatted text documents,
tables, forms, in-line graphics, and hypertext (linked) information. It forms the basis for
much of the information obtainable from the World Wide Web. Because of the flexibility
of the HTTP protocol, web server and client capabilities are in a continuous state of
evolution, delivering ever-increasing power. An example of this is JAVA [4]. JAVA
is an object oriented programming language developed by Sun Microsystems. Small
programs written in JAVA can be embedded in web pages, so that when the web page is
accessed, the program is downloaded to the client and executed. Such small programs are
called applets.
Key elements of the request/reply paradigm, then, are the concept of negotiated protocol
spoken, as well as utilization of MIME headers to specify message content types.
WWW servers may communicate information about the objects they receive or manipulate via
the Common Gateway Interface specification (CGI) [5]. The server and a CGI program
communicate via command line arguments or environment variables. The CGI programs
themselves can be written in many programming (e.g. C) or scripting (e.g. Perl) languages.
The client-side analog of such a capability is known as a helper application. This is a
stand alone program that is activated by the client browser on detection of a specified
MIME type. Data received by the client is forwarded to the helper application for
processing. Since the helper application is a stand alone program, it cannot use the web
browser window to report or display results of its actions. Instead, it must manage such
capabilities on its own.
Security
When transporting sensitive information over public networks, one must generally have
three capabilities in place to ensure the information will not be compromised. First, there
must be assurance that the information being transported can only be read by the intended
recipient (privacy). The second notion is that of authentication. The recipient must be able
to guarantee that the person he receives data from is truly, "that person." Finally, there
must be a guarantee that the contents of the message have not been altered in the
message's travel from sender to recipient; that is, one must have confidence in the
message integrity.
Dual asymmetric key cryptography [6] can facilitate these security capabilities. Two keys
are generated which have the unique feature that information encrypted with one key, can
only be decrypted using the other. Encryption is the process of “disguising” clear text so it
cannot be understood. One key is kept private, the other public. If Person A wishes to send
a private message to Person B, he may encrypt the message using Person B's public key.
Person A is assured that only Person B's private key can decrypt the message. In order for
Person B to be assured that the message he is receiving is from Person A, Person A may
sign the encrypted message by using his private key. Person B can be assured that only
Person A's public key could decrypt the signature. Person B may be assured the message
he received was not tampered with, if Person A calculates a checksum of the message he
wishes to send before encrypting the message. If the checksum is passed along in the
encrypted (and possibly signed) message, Person B can calculate a checksum on the
decrypted (and possibly authenticated) file by calculating his own checksum and
comparing it with the checksum sent in the message.
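The whole exchange can be sketched with toy RSA-style numbers (tiny primes, no padding, utterly insecure; every value below is invented for illustration only):

```python
import hashlib

# Toy RSA key pairs: (modulus, public exponent, private exponent).
B_N, B_E, B_D = 3233, 17, 2753   # Person B: n = 61 * 53
A_N, A_E, A_D = 3127, 7, 431     # Person A: n = 53 * 59

message = b"telemetry ok"
checksum = hashlib.md5(message).digest()[0]  # integrity check (toy: one byte)

# Privacy: A encrypts with B's PUBLIC key; only B's private key can decrypt.
cipher = [pow(byte, B_E, B_N) for byte in message]
# Authentication: A signs the checksum with A's PRIVATE key;
# only A's public key verifies the signature.
signature = pow(checksum, A_D, A_N)

# B decrypts with his private key, verifies the signature with A's public key,
# and recomputes the checksum to confirm the message was not altered.
plain = bytes(pow(c, B_D, B_N) for c in cipher)
assert plain == message                                  # privacy
assert pow(signature, A_E, A_N) == checksum              # authentication
assert hashlib.md5(plain).digest()[0] == checksum        # integrity
print("private, authenticated, intact")
```

The three assertions correspond one-for-one to the three capabilities named above: privacy, authentication, and message integrity.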
At this point it should be clear that both private and public keys need to be protected.
The need to secure the private key is obvious: one desires that only the key's owner can
read his own private messages. This is usually implemented by password protecting utilization of
the private key on the host system it resides on. Security is needed for the public key so
one can guarantee that no other key may be substituted for their own. This is provided by
having a certifying authority sign the public key (forming what is known as an X.509 [7]
certificate). The signature indicates that the name associated with the public key (in the
certificate) does indeed belong to that person. The certifying authority may require many forms
of identification before signing the certificate (e.g. birth certificate, social security number,
etc).
SSL, S-HTTP
These cryptographic principles are utilized in two specifications which have given the
World Wide Web security; namely, Secure Sockets Layer (SSL) [8] and the Secure Hyper
Text Transfer Protocol (S-HTTP) [9]. SSL is designed to run under the protocol being
used for application communication. Thus, while SSL is most commonly used to support
secure communication under HTTP, it can also be used to effect secure communication
making use of other Internet protocols like FTP. In the SSL communication process, a
client (as usual) contacts the server. The server responds by sending the client its public
key certificate. The client validates the signature on the certificate (assuming it has access
to the certifying authority public key), generates a symmetric (session) encryption key (this
type of key has the property of being able to both encrypt as well as decrypt clear text),
and uses the server’s public key to encrypt the symmetric key, so it may be sent back to
the server. To achieve client authentication, the client would send his certificate back to
the server.
While the SSL specification provides for both client and server authentication (as well as
data privacy and integrity), in its first implementation, as used in Netscape products (and
described above), client authentication was not implemented. The Secure HTTP effort by
CommerceNet (a consortium of high technology companies attempting to bring about more
rapid utilization of the Internet for commerce activities) implemented both client and server
authentication. In the S-HTTP model, security is achieved at the application level; i.e.,
HTTP has been expanded to incorporate security. In some respects this makes S-HTTP
more powerful than SSL in that the negotiation features of HTTP apply to security. Thus,
for instance, any combination of privacy, authentication and data integrity checking may be
specified in a client/server interaction---whereas in the current production versions of SSL
one is forced to always encrypt.
Currently SSL and S-HTTP do not interoperate. Implementations of S-HTTP based clients
and servers in commercial products are few. Implementations of SSL servers have been
much more widespread. SSL based servers and clients which can perform client
authentication are currently under beta-test.
Secure Telemetry Data Sharing
Secure data sharing will occur using the client server paradigm. The server will hold the
files to be shared. Each file has an access control list associated with it, indicating the
individuals who may access each file. The list can be maintained as a simple flat file, or a
more complex structure such as a relational database, if other information regarding the
files to be shared must be maintained (e.g. whether a file accessed should be transmitted
encrypted or unencrypted, etc.).
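A flat-file ACL of the kind described might look like the following sketch; the file format and field names are invented:

```python
# Hypothetical flat-file ACL, one entry per line: "filename:user1,user2:transfer_mode"
ACL_FILE = """\
pass1_telemetry.dat:alice,bob:encrypted
pass2_telemetry.dat:alice:plain
"""

def files_for(user, acl_text):
    """Return the (file, transfer mode) pairs this authenticated user may access."""
    allowed = []
    for line in acl_text.strip().splitlines():
        fname, users, mode = line.split(":")
        if user in users.split(","):
            allowed.append((fname, mode))
    return allowed

print(files_for("bob", ACL_FILE))  # [('pass1_telemetry.dat', 'encrypted')]
```

This is the scan the dispatcher performs before presenting the authenticated user with the list of files he may request.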
Clients and servers are connected to the Internet. Access to the telemetry files is managed
by an SSL or S-HTTP based Web server. A client wishing to access data connects to the
server using his WWW SSL or S-HTTP capable Web browser. Authentication of both the
server and client are necessary before data transactions can occur. The client must know
that the server is authenticated so he can be assured the data he is receiving is "legitimate."
The server must authenticate the client, so he can be assured that he is truly dispensing
data to the individual listed on the access control list.
The CGI programming environments for both SSL and S-HTTP allow for such security
authentication to be passed to a cgi-bin program. Thus, a cgi-bin dispatcher program can
receive the authentication information, and if authentication is indicated, present the client
with a form which allows him to request data. The dispatcher will typically scan all access
control information and present the authenticated user with a list of all possible files he
may access. The user may then decide whether to select all files, or only a subset.
Once the files of interest are selected, the user must indicate a desired return format for the
data (e.g. HTML table, space delimited ASCII file, etc.) and a return MIME type which
will allow the secure client browser’s helper application to detect data returned from the
server. Typically, the helper application will merely collect data and place it in a format so
it might be processed by an On Line Analytical Processing (OLAP) tool on the client
platform.
Advanced Concepts
With the emergence of “plug-ins” (like a helper application, except the plug-in
communicates with the client browser via a programmatic API), the possibility exists to
create an entire telemetry post processing environment that can be accessed through the
web browser. One can also conceive of creating JAVA applets that could perform specific
analytical functions on returned data. These applets could be offered by the server holding
data to be analyzed. Finally, access control functions could be expanded to allow analysts
to place post processed data back onto the server so specified individuals could gain
access to the analysis results.
Conclusions
The technology is now in place for geographically remote development teams to perform
analysis on a sensitive telemetry product using S-HTTP/SSL client/servers on the Internet.
Server CGI programs are the key to managing the data product securely; while client
helper applications assure that data accessed can be stored and presented on the analyst's
local computer.
References
[1] Tim Berners-Lee, Hypertext Transfer Protocol, Draft, June 1993, Internet Engineering
Task Force.
[2] N. Borenstein, N. Freed, MIME (Multipurpose Internet E-mail Extensions) Part One:
Mechanisms for Specifying and Describing the Format of Internet Message Bodies,
September 1993, RFC1521.
[3] Tim Berners-Lee, Daniel Connolly, Hypertext Markup Language (HTML), June 1993,
Internet Engineering Task Force.
[4] Arthur van Hoff, Sami Shaio, Orca Starbuck, Hooked on JAVA, February 1996,
Addison-Wesley Publishing Company.
[6] Bruce Schneier, Applied Cryptography, 1994, John Wiley & Sons, Inc.
[8] Alan O. Freier, Philip Karlton, Paul C. Kocher, The SSL Protocol, Version 3.0, March
1996, Internet Draft.
[9] E. Rescorla, R. Schiffman, The Secure Hypertext Transfer Protocol, May 1996,
Internet Draft.
THE S-BAND COAXIAL WAVEGUIDE
TRACKING FEED FOR ARIA
ABSTRACT
This paper contains a description of a new technology tracking feed and a discussion of
the features which make this feed unique and allow it to perform better than any other
comparable feed. Also included in this report are measured primary antenna patterns,
measured and estimated phase tracking performance and estimated aperture efficiency.
The latter two items were calculated by integrating the measured primary patterns.
KEY WORDS
INTRODUCTION
LJR Inc. has recently completed the design and construction of new tracking feeds that are
being used by the USAF in its ARIA aircraft. These prime focus feeds have been designed
to meet a comprehensive and difficult set of specifications of which an abbreviated list is
shown in Table 1.
The feed is the result of two years of research into the design of tracking feeds using
coaxial waveguides. In this time, LJR has developed software which can not only
accurately predict the radiation characteristics of the feed but can also optimize its design.
DESCRIPTION
The feed is an example of a rho-theta tracker. Its basis is a set of two concentric coaxial
guides and a shaped aperture. The inner guide is used to launch/receive the TE11 mode
which produces the Σ channels. The outer guide receives the TE21 mode which produces
the ∆ channels. Please refer to Figure 1.
The aperture was computer optimized to balance several competing requirements.
Included in the aperture design are two outer choke sections, the radome, a shaped inner
conductor and two inner inductive choke sections. Please refer to the center drawing in
Figure 1.
Four discrete probes are used to generate/receive the two TE11 polarizations
required. These four probes are fed via coaxial lines whose inner conductors are
shaped so as to match the probes into 50 Ω. These lines are connected via cables
(item 5 in Figure 2) to two external 180°, 3 dB hybrids (item 7 in Figure 3). The outputs
of these hybrids are connected to a 90°, 3 dB hybrid (item 6 in Section A-A of Figure
3) which converts the linear polarizations into LHCP and RHCP Σ channels.
A stripline circuit is used to receive the two TE21 polarizations. This is item 24 in Figure 1.
The two outputs from this circuit are connected to the 90°, 3 dB hybrid (item 6 in Figure 1
and Section B-B of Figure 3). This converts the linear polarizations into LHCP and RHCP
∆ channels.
Table 1: Abbreviated specification list.

Item                             Specification                    Enhanced Performance
Frequency                        2.2-2.4 GHz                      range extends past 2.5 GHz
Outputs                          LHCP and RHCP Σ and ∆ Channels
f/D                              optimized for f/D = 0.433        range of ratios can be used
Primary Σ Channel Gain           8 dBi
Primary ∆ Channel Gain           5 dBi
Primary ∆ Channel Null Depth     -30 dB referenced to Σ
Primary Σ Channel Axial Ratio    < 1 dB at boresight
Secondary Σ-∆ Phase Tracking     ±10° over any 100 MHz band       ±5° over whole band
Secondary ∆ Null Depth           < -35 dB referenced to Σ         < -46 dB
Secondary Boresight Shifts       < 0.5°                           < 0.15°
Secondary Σ Channel Gain         > 30 dBi in a 7 foot dish        > 31.1 dBi
Σ Antenna Temperature            < 160 K                          < 98 K
VSWR                             1.5:1
Transmit Power                   50 Watts
Weight                           ≤ 8 pounds                       actual weight is 7 pounds
Size                             diameter ≤ 8.6 in., length ≤ 8.5 in.
Connectors                       N-type
Figure 3: Two Section Views showing the External Hybrids and Their Connections.
DESIGN FEATURES
The ARIA feed has many special features. A list of the important ones is given below.
• The waveguides have been designed so that the phase velocity of the TE11 mode in the
inner guide is the same as that of the TE21 mode in the outer guide. Due to this and the
alignment of the TE11 probes with the TE21 stripline circuit, the waveguide dispersion
does not harm the phase tracking of the Σ and ∆ antenna patterns.
• The use of the TE21 mode naturally gives deep, symmetric and stationary nulls on the
boresight of the feed. All that is required is that the feed be made circularly symmetric
and that the mode be launched cleanly.
• The Σ and ∆ feed circuits are completely separated. In other feed systems (e.g. cross
dipoles) each radiator receives/transmits both the track and sum signals. Thus a
complicated comparator network is required to separate the track signal from the sum
signal. This incurs added resistive losses. The separate circuits of the ARIA feed
avoid this.
• The TE21 stripline circuit containing 8 voltage probes is placed a quarter of a TE21
guide wavelength away from the back short (i.e. at a voltage maximum). The
waveguides have been designed so that this is also a half TE11 guide wavelength from
the backshort (i.e. at a voltage minimum). Thus the stripline circuit does not interfere
with the TE11 operation. Also the TE21 mode is cut off in the inner tube and does not
reach the TE11 probes. As a result of these design features, the two circuits are
completely isolated at the mid-frequency and have very little coupling at the frequency
band edges.
• The outer choke rings perform in the same way as those in high efficiency (non-
tracking) ring feeds. Thus, this tracking feed gives almost as much aperture efficiency
as a ring feed.
• The TE11 combining circuits are made from external off the shelf hybrids. These are the
cause of ≈ 75% of the loss in the sum channel. These also limit the amount of power
that can be transmitted by the feed. Better noise temperatures or power handling
capability can be attained by simply replacing this “off the shelf” outer circuit without
changing any of the internal circuit.
PRIMARY PATTERNS
Figure 4 contains Σ and ∆, E and H plane antenna patterns of the feed at four frequencies
within the band. Please note both the Σ and ∆ channels have almost equal E and H
patterns in the range of the dish (±60° for f/D = 0.433). This is desirable for high aperture
efficiency. Please note also the deep nulls and co-alignment of the nulls in the ∆ patterns.
It is possible to produce a good estimate for the feed's performance within a dish by
knowing the amplitude and phase of its primary patterns. Thus these were measured at 72
angles and 101 frequencies within the 2.2 to 2.5 GHz band. This data was then used in
appropriate formulae to calculate the aperture efficiency and the phase tracking of the Σ
and ∆ secondary patterns. The results of this analysis are shown in Figures 5 and 6.
Included in Figure 6 are phase errors directly measured during the testing of the
secondary patterns inside the ARIA frequency band.
The aperture efficiency does not include any resistive or VSWR losses. It merely
represents the efficiency of the primary pattern illuminating the dish. It includes the effects
of spillover and aperture tapering. Note the sum pattern's estimated efficiency is very high
when one compares it to what can be expected from a ring feed comprised of a circular
waveguide with three rings. This would typically have an optimum aperture efficiency of
77%.
Figure 4: Primary patterns.
Errors in the phase tracking between the Σ and ∆ patterns directly cause angle errors in the
detected target's position. The relationship is one to one, i.e. every degree of tracking error
causes a degree of error in the target's position. Thus, it is important to minimize this error. It
is generally accepted that a tracking system with 10% cross-talk is a good tracker. This
amount of cross-talk corresponds to a phase tracking error of 5.7°. Thus the estimated
phase tracking error of ±4° means that the ARIA feed will provide excellent tracking
performance.
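A small check of the cross-talk figures, assuming cross-talk varies as the sine of the phase tracking error (consistent with the quoted 5.7° to 10% correspondence):

```python
import math

# Assumed model: cross-talk between channels ~ sin(phase tracking error).
def crosstalk(phase_err_deg):
    return math.sin(math.radians(phase_err_deg))

print(round(crosstalk(5.7), 3))  # 0.099, i.e. the ~10% "good tracker" threshold
print(round(crosstalk(4.0), 3))  # 0.07 for the feed's estimated ±4° error
```

At these small angles sin(x) ≈ x in radians, which is why the degree-for-degree rule of thumb above holds.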
Figure 5: Estimated Aperture Efficiency.
CONCLUSION
LJR Inc. has developed a new type of tracking feed with many advantages over other
available feed technologies. The feed was designed by first developing software tools
that could accurately predict its performance and could be used to optimize the feed's
design. The result of this research is a highly reliable and easy-to-construct feed which
meets a tough set of specifications and in many cases greatly exceeds the requirements.
TRACKING RECEIVER NOISE
BANDWIDTH SELECTION
Moises Pedroza
Telemetry Branch
White Sands Missile Range
WSMR, NM
ABSTRACT
The selection of the Intermediate Frequency (IF) bandwidth filter for a data receiver for
processing PCM data is based on using a peak deviation of 0.35 times the bit rate. The
optimum IF bandwidth filter is equal to the bit rate. An IF bandwidth filter of 1.5 times the
bit rate degrades the data by approximately 0.7 dB [1]. The selection of the IF bandwidth
filter for tracking receivers is based on the narrowest “noise bandwidth” that will yield the
best system sensitivity. In some cases the noise bandwidth of the tracking receiver is the
same as the IF bandwidth of the data receiver because it is the same receiver. If this is the
case, the PCM bit rate determines the IF bandwidth and establishes the system sensitivity.
With increasing bit rates and increased transmitter stability characteristics, the IF
bandwidth filter selection criteria for a tracking receiver must include system sensitivity
considerations. The tracking receiver IF bandwidth filter selection criteria should also be
based on the narrowest IF bandwidth that will not cause the tracking errors to be masked
by high bit rates and alter the pedestal dynamic response.
This paper describes selection criteria for a tracking receiver IF bandwidth filter based
on measurements of the tracking error signals versus antenna pedestal dynamic response.
Different IF bandwidth filters for low and high bit rates were used.
KEY WORDS
The criteria for selecting the optimum IF bandwidth filter for a data receiver to process
different modulation schemes and bit rates is well documented [1]. The criteria for selecting
the IF bandwidth filter for a tracking receiver is the lowest noise bandwidth that will allow
the tracking system to acquire and track a target. The tracking receiver IF bandwidth filter
has received little attention because most of the time the tracking receiver and the data
receiver are the same receiver; therefore, the IF bandwidth has to be whatever the data
demands. Also, bit rates have historically been in the kilobit range, with little thought
given to how the choice affects system sensitivity.
With bit rates increasing up to millions of bits per second there are concerns that the
selection of the data receiver IF bandwidth filter will decrease the system sensitivity for
tracking purposes. This means that if the tracking receiver sensitivity is based on the data
requirements it may be possible to lose autotrack earlier than if a narrower IF bandwidth
filter had been selected.
DISCUSSION
White Sands Missile Range, Telemetry Branch, undertook a task to determine the criteria
for selecting the best second-IF bandwidth filter for a tracking receiver for a given bit rate.
The objective was to determine the lowest IF bandwidth filter that will yield the best
system sensitivity without affecting the tracking dynamics of the pedestal. The tests were
motivated by the demand for bandwidth as bit rates increased from 256 kbps
up to 13 Mbps, resulting in wider IF bandwidth requirements. The problem addressed was
to determine the narrowest IF bandwidth filter that will process the tracking error signals
without distortion in an environment where the bit rate is very high.
SYSTEM SENSITIVITY
Table 1.0 lists the noise contributed by different IF bandwidth filters (B2if). It can be seen
that the noise contribution increases considerably as the bandwidth increases.
Table 1.0. Noise contributed by different IF bandwidth filters, based on 10·log10(B2if)
Table 2.0 lists the comparisons of the expected system sensitivities for different IF
bandwidths and the bit rates.
Bit      Filter Criteria        IFBW     Expected Receiver
Rate     1.5 × BR   2 × BR     (Hz)     Sensitivity (dBm)
256k     384k       512k       500k     -117.5
500k     750k       1.0M       1.0M     -115.7
1.0M     1.5M       2.0M       2.0M     -112.72
1.8M     2.7M       3.6M       3.3M     -109.29
3.2M     4.8M       6.4M       6.0M     -106.7
4.0M     6.0M       8.0M       6.0M     -106.7
6.0M     9.0M       12.0M      10.0M    -104.48
10.0M    15.0M      20.0M      15.0M    -102.7
13.0M    19.5M      26.0M      20.0M    -101.47
Table 2.0 Comparisons of expected System Sensitivities for different bit rates
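The bandwidth-dependent term behind Table 2.0 is the thermal noise floor, kTB ≈ −174 dBm/Hz + 10·log10(B). A short sketch (ignoring the receiver noise figure and detection threshold, which shift every row by the same constant) reproduces the roughly 16 dB spread between the 500 kHz and 20 MHz rows:

```python
import math

def noise_floor_dbm(bandwidth_hz: float) -> float:
    """Thermal noise power in a bandwidth B: kTB at 290 K is -174 dBm/Hz."""
    return -174.0 + 10.0 * math.log10(bandwidth_hz)

# Sensitivity penalty for widening the second-IF filter from 500 kHz to 20 MHz.
penalty_db = noise_floor_dbm(20e6) - noise_floor_dbm(500e3)
print(round(penalty_db, 2))  # 16.02
```

The table's span from -117.5 dBm to -101.47 dBm is almost exactly this 16 dB, confirming that the filter bandwidth dominates the sensitivity budget.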
TRACKING SYSTEM
A 15-foot diameter antenna tracking system with a Single Channel Monopulse Feed
Assembly Unit was used to test the pedestal dynamic response and to conduct system
sensitivity measurements. The antenna has a 3-dB beamwidth of 2.1°. The servo
response provides three different acceleration servo bandwidths (LOW, MEDIUM, and
HIGH). The LOW response is 4 degrees/sec/sec, MEDIUM is 8 degrees/sec/sec, and
HIGH is 30 degrees/sec/sec.
The Left Hand Polarization and Right Hand Polarization tracking receivers used IF
bandwidth filters ranging from 100 kHz to 10 MHz. The tracking system was tested for
dynamic responses using LOW, MEDIUM, and HIGH servo bandwidths in a strong and in
a weak signal environment.
TRANSMITTING SOURCE
An S-band transmitting system tuned to 2250.5 MHz and modulated with different PCM
bit rates (NRZ-L, randomized) was located 2000 feet away and oriented towards the
tracking system. Bit rates from 256 kbps to 13 Mbps were selected for the tests.
RESULTS
Table 3.0 shows the results of the tests for different IF bandwidth filters for different bit
rates. Figures 1.0 and 2.0 are printer plots of the error signals for bit rates of 256 kbps
and 13 Mbps. The 256 kbps bit rate was selected for display since it is a very common
bit rate. The 13 Mbps rate was selected since it is the highest bit rate tested at WSMR. The
error signals were monitored at the receiver's AM output.
Table 3.0 Error signals and servo response for different bit rates and IF bandwidths.
The starred (*) IF bandwidth filter is the recommended filter to optimize the system
sensitivity. Table 4.0 shows the improvement by optimizing the IF bandwidth filter.
Table 4.0 Comparisons of expected System Sensitivities for different bit rates
Figure 1.0 Plots of the tracking error signals for IF-BWs of 100 kHz, 300 kHz, and 500 kHz.
Carrier frequency of 2250.5 MHz modulated with 256 kbps.
Figure 2.0 Tracking error signals passing through 100 kHz, 1.5 MHz, and 10 MHz
IF bandwidth filters. Carrier frequency is 2250.5 MHz modulated with 13 Mbps.
CONCLUSIONS
The error signal plots indicate that for a very low IF bandwidth filter (100 kHz) the error
signals are masked in noise. The system will still autotrack with LOW and MEDIUM
servo acceleration bandwidths. The HIGH servo acceleration bandwidth is questionable.
The results of the tests indicate that it is not necessary to use the same second IF
bandwidth filter for tracking purposes that is required for video data. Performing dynamic
tests on the pedestal while using a radiating source and smaller IF bandwidth filters will
assist you in determining the best filter for the given bit rate. For the particular tracking
system used for these tests, the HIGH servo acceleration is very sensitive to noise. The
user must include the expected pedestal dynamics when determining the IF
bandwidth filter. The extra effort in performing pedestal dynamic tests will pay off in
selecting the best parameters for the best system sensitivity.
REFERENCES
ACKNOWLEDGEMENT
Jerry W. Johnston
TYBRIN Corporation
Steve LaPoint
U.S. Army Kwajalein Atoll (USAKA)/
Kwajalein Missile Range (KMR) Safety Division
ABSTRACT
This paper presents the interim results of an effort to corroborate analytic model
predictions of the effects of rocket motor plume on telemetry signal RF propagation. When
space is available, telemetry receiving stations are purposely positioned to be outside the
region of a rocket motor's plume interaction with the RF path; therefore, little historical
data has been available to corroborate model predictions for specific rocket motor types
and altitudes. RF signal strength data was collected during the flight of a HERA target
missile by White Sands Missile Range (WSMR) using a transportable telemetry receiving
site specifically positioned to be within the rocket plume region of influence at
intermediate altitudes. The collected data was analyzed and compared to an RF plume
attenuation model developed for pre-mission predictions. This work was directed by the
US Army Kwajalein Atoll (USAKA)/ Kwajalein Missile Range (KMR) Safety Division.
KEY WORDS
This paper presents the interim results of an effort to corroborate analytic and empirical
model predictions of the effects of rocket motor plume on telemetry signal RF propagation.
RF link margin analysis is extremely important for premission planning and site positioning
of critical range safety telemetry and other safety related RF based support
instrumentation. This is especially true for the support position selection of mobile sensors
in support of hazardous operations where physical space or instrumentation resources are
limited and the potential of rocket motor plume attenuation may degrade range safety
sensors at specified locations. A detailed understanding of the effects of the missile
exhaust plume on RF propagation is important in order to determine the minimum offset
locations for such sensors. Current models tend to be overly conservative in the
assessment of attenuation and tend to overly restrict operational siting selection.
PHYSICAL CONSIDERATIONS
Several sources were reviewed which have developed theoretical plume attenuation
studies and several which have developed empirical data comparisons. A consistent thread
through the analytical studies indicates that the predicted attenuation experienced by a
direct path from the source to the receiving antenna through free electrons of the plume
and the combustion products including aluminum is much greater than that actually
experienced in practice. Where the direct propagation path predicts attenuation in the 60 to
100 dB region, experimental data shows maximum attenuation in the 30 to 60 dB region.
For this reason, it is believed that the RF signal reaching the missile or ground station does
not travel the direct RF path but is predominantly due to diffraction around the plume. Even
when the plume is not directly in the RF path, RF losses may be incurred due to Fresnel
interference of a reflected signal from the plume interfering constructively or destructively
with the direct path. The resultant signal loss is a combination of diffraction and/or Fresnel
interference. This effect has been modeled by treating the plume as an opaque strip with
associated Fresnel diffraction and interference properties. It is noted that the opaque strip
is affected by altitude. Whereas the extent of the plume may be 1 to 1.5 times the exit
nozzle diameter at low altitudes, this may expand to 5 to 6 times the exit nozzle diameter at high
altitudes.
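The diffraction ingredient of the opaque-strip treatment can be sketched with the standard single knife-edge approximation from ITU-R P.526, treating the plume edge as the obstruction; this is an illustrative stand-in, not the TYBRIN model used in this paper:

```python
import math

def knife_edge_loss_db(v: float) -> float:
    """Approximate single knife-edge diffraction loss (ITU-R P.526 form),
    valid for Fresnel-Kirchhoff diffraction parameter v > -0.78."""
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# A grazing path (v = 0) already costs about 6 dB; deeper shadowing (larger v)
# moves the loss toward the 30-60 dB range reported empirically.
print(round(knife_edge_loss_db(0.0), 1))   # 6.0
print(round(knife_edge_loss_db(10.0), 1))  # ~33
```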
The aspect angle is defined as the angle from the missile center line axis to the direct line
of sight of the RF propagation path. The aspect angle is generally determined from a
missile reference point or the center of mass and it may be important to adjust the aspect
angle definition due to the length and diameter effects of the missile, as shown in Figure 2,
if the length, diameter and antenna positions of the configuration are significant. Missile
configurations with high length to diameter ratios tend to be less affected by the need for
the use of the adjusted aspect angles.
DATA COLLECTION
Telemetry data was collected at WSMR during the 24 April 95 HERA Launch from LC-32
using one of the WSMR Transportable Telemetry Acquisition System (TTAS) vans for
purposes of plume effects analysis. The van was located 2.15 miles to the rear of the
launcher site simulating telemetry reception at land limited facilities. The telemetry
reception site parameters are given in Table 1. Two telemetry channels were recorded
during the mission, RF1 and RF2. Channel RF1 contained the safety related critical
telemetry parameters.
The HERA flight configuration consists of a 5 watt transmitter coupled to dual antennas
through a hybrid combiner.
Table 1. Telemetry Site Characteristics
The TTAS uses a radscan feed antenna (vertical and horizontal dipoles) with a gain of 31
dB and noise figure of 1.4 dB based on a noise temperature of 400 K. The dynamic
pointing error of this antenna is approximately 0.5° and the 3 dB beamwidth is 4°. The
antenna feeds a preamplifier which has a gain of 30 dB which in turn provides signals to a
90° hybrid multi-coupler which sums the horizontal and vertical linear polarized signals
into an effective left and right hand circularly polarized signal. Between the preamp and
the hybrid multi-coupler is approximately 30 feet of RG214 cable with an associated loss
of 8 dB. The multi-coupler has a gain of 4 dB and a noise figure of 4 dB. The multi-
coupler feeds into the receiver through a cable with 1 dB of loss. The system sensitivity is
-115 dBm.
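The receive chain above can be checked with Friis's cascade formula; the gains and noise figures below are the ones quoted in the text (a passive loss of L dB is treated as gain −L dB and noise figure L dB), and the result shows the 30 dB preamp pinning the system noise figure near its own 1.4 dB:

```python
import math

def db_to_linear(db: float) -> float:
    return 10.0 ** (db / 10.0)

def cascaded_nf_db(stages):
    """Friis cascade noise figure. stages: list of (gain_dB, noise_figure_dB)."""
    f_total = 0.0
    g_running = 1.0
    for i, (gain_db, nf_db) in enumerate(stages):
        f = db_to_linear(nf_db)
        f_total += f if i == 0 else (f - 1.0) / g_running
        g_running *= db_to_linear(gain_db)
    return 10.0 * math.log10(f_total)

# Preamp, RG214 cable run, 90-degree hybrid multi-coupler, receiver cable.
stages = [(30.0, 1.4), (-8.0, 8.0), (4.0, 4.0), (-1.0, 1.0)]
print(round(cascaded_nf_db(stages), 2))  # 1.45
```

Because the preamp's 1000× gain divides every later contribution, the 8 dB cable run and the 4 dB multi-coupler together add only about 0.05 dB to the front-end noise figure.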
The TTAS telemetry data was provided in the form of analog strip charts with the recorded
RHCP and LHCP received signal levels (RSL) for both RF1 and RF2. These signals are
shown in Figure 2. These charts were digitized and converted from telemetry units to
engineering units for analysis using pre- and post-mission calibrations indicated on the
individual strip charts. Since calibration levels changed over the recording interval, an
average of the pre- and post-mission calibration levels was used for the analysis.
The maximum difference in the calibration levels was noted as:
DATA ANALYSIS
An RF link margin analysis was conducted using the RFLINK program based on the above
telemetry transmitter and receiver parameters. The effects of the plume effluent on the
received telemetry signal were also modeled using the TYBRIN developed plume
attenuation model for the HERA-B and the measured data used to corroborate the model
output. The predicted Received Signal Levels (RSL) are shown in Figure 4 for RF1 RHCP
and RF2 LHCP in comparison with the recorded received signal levels. The RF line of
sight aspect angle relative to the missile center line is plotted in Figure 5 with the predicted
and actual relative RSLs for reference. The link margin analysis used the missile velocity
vector as the basis for the aspect angular determination since data for the full six degree of
freedom model was not available for this analysis. It is noted that the minimum aspect
angle of 4.0° occurred at approximately 75 seconds of flight.
Plume effects are noticeable for aspect angles of less than approximately 9.5 degrees. The
severe attenuation at 40 seconds is attributed to an antenna pattern null and the signal
variations between approximately 70 and 95 seconds are attributed to missile maneuvering
and associated nozzle deflections. These effects are not modeled in this analysis. Burnout
of stage 1 and ignition of stage 2 are clearly seen at 60+ seconds, and burnout of stage 2
results in an approximately 13 dB increase in signal level. These features are indicated in
Figure 6.
The flight azimuth profile relative to the receiving site during the 70 to 95 second period is
shown in Figure 7. The changes in the RF signal during this period can be directly
correlated with motor nozzle motion. The RF attenuation reflects the nozzle
motion during the turn where the nozzle is first pointed toward the receiving site to initiate
the turn, resulting in a large signal loss, and then away to stop rotation and null turn rates,
resulting in near nominal RF signal levels.
Since the minimum aspect angles were indicated at no less than 4.0°, the maximum RF
attenuation on axis could not be determined; however, with the nozzle deflection data
indicated, maximum attenuation values are estimated on the order of 25 dB.
Figure 2- RF1 and RF2 Strip Chart Data
Figure 3- RF1 and LHCP Received Signal Level Comparison
SUMMARY
Future planned activities include modeling the missile antenna pattern and nozzle position
for use with a full six degree of freedom missile dynamics model.
DESIGNING AN ANTENNA/PEDESTAL FOR TRACKING
LEO AND MEO IMAGING SATELLITES
W. C. Turner
Electro-Magnetic Processes, Inc.
Chatsworth, California
ABSTRACT
This paper takes one through the processes followed by a designer when responding to a
specification for an earth terminal. The orbital parameters of Low-Earth Orbiting and
Medium-Earth Orbiting (LEO and MEO) satellites that affect autotracking and pointing of
an antenna are presented. The do’s and don’ts of specifying (or over specifying) the
antenna feed and pedestal size are discussed. The axis velocity and acceleration rates
required of a Y over X and El over AZ type pedestal are developed as a function of
satellite altitude, radio frequency of operation, and ground antenna terminal diameter.
Decision criteria are presented leading to requiring a tilt mechanism or a third axis to cover
direct and near overhead passes using an El over Az pedestal. Finally, the expressions
transforming Y over X configuration position angles to azimuth and elevation axis position
angles are presented.
KEY WORDS
BACKGROUND
Although the normal path of a satellite is elliptical, most imagery satellites are maneuvered
into a low altitude (400 to 800 km) circular orbit. Future global communication satellite
clusters will be put into mid-altitude circular orbits of 800 to 2,000 km. The cluster of GPS
satellites is maintained in precise orbits at an altitude of 20,190 km. Figure 1 shows the
apparent peak overhead velocity of circularly orbiting satellites for satellite altitudes
ranging from 300 to 2600 km. The mathematical expression for this parameter is
developed from Newton's law of universal gravitation, F = mω²(h + RE) = GMm/(h + RE)²,
from which:

VA ≈ 36173.6 / [h(h + RE)^½]     (1)
where VA is the apparent angular velocity (from the observer’s position on the surface of
the Earth) in degrees/second, RE is the earth’s radius in km and h is the altitude of the
satellite in km. This is the peak velocity required of the X-axis of a Y over X pedestal, or
the elevation axis of an El over Az pedestal. Figure 2 shows the crossing velocity of a
satellite vs. altitude with 10 degree X-angle offset. The approximate closed expression for
this parameter from curve fitting is:
VC ≈ 2000 / [h(h + RE)^½]     (2)
This is the peak velocity of the Y (or Cross-Elevation) axis of an X-El/El pedestal. These
benign rates are easily achieved by a Y over X type pedestal, but these pedestals may
not be available, nor affordable if available. Consequently, El over Az type pedestals are
usually specified with mechanical features that assure coverage of the “keyhole” to
accommodate near or direct-overhead passes.
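Equation (1) is the circular-orbit speed √(GM/(h + RE)) seen from directly beneath the satellite, converted to degrees per second; the constant 36173.6 is 57.2958·√398600 for GM = 398600 km³/s². A sketch cross-checking the closed form against first principles (helper names are illustrative):

```python
import math

MU = 398600.0  # Earth's gravitational parameter GM, km^3/s^2
RE = 6378.0    # Earth radius, km

def apparent_overhead_rate_deg_s(h_km: float) -> float:
    """Equation (1): peak angular rate seen by an observer directly
    beneath a circular-orbit satellite at altitude h."""
    return 36173.6 / (h_km * math.sqrt(h_km + RE))

def from_first_principles(h_km: float) -> float:
    v = math.sqrt(MU / (h_km + RE))  # orbital speed, km/s
    return math.degrees(v / h_km)    # angular rate at zenith, deg/s

print(round(apparent_overhead_rate_deg_s(400.0), 2))  # 1.1
print(round(from_first_principles(400.0), 2))         # 1.1
```

At the 400 km lower end of the imaging-satellite band, both forms give about 1.1 deg/s, consistent with the modest elevation-axis rates cited later.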
In practice, the engineer sizes the antenna, then sizes the pedestal to swing the antenna.
Antennas ranging in diameter from 1 to 12 meters and weighing from 25 to 550 kilograms
require 1 of 5 different size pedestals to support and point them. Larger antennas requiring
pedestals with greater than 25 horsepower drive trains are excluded from this discussion.
The antenna can be sized by merely specifying the required Effective Isotropic Radiated
Power (EIRP) and uplink frequency, and the required G/T at a specific elevation angle,
and downlink frequency. If these parameters are not available, provide the following:
In order to compute the induced rates (hence horsepower and weight) of the
antenna/pedestal due to a satellite in a circular orbit, the following information should be
provided:
From these element sets, the satellite trajectory, as well as induced velocity and
acceleration of each axis of the pedestal, can be calculated. For an El over Az pedestal, the
azimuth axis velocity and acceleration become significant when accommodating near or
direct-overhead passes. Since the required peak acceleration is related to the required peak
velocity, the dynamic analysis to establish peak velocity is developed first.
Pedestal Velocity
Frequently it is specified that downlink data flow not be interrupted during a direct or
near-direct overhead satellite pass. Because the free space attenuation and atmospheric
losses are at a minimum when the satellite is directly overhead, some diminution of ground
antenna gain may be acceptable. Figure 3 is a plot of the 3-dB (½ power) and 6-dB
beamwidth loss curves, vs. Df the product of the antenna diameter (in meters) and the
operating frequency (in GHz). A frequently used expression for the half-power beamwidth
of a paraboloidal antenna is:
BW = 70λ/D = 21/(Df)     (3)
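With D in meters and f in GHz, λ(m) = 0.3/f, so 70λ/D collapses to 21/(Df); the two worked cases that follow in the text drop out directly (a sketch):

```python
def half_power_beamwidth_deg(diameter_m: float, freq_ghz: float) -> float:
    """Equation (3): BW = 70*lambda/D = 21/(D*f) degrees, with f in GHz."""
    return 21.0 / (diameter_m * freq_ghz)

print(round(half_power_beamwidth_deg(10.0, 2.0), 2))  # Df = 20 -> 1.05 deg
print(round(half_power_beamwidth_deg(2.0, 15.0), 2))  # Df = 30 -> 0.7 deg
```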
It is seen that for a 10-meter reflector operating at 2 GHz (Df = 20), the half power
beamwidth is approximately one degree, and for a reflector 2-meters in diameter, at
15 GHz (Df = 30), the half-power beamwidth is 0.7 degrees. The 6-dB beamwidth curve
shows that it is wider than the 3-dB beamwidth by a factor of √2. The beamwidth plots
do not extend beyond a Df factor of 150 because with present technology, it is not
financially practical to supply a pedestal that would maintain accurate pointing of the
related narrow antenna beams on a LEO satellite. Also plotted on Figure 3 is the azimuth
velocity K factor which is expressed as,
The antilog of this factor is used in the next plot to show how much the azimuth velocity
must be increased to accommodate near-overhead satellite passes. Figure 4 is a plot of
peak azimuth axis velocity required to negotiate an overhead pass within 3-dB or 6-dB of
the beam peak, vs. satellite altitude with the factor Df as a parameter. The equation used is
Where VA is the apparent velocity vs. satellite height from Figure 1 (equation 1) and BW is
the 3 dB or 6 dB antenna beamwidth vs. the parameter Df from Figure 3.
The curves show the azimuthal velocity required to keep the antenna at an elevation angle
that assures no more than a 3-dB (or 6-dB) loss in received downlink power. These curves
can be used to quickly determine drive motor(s) horsepower (HP), and gear ratios, as well
as whether a pedestal tilt mechanism or a third pedestal axis is required. It should be noted
that speed is directly proportional to horsepower and that for large pedestals, horsepower
of electric motors and solid-state servo amplifiers is limited to 25 or so HP for a dual-drive
system.
Pedestal Acceleration
A good approximation for calculating the peak expected acceleration is to use the
following expression for a crossing vehicle from Reference (1):
θ̈MAX = 0.65 (θ̇MAX)² × 57.3     (6)

where θ̇MAX is in radians/second and θ̈MAX is in degrees/sec/sec; these are the peak rate
and acceleration used in calculating the servo lag errors when computing the antenna
autotracking and pointing accuracies. For a near overhead satellite pass, the peak velocity
of the elevation axis occurs approximately 30 degrees before and after zenith
(Reference 1); while the peak velocity of the azimuth axis occurs at zenith. However, the
peak azimuth axis velocity is 10 to 20 times higher than the peak elevation velocity. The
peak azimuth acceleration occurs at elevation angles above 80 degrees where the torque
induced by wind on the azimuth axis is minimal. The elevation axis acceleration peaks at
zenith and is more than an order of magnitude lower than the peak azimuth acceleration.
The wind-induced torque is negligible on the elevation axis servo at this attitude.
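Equation (6) in code, applied to the 6.5 deg/s peak azimuth rate quoted later for a 400 km pass (an illustrative input, not a case worked in the text):

```python
import math

def peak_accel_deg_s2(peak_rate_deg_s: float) -> float:
    """Equation (6): theta_ddot_max = 0.65 * theta_dot_max^2 * 57.3,
    with theta_dot_max first converted to radians/second."""
    rate_rad = math.radians(peak_rate_deg_s)
    return 0.65 * rate_rad ** 2 * 57.3

print(round(peak_accel_deg_s2(6.5), 2))  # 0.48 deg/sec/sec
```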
The motors and servo amplifiers are sized based on the torque required to accelerate the
motor armature and the antenna mass, and accommodate the torque induced by the
specified peak winds. Peak torque occurs about the azimuth axis when the antenna is
elevated at zero degrees. Peak torque about the elevation axis is developed when the
antenna is elevated at 60 or 120 degrees. This torque is proportional to the square of the
wind velocity and to the cube of the reflector diameter. For massive counterbalanced large
diameter antennas, the wind induced torque is 100 times the torque required to accelerate
the antenna at 6 deg/sec/sec. When the conservation of primary power and motor/servo
size are of concern, the advantage of employing a protective radome becomes obvious.
When the antenna is not radome protected, peak acceleration must be software limited
because of the excessive available torque.
Pedestal Horsepower
In order to minimize HP, TW and θ̇ must be held to the minimum required to operate
successfully. As previously stated, TW is implicitly specified by the peak operating wind
speed and the antenna diameter. Therefore, one must not specify peak operating winds nor
antenna gain (diameter) arbitrarily. Even over-specifying the minimum elevation axis
depression angle affects TW as well as TA because the antenna cannot be as closely
coupled to the elevation axis. The azimuth slew velocity (θ̇) is typically specified much
higher than required for a LEO satellite tracker. Indeed, the peak elevation axis slew
velocity does not need to be higher than 1.5 degrees per second, as shown in Figure 1. As
shown in Figure 4, the azimuth axis velocity becomes unacceptably high when
accommodating a satellite pass through the pedestal “keyhole,” even though allowing a
6-dB loss in received signal level reduces the required peak velocity by approximately 30
percent.
The need for a high azimuth velocity is alleviated by effectively limiting elevation axis
travel to +80 degrees. This is accomplished either by tilting the entire antenna pedestal
±10 degrees or by adding a third (cross-elevation) axis which has a minimum of
±10 degrees of travel. As shown in the lowest curve of Figure 4, the maximum velocity
required by the azimuth axis is only 6.5 deg/sec for satellites orbiting at a 400 km altitude.
EMP has determined that the cost-effective approach is to add a servo-controlled third axis
to the pedestal because it significantly reduces the required horsepower (motor and servo
amplifier size) for the system, while still maintaining the mechanical stiffness of the lower
pedestal. The required peak velocity of the azimuth axis servo/motor and gearbox is
significantly reduced when operation of that axis is limited to LEO satellite passes that
require elevation axis angle of +80 degrees or less. For higher elevation angles, the
pedestal is operated in a Y over X (cross-elevation-over-elevation: X-El/El) configuration
where, as shown in Figures 1 and 2, the axis velocity rates are benign.
For higher predicted elevation angles, the azimuth axis is locked at Acquisition-of-Signal
(AOS) on the horizon, and the system is automatically configured as an X-El/El mount,
with both axes allowed to autotrack, program track, or slave to computed ephemeris
angles. The range of the X-El axis is ±11 degrees and its maximum slew rate is
0.6 degrees/second. The antenna is positioned in the program track mode from ephemeris
angle coordinate pair data generated by, and stored in, the antenna control unit. Program
tracking mode data, generated from ephemeris, is an azimuth and elevation angle
coordinate pair, which can be read at a rate of up to 20 times per second. When the
pedestal is operated in the X-El/El configuration, the antenna control unit makes the
conversions:
When configured as a Y over X pedestal, and in the autotrack mode of operation, the
control unit must make the following conversions to output true Az and El angles for
Display and Slave commands:
AzTrue = AzAOH + arctan(tan Θ / cos Φ)     (10)
CONCLUSION
All too often the supplier of antenna/pedestals is zealous to meet all specification
requirements of a potential customer, even though cost and operational savings could be
attained by establishing a dialogue regarding specific use and intent. It is prudent of the
specifier to demand a peak acceleration and velocity no greater than that required to
position the antenna at the coordinates accommodating the next satellite pass in a timely
manner and in the presence of specified peak winds. Let it be incumbent on the supplier to
propose an antenna/pedestal system that minimizes both initial and future operating costs.
ACKNOWLEDGEMENTS
The author wishes to thank Mr. Charles Chandler of Electro-Magnetic Processes, Inc.
for generating graphics, and to Mr. Ron Potter of Electro-Magnetic Processes, Inc. for
editing and constructive comments.
REFERENCES
(1) Chestnut & Mayer, Servomechanisms and Regulating System Design, Vol. II. New
York, NY: John Wiley & Sons, Inc. 1955. pp. 44-49.
(2) Turner, William C., "Specifying an Antenna/Pedestal for Tracking LEO and MEO
Imaging Satellites," Proceedings of the ETC, Garmisch, Germany, May 1996.
ANTENNA PATTERN EVALUATION FOR LINK
ANALYSIS
Moises Pedroza
Telemetry Branch
White Sands Missile Range
ABSTRACT
The use of high bit rates in the missile testing environment requires that the receiving
telemetry system(s) have the correct signal margin for error-free PCM data. This requirement,
plus the fact that the use of "redundant systems" is no longer considered an optimum
support scenario, has made it necessary to select the minimum number of tracking sites
that will gather the data with the required signal margin. A very basic link analysis can be
made by using the maximum and minimum gain values from the transmitting antenna
pattern. Another way of evaluating the transmitting antenna gain is to base the gain on
the highest-percentile occurrence of the highest gain value.
This paper discusses the mathematical analysis the WSMR Telemetry Branch uses to
determine the signal margin resulting from a radiating source along a nominal trajectory.
The mathematical analysis calculates the missile aspect angles (Theta, Phi, and Alpha) to
the telemetry tracking system that yields the transmitting antenna gain. The gain is
obtained from the Antenna Radiation Distribution Table (ARDT) that is stored in a
computer file. An entire trajectory can be evaluated for signal margin before an actual
flight. The expected signal strength level can be compared to the actual signal strength
level from the flight. This information can be used to evaluate any plume effects.
KEY WORDS
Bit rates, signal margin, aspect angles (Theta, Phi, and Alpha ), link analysis
INTRODUCTION
A link analysis is based on the Friis transmission equation. Most of the time the link
analysis parameters, such as the system's G/T, transmit power, receive antenna gain, and
distance between the source and receive system, are easy to obtain. The main assumption is
that the transmitting antenna gain is uniform around the transmitting source. If the
assumption is true, then a link analysis for a tracking system at a particular site can yield
the expected signal margin. This assumption can lead to disastrous results if the signal
margin is below the minimum signal level required for no errors in the PCM data. Also,
since redundant receive systems are history, a better method of analyzing a tracking site is
necessary. The transmitting antenna gain should be evaluated as a function of the aspect
angles with respect to a proposed tracking site for expected signal margin.
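In logarithmic form the Friis computation is a few dB sums; a minimal sketch in which every numeric value is a placeholder and the ARDT lookup is reduced to a hypothetical scalar:

```python
import math

C = 299792458.0  # speed of light, m/s

def free_space_path_loss_db(dist_m: float, freq_hz: float) -> float:
    return 20.0 * math.log10(4.0 * math.pi * dist_m * freq_hz / C)

def received_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, dist_m, freq_hz):
    """Friis transmission equation in logarithmic (dB) form."""
    return (tx_dbm + tx_gain_dbi + rx_gain_dbi
            - free_space_path_loss_db(dist_m, freq_hz))

# Placeholder example: 5 W (37 dBm) at 2250.5 MHz over 100 km into a 31 dBi
# dish, with the transmit gain taken from the ARDT for the current Theta/Phi.
ardt_gain_dbi = 0.0  # hypothetical table lookup result
print(round(received_power_dbm(37.0, ardt_gain_dbi, 31.0, 100e3, 2250.5e6), 1))
```

Repeating this lookup along the nominal trajectory yields the expected signal margin for each candidate tracking site before flight.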
DISCUSSION
RECEIVE SYSTEM
The telemetry tracking system is usually oriented with respect to true north. The azimuth
angle is zero degrees at true north and rotates clockwise as shown in Figure 1a. The
azimuth and elevation angles from the receive system to the radiating source (for this
paper, the radiating source is a missile) are determined by the distance equation and simple
trigonometry (Eq. 1-2). The coordinate system is such that for a nominal trajectory
described in x,y,z coordinates, the “+y” direction is north and “+x” direction is east.
TRANSMITTING SOURCE
MISSILE ON LAUNCHER
The transmitting source is usually in the same coordinate system as the receive system
while it is on the launcher. See Figure 1b.
Figure 1. (a) TM receiving site coordinate system (azimuth 0° at true north, clockwise);
(b) launcher L and nominal trajectory points t1 through tn.
These parameters are used to determine the “initial missile velocity vector” where the term
in quotes equals zero at the launcher but is used to evaluate the initial Theta (θ) and Phi
(Φ) angles from the missile to the tracking site. Theta and Phi are the two angles needed to
obtain the transmit antenna gain from the Radiation Distribution Table.
MISSILE IN SPACE
The transmitting source is visualized as contained inside an imaginary sphere with its own
coordinate system, consisting of the yaw axis, pitch axis, and roll axis. The additional
information necessary for analyzing the transmitting antenna pattern consists of knowing
the initial -yaw vector orientation and the roll rate. The -yaw vector is used as the
reference start point (0 degrees) for obtaining the position on the imaginary sphere where
the vector from the tracking system “penetrates” the sphere. The roll rate describes how
the radiation source is rotating about its own longitudinal axis. The aspect angles are Phi
(Φ) along the roll plane, Theta (θ) along the yaw plane, and Alpha (α), determined as
180° − θ. The intersection of Theta and Phi on the ARDT yields the transmitting gain. Phi
is measured around the center of gravity of the missile from 0° to 360° in a ccw direction
starting at the -yaw vector tip. Theta is measured from the tip to the tail of the missile from
0° to 180°. See Figure 2b.
Figure 2. (a) Missile on launcher showing QA, QE, and ground range GR; (b) missile
coordinate sphere showing Theta (θ), Phi (φ), the -yaw vector, and the space vector to
the tracker (alpha).
The following calculations yield the aspect angles to obtain the transmitting gain.
1.0 Missile-to-Tracker Vector (SV). This vector is referred to in this paper as the
space vector, where

SV = (Xm − Xt) i + (Ym − Yt) j + (Zm − Zt) k (1-1)

Figure 3 shows this vector as originating at the launcher and directed to a tracking
site. The magnitude is calculated from the space position (Xm, Ym, Zm) and the
tracking system position (Xt, Yt, Zt) using the distance equation.
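Eq. 1-1 and the distance equation can be sketched as follows (the helper name is hypothetical):

```python
import math

def space_vector(missile, tracker):
    """Missile-to-tracker space vector SV of Eq. 1-1 and its magnitude
    from the distance equation."""
    sv = tuple(m - t for m, t in zip(missile, tracker))
    magnitude = math.sqrt(sum(c * c for c in sv))
    return sv, magnitude

# A 3-4-12 right triangle in space: |SV| = 13.
sv, r = space_vector((3.0, 4.0, 12.0), (0.0, 0.0, 0.0))
```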
2.0 Missile Velocity Vector (MV). This vector is (a) referenced to the initial QA and QE
while the missile is on the launcher and (b) obtained from the space position differences
after the missile is in flight at time tn. See Figure 4a.

(a) Missile on launcher. From Figures 2a and 3, calculate MV.

MV = −GR ∗ cos(90 − (360 − QA)) i + GR ∗ sin(90 − (360 − QA)) j + sin(QE) k (2-1)

where GR is the ground range as seen by the missile.
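A direct transcription of Eq. 2-1 might look like the sketch below. Note that the k term is coded exactly as printed, without a range factor; the function name is illustrative.

```python
import math

def launcher_mv(qa_deg, qe_deg, ground_range):
    """Initial missile velocity vector of Eq. 2-1, transcribed as printed."""
    a = math.radians(90.0 - (360.0 - qa_deg))
    i = -ground_range * math.cos(a)
    j = ground_range * math.sin(a)
    k = math.sin(math.radians(qe_deg))  # as printed; no range factor shown
    return (i, j, k)

# QA = 360 deg (due north) and QE = 0 deg gives a vector along +y.
mv = launcher_mv(360.0, 0.0, 1.0)
```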
Figure 3. Missile velocity vector MV and ground range GR at the launcher, and space
vector SV from the launcher to the TM receiving site.
(b) Missile in flight.
After t0 the missile velocity vector is obtained by differencing the space position
coordinates as ∆Xm, ∆Ym, and ∆Zm as shown in Figure 4a .
Figure 4. (a) Missile in flight at space position (xm, ym, zm); (b) roll plane showing the
roll vector, omega (ω), Phi (φ), and the missile-to-tracker vector.
An additional angle, omega (ω), shown in Figure 4b, is obtained from the Theta angle to
project a vector from the center of gravity of the missile to where the tracking space vector
penetrated the sphere. This vector, known as the roll vector, is in the roll plane and is
used to calculate Phi.

3.0 Theta Angle. Theta is the angle between the missile velocity vector and the space
vector:

θ = cos−1 [(MV · SV) / (|MV| |SV|)] (3-1)

4.0 Alpha Angle

α = 180° − θ (4-1)
5.0 Roll Vector
The roll vector (RV) is necessary to obtain the Phi angle. It is located on the roll plane
and calculated from the center of gravity of the missile to the “end of the imaginary
sphere”.
a. Omega is the angle between the roll vector and the space vector from the missile
to the tracker.

ω = 90° − (180° − θ) (5-1)

b. This angle is used to calculate the roll vector coefficients by multiplying them by
the tracker space vector coefficients (Eq. 6-1). See Figure 3.
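The aspect-angle chain can be sketched numerically, assuming (as the tip-to-tail definition implies) that Theta is the angle between the missile velocity vector MV and the space vector SV; alpha and omega then follow Eqs. 4-1 and 5-1.

```python
import math

def aspect_angles(mv, sv):
    """Theta, alpha, and omega in degrees, assuming Theta is the angle
    between the missile velocity vector MV and the space vector SV."""
    dot = sum(a * b for a, b in zip(mv, sv))
    mag = lambda v: math.sqrt(sum(c * c for c in v))
    theta = math.degrees(math.acos(dot / (mag(mv) * mag(sv))))
    alpha = 180.0 - theta            # Eq. 4-1
    omega = 90.0 - (180.0 - theta)   # Eq. 5-1
    return theta, alpha, omega

# Velocity along +x, tracker along +y: Theta = 90 deg, omega = 0 deg.
theta, alpha, omega = aspect_angles((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```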
Contour plots yield complete transmit antenna patterns. The levels are identified as gain
values below the highest value (reference value) of the antenna. They are plotted from 0
degrees to 360 degrees as polar plots identifying a particular “cut” at a given frequency.
These patterns identify deep nulls that might not otherwise be seen. Their major
disadvantage is that they cannot be readily indexed by Theta and Phi and programmed to
evaluate an entire trajectory.
An Antenna Radiation Distribution Table overcomes the above problem by digitizing the
gain values as a function of Theta and Phi. The major problem is that if the incremental
measurements are spaced far apart, nulls could be missed in the evaluation. Normal
measurements are made in increments of 0.5, 2, and 5 degrees. At WSMR, the requirement
is for measurements to be made at 2 degrees.

Figure 5.0 Radiation Distribution Table showing gain as a function of Theta and Phi.
A sample of an antenna gain radiation distribution table is described in the document “IRIG
STANDARD 253-93, MISSILE ANTENNA PATTERN COORDINATE SYSTEM AND
DATA FORMATS”1. Phi (φ) is shown along the horizontal axis starting at 0° up to 360°
in increments of 2°, 5°, or 10°. Phi depicts the angle around the missile. Theta (θ) is shown
along the vertical axis from 0° to 180° in identical increments as Phi. Theta depicts the
angle starting at the tip of the missile to the tail of the missile.
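An ARDT lookup can then be sketched as a nearest-entry table access. The uniform step size and the helper name are assumptions for illustration; a real table would use the measured increments.

```python
def ardt_gain(table, theta, phi, step=2.0):
    """Nearest-entry lookup in an Antenna Radiation Distribution Table.

    table[i][j] holds gain (dB) at Theta = i*step (0-180 deg) and
    Phi = j*step (0-360 deg), following the IRIG 253-93 layout.
    """
    i = int(round(theta / step))                     # Theta row
    j = int(round((phi % 360.0) / step)) % len(table[0])  # Phi column, wraps
    return table[i][j]

# Toy 90-degree-step table: 3 Theta rows x 4 Phi columns.
toy = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
g = ardt_gain(toy, 90.0, 270.0, step=90.0)
```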
For small missiles the Radiation Distribution Table is measured in a controlled
environment such as an anechoic chamber. The missile is oriented such that its tip faces
the transmitting source. The start angles are 0° for Phi and 0° for Theta. The missile is
rotated 360° about its center (roll) axis while Theta is fixed at 0°. Antenna gain
measurements are made at each predetermined increment. After the first set of
measurements, Theta is increased by the predetermined increment and the Phi rotation
repeated. The process continues until the missile has been rotated 180° in Theta.
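The measurement procedure above amounts to the following sweep order; the 5-degree step is illustrative.

```python
def measurement_schedule(step=2.0):
    """Order of anechoic-chamber ARDT measurements: for each fixed Theta,
    roll the missile through 360 deg of Phi, then step Theta."""
    schedule = []
    theta = 0.0
    while theta <= 180.0:        # tip (0 deg) through tail (180 deg)
        phi = 0.0
        while phi < 360.0:       # one full roll at this Theta
            schedule.append((theta, phi))
            phi += step
        theta += step
    return schedule

pts = measurement_schedule(step=5.0)  # 37 Theta rows x 72 Phi cuts
```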
There are occasions when the missile (or an object such as an aircraft) is too large or
awkward for this procedure. In those cases there are different ways to obtain the same
antenna gain values, and the measured Theta and Phi angles are transformed to the
alignment described above so that the computer program does not have to be modified.
RESULTS
At WSMR, plots of the range, aspect angle, and signal margin are made for each station
scheduled to support the missile firing. If the station is acceptable, a post-flight analysis
is made to compare the predicted signal margin with the actual signal received. This
information is used to evaluate any unforeseen problems such as plume effects. Figure 6.0
is an example of a flight where the effect of plume attenuation can be seen.
Figure 6.0 Plume attenuation test showing actual signal strength, predicted
signal strength, and aspect angle (alpha).
CONCLUSIONS
A missile flight should include a very thorough link analysis based on how the missile
maneuvers in space. This information allows the support element to properly select
tracking sites without the need to saturate the area with tracking systems. It also allows the
project (range user) to lower its costs.
REFERENCES
ACKNOWLEDGEMENT
I wish to acknowledge the assistance and patience provided to me by Mr. Jeffrey Elliott
and Mr. Michael Winstead. Their help made the above analysis and paper possible.
DESIGN AND USE
OF
MODERN OPTIMAL RATIO COMBINERS
William M. Lennox
Microdyne Corporation
491 Oak Road, Ocala, FL 34472
ABSTRACT
This paper will discuss the design and use of Optimal Ratio Combiners in modern
telemetry applications. This will include basic design theory, operational setups, and
various types of combiner configurations. The paper will discuss the advantages of pre-
detection vs. post-detection combining. Finally, the paper will discuss modern design
techniques.
KEY WORDS
INTRODUCTION
Telemetry receiving systems are subject to signal fading from several sources:

(1) Extensive maneuvering of an aircraft or missile system, which can cause a null in
its antenna radiation pattern.
(2) Polarization changes in the transmitted signal resulting from extensive
maneuvering of the aircraft or missile.
(3) Exhaust plumes from large jet engines ionize the atmosphere in the general
vicinity of the vehicle, which can cause very rapid changes in the amplitude and
polarization of received signals.
(4) Ionization of the upper atmosphere also can cause rapid fluctuations in the
amplitude and polarization of received signals.
(5) Multipath reception interference due to signal reflections over water, wet ground,
or metal structures.
Diversity combining is a technique used to add two or more receiver outputs together to
improve the accuracy of the data and provide one continuous output from the combiner.
With diversity receiving systems it is unlikely that all channels will fade simultaneously.
When fading is significant, combining can provide a dramatic increase in the system’s
signal-to-noise ratio. Diversity receiving systems can be implemented using polarization
diversity, frequency diversity, space diversity or time diversity.
Although systems can be configured to use one or more diversity combiners, this
discussion will be limited to a two-channel system using one tracking antenna equipped
with two feed elements. Each element provides an RF signal of a different frequency or a
different polarization.
Under the proper conditions this simple combiner can improve the signal-to-noise ratio of
the overall system by up to 3 dB. As shown in Figure 1, a basic combiner consisting of
three summing resistors is connected to the video outputs of the channel 1 and channel 2
receivers. Since each receiver’s video output originates from the same source, under most
conditions it will be coherent: a one volt RMS signal from each receiver adds to two volts
RMS. Most of the noise in the video output is generated within each receiver and is not
coherent; it adds as the square root of the sum of the squares. If each receiver had one volt
RMS of signal and one volt RMS of noise, the output would be two volts RMS of signal
and

√(1² + 1²) = 1.414 volts RMS of noise.

The resulting improvement is

S/N improvement = 20 log [(S1 + S2) / √(N1² + N2²)] = +3 dB
This improvement is true only when each channel has equal signal-to-noise ratios.
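The 3 dB figure follows directly from coherent signal addition and root-sum-square noise addition; a short numeric sketch (function name hypothetical):

```python
import math

def video_sum_improvement(s1, n1, s2, n2):
    """S/N improvement (dB) of a resistive video summer over the better
    single channel: signals add coherently, uncorrelated noise adds RSS."""
    combined = (s1 + s2) / math.sqrt(n1**2 + n2**2)
    best_single = max(s1 / n1, s2 / n2)
    return 20.0 * math.log10(combined / best_single)

# Equal channels (1 V RMS signal, 1 V RMS noise each): about +3 dB.
gain_db = video_sum_improvement(1.0, 1.0, 1.0, 1.0)
```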
This type of combiner does not solve the major problem: fading. The combiner output will
be degraded if the S/N ratio of either channel falls significantly below that of the other
channel, since poor and good signals are added equally. A major disadvantage is that
this type of combiner works only on post-detected (video) signals and must be readjusted
for each modulation format.
The performance of a three resistor summing combiner can be improved by using AGC to
control the ratio of combining, as shown in Figure 2.
The AGC is a measure of the S/N ratio of the receiver and is used to control the ratio of
output from the combiner, so that a substantially weaker signal does not appear at the
output. This improvement helps solve the fading problem but does not provide the
maximum S/N improvement for different input S/N ratios.
A commercial video combiner is not composed of variable resistors for the actual
combining circuit but uses a specific active circuit known as an Optimum Ratio Combining
circuit. This circuit performs per the following equations:

Sc/Nc = (S1 + S2/R) / √(N1² + (N2/R)²)

R = (S1/N1) / (S2/N2)

With Optimal Ratio Combining, the two signals are added so that the poorer channel S/N
ratio is reduced relative to the better channel S/N ratio. This type of combining prevents
degradation of the combined output. The circuit provides an improvement in the combined
S/N ratio when the two input channels have S/N ratios equal or close to one another.
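The combining equations can be exercised numerically to show both properties: the full 3 dB gain with equal channels, and no degradation when one channel fades. Names are illustrative; the weighting ratio R is the ratio of the two channel S/N values.

```python
import math

def optimal_ratio_combine(s1, n1, s2, n2):
    """Combined S/N per the Optimum Ratio Combining equations."""
    r = (s1 / n1) / (s2 / n2)           # R = (S1/N1)/(S2/N2)
    return (s1 + s2 / r) / math.sqrt(n1**2 + (n2 / r)**2)

# Equal channels: combined S/N = sqrt(2), i.e. +3 dB over either channel.
equal = optimal_ratio_combine(1.0, 1.0, 1.0, 1.0)
# One channel faded 40 dB: combined S/N stays ~1.0 (no degradation).
faded = optimal_ratio_combine(1.0, 1.0, 0.01, 1.0)
```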
PRE-DETECTION COMBINERS
The purpose of using AGC in combining is to control the ratio of combining between
channels to prevent the loss of signal during single or alternate channel fading, and to
enhance the combiner improvement ratio.
The accuracy of AGC control of fading signals depends on the rate of signal fading and the
AGC time constants of each receiver. With the advent of highly maneuverable vehicles
and plume effects, the signal fade rates have increased beyond the capabilities of the
receiver AGC systems to track these fades. This results in AGC control voltages that are
out-of-phase or non-existent.
This problem could be reduced by decreasing the AGC time constant. However, most
receiver AGC time constants cannot be changed, either because of the receiver design or
because particular AGC time constants are required for monopulse tracking.
Whenever a signal fades faster than a receiver AGC system can track it, the IF output will
contain that portion of the fading or AM signal not removed by the AGC action. This AM
modulated signal can be input to a new combiner that contains an extremely fast AM
detector which recovers the AM or fading signal and adds this signal to the receiver AGC
signal. This creates a control signal which is a replica of the fade rate at the receiver input
which accurately controls the combining ratio.
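The control-signal formation can be sketched as follows. The one-pole AGC loop and its `alpha` constant are illustrative assumptions, chosen only to show that AGC plus the untracked AM residual reproduces the input fade profile.

```python
def agc_plus_am(fade_db, alpha=0.05):
    """Sketch of AM/AGC weighting: a slow one-pole AGC loop tracks the
    input fade profile (dB); the residual it cannot follow appears as AM
    on the IF and is added back to form the combining control signal."""
    agc, control = 0.0, []
    for level in fade_db:
        agc += alpha * (level - agc)       # slow AGC lags fast fades
        residual_am = level - agc          # fade component left on the IF
        control.append(agc + residual_am)  # replica of the true fade
    return control
```

By construction the control signal tracks the fade exactly, which is the point of the AM/AGC weighting scheme.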
This AM/AGC weighting combiner can be used with receivers with fixed AGC time
constants, or the receiver AGC time constant can be set to almost any value required
for AM tracking data.
COMPARISON OF POST-DETECTION AND PRE-DETECTION COMBINING
Post-detection Advantages
(1) Inexpensive and simple
(2) Combines real-time data
Post-detection Disadvantages
(1) Difficult to get full 3 dB improvement due to demodulator linearity
(2) Combines both channel FM impulse noise at FM threshold
(3) Video levels must be precisely set up prior to each mission
(4) Demodulator distortion varies considerably from unit to unit
(5) Cannot be used with BPSK signals
Pre-detection Advantages
(1) Provides greater than 3 dB improvement when used in an FM system with
modulation indices greater than 1.
(2) Easy to get full improvement
(3) Easy to down convert for tape recording
(4) Easy to set up and holds for subsequent missions
(5) Uses only one demodulator for recovery
(6) Improves FM threshold and reduces FM impulse noise
Pre-detection Disadvantages
(1) Complex and expensive
COMBINER APPLICATION
This section describes three general types of telemetry combining systems: polarization
diversity, frequency diversity, and space diversity. This section includes a discussion of
two previous papers given on these types of combining.
Polarization diversity combining is the most common type of combining used in today’s
telemetry systems. Most telemetry receiving antennas have two independent outputs
usually left and right-hand circular polarization or horizontal and vertical polarization.
A properly set up pre-detection polarization diversity combiner will allow the receiving
system to perfectly match the polarization of the incoming signal. This type of system
provides a substantial improvement over a single receiver system. This improvement
occurs because most transmitted signals are subject to multiple polarization changes during
tracking passes. Examples include polarization changes due to aircraft maneuvering,
spinning satellites, ionization due to jet engine plume, and multipath transmissions.
Improvements of greater than 3 dB will also occur when using a polarization diversity
combining system due to uncorrelated polarization antenna patterns associated with the
transmitting antenna. In other words the transmitter antenna pattern peaks and nulls often
appear at different aspect angles for different polarizations.
Table 1 shows the improvement of a polarization diversity system over a single-
polarization system vs. percentage of coverage of the sphere around the antenna. The data
show a 3 dB improvement for 50% coverage and a 5 dB improvement for 90% coverage.
The improvement is substantial and highlights the improvements obtained with a
polarization diversity system.
Table 1
Detailed analysis of antenna systems is beyond the scope of this paper. Refer to Benefits
of Polarization Diversity Reception in the reference section of this paper for additional
information.
The problem with using two antennas can be solved if each antenna is driven at a slightly
different frequency, such as 1441.5 and 1452.5 MHz. This scheme was used very
successfully in the system presented in the paper entitled Diversity Techniques for
Omnidirectional Telemetry Coverage of the HiMAT Research Vehicle by Paul F. Harney.
The ground link hardware has improved since this system was implemented, especially in
the area of diversity combiners.
Mr. Harney used four receivers in his downlink system. Two receivers were used to
operate the antenna in the autotrack mode, using right- and left-hand circular polarization
feeds to provide better balance in the signal; their AGC outputs operated the autotrack
antenna system. Two other receivers were used for data, with their AGC time constants
decreased so that their AGC voltages could better track the fast aircraft maneuvers. The
AGC voltage from these receivers was used to control the combining ratio in the old-style
combiners.
With the new type combiner that utilizes AM/AGC control circuitry only two receivers
would be required. These two receivers can have their AGC time constant set low enough
to recover the AM tracking signal used for the autotrack antenna. The new type combiner
will recover the residual AM component from the receivers’ IF, and add this to the
receivers’ AGC level and create a combiner control signal that is a replica of the fade rate
of the antenna input. Figure 4 shows a system utilizing this concept.
Figure 4 shows the system employing both pre and post-detection combining. Pre-
detection combining offers many advantages over video combining in most applications. If
pre-detection combining is to be used, some considerations should be made for the vehicle
transmitters. The main consideration is that they have identical modulation deviation. The
combiner will phase lock the two linear IF outputs together, but if the modulation
spectrum from each transmitter is not identical, they will not combine perfectly and
optimum results will not occur. Figure 5 shows the transmitter setup for the HiMAT vehicle
and Figure 6 shows a proposed transmitter scheme that would give optimum results for
both pre and post-detection combining.
The use of frequency diversity combining techniques has been shown to greatly improve
system performance in some of the most difficult environments, where data recovery and
constant data streams are essential to the system requirements.
With the advent of AM/AGC combiners, these systems can be implemented with the use
of only two receivers where four may have been required in the past, thereby reducing
system costs.
Another application places two antennas such that each sees the transmitting vehicle at
different times and/or from different directions. In this application the combiner serves as
a very fast selector, allowing continuous data collection.
Figure 7 shows a possible ship-board application using two antennas. The forward antenna
is directly behind the missile launch pad. This antenna is omni-directional and has low
gain. The second antenna is located at the rear of the ship where it is unable to see the
missile launch pad due to the super-structure. This is a high gain tracking antenna used to
track the missile once it clears the super-structure. The combiner, located electrically
between the two receiving systems, provides a constant data stream from lift-off through
tracking coverage.
The purpose of the system shown in Figure 8 is to provide the maximum S/N ratio
improvement and also have maximum immunity to signal fading. The system uses two
antennas, four receivers, and three combiners.
This system lends itself nicely to existing systems where the second antenna is the backup
antenna. The third combiner would guarantee continuous data should either the primary or
backup system fail. This system would theoretically give a +6 dB improvement over any
single receiver when all receivers had equal S/N ratios.
REFERENCES

Lennox, William M., “Design and Performance of an Optimal Ratio Combiner Using AGC
and AM Weighting,” Proceedings of the International Telemetering Conference, Volume
17, p. 169, San Diego, California, October 1981

Brennan, D. G., “Linear Diversity Combining Techniques,” Proceedings of the IRE, June
1959

Hill, E. R., “AM/AGC Weighted Predetection Diversity Combining,” Proceedings of the
International Telemetering Conference, Volume 13, pp. 215-238, October 1977

Harney, Paul F., “Diversity Techniques for Omnidirectional Telemetry Coverage of the
HiMAT Research Vehicle,” Proceedings of the International Telemetering Conference,
p. 649, San Diego, California, October 1981

Microdyne 3200-PC Combiner Test Results for STS-4, prepared for Western Space and
Missile Center (AFSC), Vandenberg Air Force Base, California, Technical Note Number
AS 300-N-82-29

Space Diversity Combining Technique, prepared for Western Space and Missile Center
(AFSC), Vandenberg Air Force Base, California, Technical Note Number 0E600-N-84-09

Geen, David W., “A Space Combining Approach to the Multipath Problem,” Proceedings
of the International Telemetering Conference, Volume 25, pp. 765-775, San Diego,
California, October 1989
Figure 1. Simple Post Detection Combiner
Douglas O’Cull
Microdyne Corporation
Aerospace Telemetry Division
Ocala, Florida USA
ABSTRACT
This paper will discuss the design and performance of card level telemetry receivers and
combiners. This will include products that have been designed to operate in compact
computer controlled environments such as VME chassis, VXI chassis and personal
computers using ISA buses. The paper will discuss design considerations required to
overcome limitations of this environment, such as noise and space. The paper will also
discuss the performance of a telemetry receiver and combiner in this environment. This
will include performance test results such as bit error rate test, phase noise measurements
and combiner improvement measurements. Finally, the paper will discuss typical
applications of card level telemetry receivers and combiners.
KEYWORDS
INTRODUCTION
Many telemetry applications today require equipment that is small and portable. This
has been aided by the continual miniaturization of modern electronic components. With
the advent of the personal computer and of single board computers in chassis
environments such as VME and VXI platforms, telemetry systems have naturally
migrated to these platforms. For
several years many components used in telemetry systems have been available for use in
personal computers, VME and VXI environments. However, the two components that
have not been available are the telemetry receiver and combiner. Although some
manufacturers have made telemetry receivers, they have placed limits on the user that
compromised performance of the telemetry system. Microdyne recognized this and
developed a telemetry receiver and combiner that would adhere to the space and
portability requirement but would not limit the functionality of the telemetry system or
compromise the performance.
DESIGN GOALS
The following design goals were set for the telemetry receiver:
The following design goals were set for the telemetry combiner:
- Pre-Demodulator Combining
- Post-Demodulator Combining
- Single Frequency Record Down Converter
- No Larger Than a Double Width 6U VME or AT Style PC Card
DESIGN IMPLEMENTATION
The telemetry receiver design was split into two parts. All RF related functions are
performed in a shielded box to isolate it from noise sources in the environment. In
addition, the RF module is further compartmentalized for isolation between receiver
modules. The output of the RF module is a demodulated video signal that is routed to a
processing printed circuit board. The printed circuit board provides all unit control, video
processing, and pre-demodulator down converting. The discussion of the telemetry receiver
RF module will be based on the block diagram, Figure 1.
Figure 1 VMR-2000 RF Box Block Diagram
The RF signal is routed through an isolator. This is used to provide a good input VSWR.
The signal is then routed to the first mixer for the first down conversion. The signal is then
routed to the second mixer for the second down conversion. Custom ceramic filters were
made for the first and second mixers. This provided superior out-of-band rejection while
using minimal space. The first IF frequency is 450 MHz. This allows the use of smaller
components without compromising the performance of the receiver. The effects of
switching power supply noise on the synthesizers are reduced by double and triple
regulation of the input power. The second IF frequency is dependent on the type of
receiver being built. Narrow band units have a second IF frequency of 20 MHz while
wideband units have a second IF frequency of 70 MHz. The second IF filters are 10 pole
lumped element Gaussian Filters. In order to preserve space, precision 1% components
were used in the IF Filters. This allows the use of small inductors for the IF Filter. The IF
Filters have a maximum bandwidth of 12 MHz for the narrow band receivers and 36 MHz
for the wideband receivers. The final design of the receiver provides 4 IF filters selectable
from 750 kHz to 12 MHz for the narrow band unit. The IF Filter module also provides
automatic/manual gain and AGC Time Constants functions for the receiver. The IF Filter
gain circuitry provides 110 dB of gain and 5 AGC Time Constants. The output of the
second IF Filter is then routed to the demodulator. Units containing an FM demodulator
provide medium and wide discriminators. The medium discriminator is used for IF
bandwidths of 4 MHz or less, with the wide discriminator used for all others. The FM
demodulator is a quadrature style demod which provides wide demodulation capabilities
while being less susceptible to environmental noise. The demodulated video signal is then
routed to the video/control board.
The discussion of the video/control board will be based on the block diagram, Figure 2.
The demodulated video signal is routed to the video processing circuitry. This provides
tuning and deviation meters for the telemetry receiver. In addition, AC or DC video
coupling is performed in this module. The signal is then routed to the video filter module.
The receiver provides 3 active video filters and a video filter bypass. The signal is then
routed to a video amplifier that provides 63 dB of video level adjustments. The signal is
then routed to the front panel as a video output signal. The video/control board also
contains the circuitry for pre-demodulation record down converting. A filtered 20 MHz IF
signal is routed to the video/control board. The down converter contains a local oscillator
and the required mixer and filters to down convert the IF to the user specified record
frequency. AGC linearization is also done on the video/control board. The AGC signal is
routed from the RF module to the video/control board. The AGC voltage is digitized via a
12-bit Analog-to-Digital (A/D) converter. The output of the A/D converter is routed to the
linearization circuitry that contains logic and a linearization look up table. The output of
the linearization circuitry is routed to a 12-bit Digital-to-Analog (D/A) converter. The
output of the D/A converter is summed with the output of an offset D/A converter. The
offset D/A converter provides AGC zeroing functions. The summed output is routed to the
front panel as the linear AGC output. The output is scaled for a 20 dB/volt output that can
be used for a weighting signal for a telemetry combiner or as an indication of received
signal strength. Figure 3 is a plot of typical linear AGC output.
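The AGC linearization chain described above can be sketched as below. The 10 V full scale and the identity lookup table are placeholders, not values from the paper; a real unit would load a calibration table.

```python
def linear_agc_out(agc_volts, lut, vref=10.0, offset_volts=0.0):
    """AGC linearization sketch: 12-bit A/D sample -> lookup table ->
    12-bit D/A, summed with an offset D/A used for AGC zeroing."""
    code = min(4095, max(0, int(agc_volts / vref * 4095)))  # 12-bit A/D
    linearized = lut[code]                                  # LUT entry (12-bit code)
    dac_volts = linearized / 4095 * vref                    # 12-bit D/A
    return dac_volts + offset_volts                         # zeroing offset

# Placeholder identity table: output tracks input within one LSB.
identity_lut = list(range(4096))
out = linear_agc_out(5.0, identity_lut)
```

The linear output, scaled at 20 dB/volt, can then serve directly as the combiner weighting signal.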
The bus interface is also contained on the video/control board. Receiver control is
independent of the bus interface and remains the same for any control environment. The
receiver has an embedded single-chip microcontroller that allows all circuitry to remain the
same with only the bus interface changing. The control bus is routed to interface circuitry
which provides data decoding. The output of the bus interface is routed to a dual port
RAM. This functions as a “mailbox” to pass control and status information between the
receiver and the control bus. The microprocessor places status data into the RAM. When
control information is written to the RAM an interrupt is generated to signal the
microprocessor that new control data is available. The microprocessor then reads the data
and configures the associated receiver module. Status information for the receiver is
obtained by reading the status A/D converter. Status is available for the AGC level, Video
level, Tuning Meter, Deviation Meter and Receiver Lock.
Performance in applications that have severe signal fading or low signal level can be
improved by using a diversity combiner. Diversity combiners sum the video (post-D) or IF
(pre-D) signal from two receivers. The sum of the signals is weighted based on the AGC
voltages from the receivers. The weighted sum means that as the signal quality increases
(more AGC voltage) the combiner uses more of that signal. Since the data input from both
receivers is the same, the signal components add coherently when summed. However, the
noise in each channel is uncorrelated and adds only as the root-sum-square. This
effectively increases the signal-to-noise ratio (SNR) of the receiving system.
A block diagram of the combiner is shown as Figure 4.
The combiner receives the linear IF output, video output and AGC signals from two
receivers. The IF outputs from both receivers are routed to level control circuitry to ensure
that the IF outputs are combined at the same level. The level control circuitry provides
approximately 30 dB of range. AM extraction circuitry is also provided to compensate for
any high speed fading. This fading would not be seen in the receiver AGC due to the AGC
time constant. This signal is summed with the AGC signal from the receivers to generate
the weighting signal for each channel of the combiner. The two linear IFs are then phase
locked to each other. The signals have to be phase locked when summing the two
channels. Out-of-phase channels would cancel the data and decrease the SNR. The phase
locked IFs are then combined by the channel summers using the weighting signals to
determine the amount of each channel to combine. The combined IF is then routed to the
front panel as the Combined IF Output. The two video signals from the receivers can be
combined to give an improvement of the data from each receiver. Theoretical improvement
from a weighted ratio diversity combiner is 3 dB. The user can generally realize
improvements up to 2.5 dB.
PERFORMANCE
Card level receivers provide superior performance in what is typically a bad environment
for receiver products. Typical noise figures for the VMR-2000 product line are 10 dB. The
real test for any receiver product is the Bit-Error-Rate (BER) test. This will predict how
well the receiver will work with a given signal. Figure 5 shows typical BER performance.
This BER data was taken with a 2047-bit pseudo-random data pattern, PCM/FM
modulated at a rate of 2 Mbps with 700 kHz deviation. The receiver had a 2.4 MHz IF
filter selected with a 2 MHz video filter. Data encoding was NRZ-L.
APPLICATION
The typical applications for card level telemetry products are in portable telemetry
systems. In this environment the telemetry receiver and combiner can be placed in the
same chassis as the bit sync and decommutation equipment. This, combined with the
available computer cards for PC, VXI, and VME bus systems, provides an excellent base
for a small portable telemetry system. These systems can often be carried and
deployed with minimal personnel. The card level products are also excellent choices for
small flight line test systems used for pre-mission verification of telemetry transmission
systems.
CONCLUSION
Microdyne has been successful in developing a line of card level telemetry receivers and
combiners for use in small portable telemetry systems without degrading system
performance. The telemetry receivers provide tuning steps of 100 kHz, three IF Filters,
three Video Filters, two FM discriminators, an AM demodulator and a single frequency
record down converter. The receivers make these features available in a VME chassis by
occupying two 6U slots or in an AT personal computer by occupying two AT slots. A
companion combiner is available for use in signal fading environments to increase data
recovery. The combiners provide pre- and post-demodulator combining and record down
converting. The effect of the harsh environment of the VME or PC chassis has been
reduced with the use of shielding and regulation of the card level power inputs.
TDRSS COMPATIBLE
TELEMETRY TRANSMITTER
Greg Rupp
Member Technical Staff
Cincinnati Electronics
Mason, OH 45040
ABSTRACT
An S-band telemetry transmitter has been developed for Expendable Launch Vehicles
(ELVs) that can downlink data through NASA's Tracking and Data Relay Satellite System
(TDRSS). The transmitter operates in the 2200 to 2300 MHz range and provides a number
of unique features to achieve optimum performance in the launch vehicle environment.
KEY WORDS
Traditional methods for downlinking telemetry data from Expendable Launch Vehicles have
required extensive ground station support and often additional downrange support from
ARIA aircraft. The operating costs of this support have become a significant portion of the
total launch costs. Furthermore, because of limitations of current support equipment and
the higher data rates needed, downrange data cannot always be delivered in real time.
Instead, it is stored and used for post-flight analysis only.
THEORY OF OPERATION
System Overview
An overall block diagram of the transmitter is shown in Figure 1. All data, clock and
operate mode commands are applied to the transmitter through RS-422 interfaces. Health
and operational status of the unit are conveyed by several analog and digital telemetry
signals. Input data is processed digitally to accomplish a data format conversion and to
apply concatenated coding to improve link performance.
After digital processing, the I and Q data is converted from TTL level signals to bipolar
level before being filtered and applied to the Vector I & Q Modulator. Premodulation
filtering provides excellent spectral containment properties. Transmit frequency synthesis
is accomplished with an S-Band VCO phase locked to a lower frequency Temperature
Compensated Crystal Controlled Oscillator (TCXO). Shock mounting of the synthesizer
subassembly yields excellent phase noise performance, even under the severe vibration
profiles encountered with launch vehicles.
After being modulated, the carrier is amplified through several GaAs FET Solid State
Power Amplifier (SSPA) stages. The power is then split between two paths and input to a
variable phase shifting network. Following the final SSPA gain stages, the two RF paths
are recombined and the RF power is either all directed out of one of the two output
antenna ports for directional operation, or the power can be evenly divided between the
antenna ports for omni operation. The RF routing mode is determined by a two-bit
command input via the RS-422 interface.

Figure 1. Telemetry Transmitter Block Diagram
This design provides separate clock inputs for the I and Q data, which allows for
asynchronous QPSK operation. This has the advantage of permitting immediate distinction
of the data channels at the ground station demodulator. Digital processing of the input data
is performed in a Field Programmable Gate Array (FPGA) prior to modulation. Within the
FPGA a format conversion from NRZ-L to NRZ-M is applied to prevent inversion
ambiguity when demodulated. In addition, Forward Error Correction (FEC) in the form of
K=7, Rate=1/2 and Periodic Convolutional Interleaving (PCI) are applied to the data to
improve link performance.[1] FEC and PCI can be activated separately for each data
channel through the use of FEC and PCI On/Off commands. Also included in the digital
processing is the capability to switch the transmitter between QPSK and BPSK. BPSK
mode is accomplished through the use of a 2-channel multiplexer which allows the I
channel data to be applied to both the I and Q channel outputs of the FPGA.
The operating status of the unit can be monitored through several analog and digital telemetry outputs.
Frequency Synthesizer
The frequency synthesizer consists of a TCXO operating at 1/64 of the transmit frequency,
multiplied up to the operating frequency with an S-Band VCO and a phase-locked loop circuit.
The unique method of shock mounting the frequency synthesizer provides superior phase
noise performance, even under severe launch vibration profiles. Shock mounting makes it
possible to meet phase noise requirements of less than 3.0 degrees rms for both vehicle
launch and ascent vibration conditions. Figures 2 and 3 show the phase noise response
under static and random vibration (12.9 g rms) conditions respectively.
Figure 2. Phase Noise Response Under Static Conditions (Single Sideband Noise Power = -40.0 dBc; RMS Phase Noise = 0.57°)
Figure 3. Phase Noise Response Under Random Vibration (12.9 g rms) Conditions
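For reference, RMS phase noise figures like those quoted with Figures 2 and 3 follow from integrating the single-sideband phase-noise spectrum L(f): the phase variance is twice the one-sided integral of L(f) taken in linear units. The profile below is an illustrative stand-in, not data read from the figures.

```python
# Integrate an SSB phase-noise profile (offset frequency in Hz, level in
# dBc/Hz) to get RMS phase noise in degrees. The example profile is
# illustrative only.
import math

def rms_phase_noise_deg(freqs_hz, l_dbc_hz):
    """Trapezoidal integration of L(f); returns RMS phase noise in degrees."""
    lin = [10 ** (l / 10.0) for l in l_dbc_hz]       # dBc/Hz -> linear ratio
    var = 0.0
    for i in range(len(freqs_hz) - 1):
        df = freqs_hz[i + 1] - freqs_hz[i]
        var += (lin[i] + lin[i + 1]) / 2.0 * df      # one-sided integral
    var *= 2.0                                       # account for both sidebands
    return math.degrees(math.sqrt(var))

# illustrative profile: -70 dBc/Hz at 100 Hz falling to -110 dBc/Hz at 100 kHz
f = [100.0, 1e3, 1e4, 1e5]
l = [-70.0, -85.0, -100.0, -110.0]
print(round(rms_phase_noise_deg(f, l), 2))           # sub-degree RMS
```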
Modulator
Modulation is accomplished directly at the S-Band transmit frequency through the use of a
Vector I & Q Modulator. The modulator provides two bipolar data input channels for
QPSK modulation or it can be used as a BPSK modulator by applying the same data input
to both the I and Q channels simultaneously. A block diagram of the Modulator assembly
is shown in Figure 4. A unique circuit exists in each of the data paths after the digital
processing which allows the TTL level I and Q data from the FPGA to be converted into
bipolar waveforms (swinging positive and negative with zero offset), before they are
applied to the I and Q inputs of the Vector Modulator. This amplitude conversion circuitry
also permits fine adjustment of the amplitude and phase balance of the modulator. This
design provides a carrier suppression greater than 25 dB, an amplitude balance better than
0.25 dB and a phase balance of 5 degrees maximum over temperature, sufficient to meet
the TDRSS operational requirements.[2] Furthermore, this performance is maintained for
all data rates from 128 Kbps to 1.024 Mbps.
In addition to the amplitude conversion circuit, a lowpass LC filter permits the frequency
content of the data in each channel to be limited prior to modulation, thus providing
excellent spectral containment. The lowpass filter design has been optimized to reduce
intersymbol interference. A QPSK modulated output spectrum with 1.024 Mbps per
channel is shown in Figure 5. Two bipolar RF amplifier stages are used to provide the
necessary RF gain from the VCO output to drive the Vector I & Q Modulator, and two
identical amplifier stages are also used between the modulator and the SSPA (Solid State
Power Amplifier) driver stages.
Figure 4. Modulator Assembly Block Diagram (RF in from the frequency synthesizer through gain stages to the Vector Modulator; I and Q inputs fed through positive and negative voltage regulators; RF out to the driver assembly)
Figure 5. QPSK Modulated Output Spectrum (span 10.00 MHz)
Another attractive feature of this design is that it provides the capability to switch RF
power between two output antenna ports without the need for mechanical switching.
Instead, the switching is accomplished by using a 90° coupler to split the power from the
output of the SSPA driver stage into two paths. In each path, an S-Band circulator, PIN
diodes and transmission line lengths are used to shift the relative phase of the signals with
respect to one another. Depending on the state of the RF routing commands, the PIN diodes
are turned on or off in each path, causing a change in that path's relative phase. After these
signals are applied to the final SSPA amplifiers, they are input to a second 90° coupler
where they are recombined. Depending on the phase of the two signals, the total power will
either be divided evenly between the two coupler outputs or it will be directed out one port
only. Figure 6 shows the effect of this switching technique in its basic form, with a phase
shifting network in only one of the two RF paths. As the diagram illustrates, all of
the power can be output from a single RF port or it can be divided evenly between ports,
based on the amount of phase shift applied.
Figure 6. Insertion Loss (dB) at Outputs B and C Versus Phase Shift (degrees) for the Basic Switching Network
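The shape of the Figure 6 curves can be reproduced with an idealized model: for a lossless pair of 90° hybrids with a phase shift phi inserted in one path between them, the two output powers follow sin²(phi/2) and cos²(phi/2), so phi = 0 sends all power out one port and phi = 90° splits it evenly. The ideal model below is an assumption; the measured curves include real insertion loss and the port assignment depends on hybrid orientation.

```python
# Ideal two-hybrid power-routing model: output power fractions versus the
# relative phase shift applied in one path. Losses and imperfect coupling
# are deliberately omitted.
import math

def output_split(phi_deg):
    """Return (P_outB, P_outC) as fractions of input power."""
    phi = math.radians(phi_deg)
    return math.sin(phi / 2) ** 2, math.cos(phi / 2) ** 2

for phi in (0, 90, 180):
    b, c = output_split(phi)
    print(f"phi={phi:3d}  OutB={b:.3f}  OutC={c:.3f}")
```

At phi = 0 all power exits port C, at phi = 90° the split is 0.5/0.5 (the omni mode), and at phi = 180° all power exits port B, matching the crossover behavior visible in Figure 6.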
The SSPA consists of two separate assemblies: a driver power amplifier stage and a final
power amplifier stage. All of the amplifiers are operated as class AB devices to prevent
excessive compression, which would counteract the effects of the premodulation filtering
and eliminate the desired spectral containment properties.

A block diagram of the SSPA Driver assembly is given in Figure 7. The driver assembly
boosts the 1 dBm modulated carrier to approximately 37.1 dBm before delivering it to the
final PA assembly. The SSPA driver is implemented with three GaAs FET amplifiers. The
first amplifier provides 14.0 dB of gain and uses about 160 mWatts of DC power, the
second driver amplifier delivers 12.75 dB and consumes about 1.1 Watts, and the third and
final driver amplifier provides an additional 10.5 dB of gain using 13.5 Watts of DC power.
Isolators are used to provide a consistent matching impedance at the input and output of
each amplifier stage. The isolators yield less variation in performance over changes in
temperature, DC voltages and input drive levels.
Figure 7. SSPA Driver Block Diagram (1 dBm from the modulator; input isolator IL = 0.25 dB to 0.75 dBm; G = 14 dB to 14.75 dBm; IL = 0.25 dB to 14.5 dBm; G = 12.75 dB to 27.25 dBm; IL = 0.25 dB to 27.0 dBm; G = 10.5 dB to 37.5 dBm; output isolator IL = 0.4 dB; 37.1 dBm to the SSPA)
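The driver-level ladder in Figure 7 is just a running sum of stage gains and isolator insertion losses in dB. A quick check of the paper's numbers:

```python
# Cascade budget: accumulate gains (positive dB) and insertion losses
# (negative dB) from a 1 dBm input and report the level after each element.
def cascade(p_in_dbm, stages):
    level = p_in_dbm
    ladder = [level]
    for gain_db in stages:
        level += gain_db
        ladder.append(round(level, 2))
    return ladder

# 1 dBm in; isolator 0.25 dB, G 14 dB, isolator 0.25 dB, G 12.75 dB,
# isolator 0.25 dB, G 10.5 dB, output isolator 0.4 dB
print(cascade(1.0, [-0.25, 14.0, -0.25, 12.75, -0.25, 10.5, -0.4]))
# -> [1.0, 0.75, 14.75, 14.5, 27.25, 27.0, 37.5, 37.1]
```

The final value reproduces the 37.1 dBm delivered to the final PA assembly, confirming the stage gains and losses are self-consistent.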
A block diagram of the final SSPA stages is shown in Figure 8. The RF routing
components mentioned above are also evident in this diagram. The final SSPA consists of
four identical GaAs FET amplifiers, configured as two parallel devices in each of the two
RF routing paths. The final amplifiers provide 9.6 dB of gain in each path and require
about 44.1 Watts of DC power per pair, for a total of 88.2 Watts for all four
amplifiers.
Three separate DC-DC power converter modules and one EMI filter module are used in
this design. Each of the converters draws its input power through the MIL-STD-461
compliant EMI filter. The filter provides approximately 70 dB of attenuation at the 500 to
560 kHz switching frequencies of the converters. The total DC power requirement for the
transmitter is less than 140 Watts and the inrush current is limited to 50 Amps maximum.
Reverse polarity protection diodes are used, and transient voltage protection to 200 volts
is provided.
CONCLUSION
The T-705 telemetry transmitter provides a reliable, cost-efficient alternative to traditional
methods of downlinking data from Expendable Launch Vehicles by utilizing the extra
capacity of NASA's TDRSS network. The unit provides a number of attractive features,
such as high data rate capability, low phase noise and commandable RF antenna port
switching, which make it ideally suited for launch vehicle environments.
ACKNOWLEDGMENTS
The author would like to thank all of the members of the T-705 development team: Dave
Aull, Tom Bridgens, John Buergel, Derek Busboom, Mark Dapore, Jon Johnson, Brian
Lucas, Tonya Sandlin, Dan Titus and Tom Woodruff.
REFERENCES
[1] Sklar, Bernard, “Channel Coding”, Digital Communications, Prentice Hall, Englewood
Cliffs, NJ, 1988, page 347.
[2] “Performance and Design Requirements and Specification for the Second Generation
TDRSS User Transponder”, STDN No. 203.8, Goddard Space Flight Center, Greenbelt,
MD, 1987, page 3-14.
BANDWIDTH LIMITED 320 MBPS TRANSMITTER
Christopher Anderson
Cincinnati Electronics
Mason, Ohio
513-573-6511
ABSTRACT
With every new spacecraft that is designed comes a greater density of information that will
be stored once it is in operation. This, coupled with the desire to reduce the number of
ground stations needed to download this information from the spacecraft, places new
requirements on telemetry transmitters. These new transmitters must be capable of data
rates of 320 Mbps and beyond.
These constraints have been addressed at CE by implementing a DSP technique that pre-
filters a QPSK symbol set to achieve bandwidth-limited 320 Mbps operation. This
implementation operates within the speed range of the radiation-hardened digital
technologies that are currently available and consumes less power than the traditional high-
speed FIR techniques.
KEY WORDS
High Data Rate Space Communications, High Data Rate Telemetry Transmitter,
Cincinnati Electronics (CE), Quaternary Phase-Shift Keying (QPSK), Raised-Cosine (RC)
Filtering, Wide Band Data Transmitter (WBDT)
INTRODUCTION
A discussion of high data rate digital radio for spacecraft telemetry downlinks begins with
a description of the relevant parameters. This includes modulation type, modulation
bandwidth efficiency, power amplifier nonlinearity, and the available allocated channel
bandwidth. To pass the greatest amount of data in the allocated bandwidth requires a
modulation scheme that has the maximum bandwidth efficiency for the lowest
implementation complexity. The modulation scheme that has met these requirements has
been Quaternary Phase Shift Keying (QPSK). Along with the choice of QPSK, a further
reduction in required bandwidth has traditionally been obtained through the use of data
pulse shaping. By shaping the I and Q channel data pulses, a more spectrally narrow
modulation can be achieved with little or no degradation to the integrity of the data.
Traditional spectral limiting in a complex modulator usually consists of some form of finite
impulse response (FIR) filtering that achieves a raised-cosine (RC) response [1]. The RC
response allows for pulse shaping that does not introduce degradation of the modulation
through inter-symbol interference (ISI). At high data rates, RC FIR techniques are
computationally intensive and consume prohibitive amounts of power for space
applications. A further complication of the high-speed FIR implementation is the lack of
radiation-qualified devices that will operate at the necessary speeds.
OQPSK modulation is used in the CE high data rate transmitter. This modulation technique
has flight heritage on many previous missions and is supported by commercial off-the-shelf
ground station QPSK demodulators.
To achieve bandwidth limiting of the OQPSK modulation, I and Q channel pulse shaping
is accomplished through a time-domain waveform synthesis technique. This technique is
implemented digitally and operates without adders or multipliers. By considering a finite
length of data bit sequences, a time-domain representation of the data that is effectively
raised-cosine filtered is generated in a table look-up manner. The data from the table look-
up is then D-to-A converted and presented to a standard vector modulator and upconverter
chain. Time-domain waveform synthesis techniques are well known [2], but until recently
have been used only for low data rate systems.
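The table look-up idea can be sketched compactly: a small sliding window of data bits indexes a table of precomputed, pre-filtered waveform segments, so no adders or multipliers run at the sample rate. The parameters below (8 samples per bit, a 3-bit window, a raised-cosine-like pulse) are illustrative assumptions, not the values used in the CE design.

```python
# Table look-up waveform synthesis sketch: precompute one bit period of
# filtered output for every possible 3-bit neighborhood, then stream segments
# by indexing with a sliding window. The smooth pulse here is a stand-in for
# the actual raised-cosine design.
import math

SAMPLES_PER_BIT = 8
WINDOW_BITS = 3                      # each pulse spans 3 bit periods

def pulse(t):                        # illustrative smooth pulse on [0, 3)
    return 0.5 - 0.5 * math.cos(2 * math.pi * t / WINDOW_BITS)

table = {}
for pattern in range(2 ** WINDOW_BITS):
    bits = [(pattern >> k) & 1 for k in range(WINDOW_BITS)]
    seg = []
    for s in range(SAMPLES_PER_BIT):
        t = s / SAMPLES_PER_BIT
        # superpose the bipolar contributions of the three overlapping pulses
        seg.append(sum((2 * b - 1) * pulse(t + i) for i, b in enumerate(bits)))
    table[pattern] = seg

def synthesize(data):
    """Emit filtered samples by indexing the table with a sliding 3-bit window."""
    padded = [0] + data + [0]
    out = []
    for i in range(len(data)):
        window = padded[i] | (padded[i + 1] << 1) | (padded[i + 2] << 2)
        out.extend(table[window])
    return out

print(len(synthesize([1, 0, 1, 1])))  # 4 bits * 8 samples/bit = 32
```

At run time the only per-sample work is a table read followed by the D-to-A conversion, which is what keeps the approach within the speed range of radiation-hardened logic.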
The waveform synthesis mechanism takes advantage of high speed digital technologies
that have emerged from military programs. These digital technologies have evolved to a
point that they are now viable for commercial space applications in terms of reliability and
of cost. The high speed digital circuits developed by CE meet typical commercial space
environment requirements of temperature, radiation, shock, and vibration. A block diagram
of the pre-shaped high data rate modulator implementation is shown in Figure 1. The
spectral envelope of the transmitter is shown in Figure 2.
FIGURE 1 - MODULATOR BLOCK DIAGRAM (16-bit parallel I and Q data enter through RS-422 receivers/drivers with data/clock sync and optional FEC; each channel passes through a parallel-to-serial converter, V.35 scrambler, DSP waveform table lookup and D/A converter to the vector modulator, upconverter and PA; randomizer on/off command and telemetry discretes, SSR/CSS clocks and a coherent reference clock are also provided)
The interface to the high data rate transmitter consists of multiple RS-422 lines. This
configuration is an efficient method for high data rate interfacing and is compatible with
the latest solid-state recorders. Various forward error correction (FEC) schemes (Reed-
Solomon, Convolutional) are optionally available in the interface to meet specific link
requirements.
FIGURE 2 - TRANSMITTER SPECTRAL ENVELOPE
The method with which the modulation is amplified is as important as the modulation
itself. The power amplifier must leave the amplitude and phase characteristics of the
modulation undisturbed. This is accomplished in the CE family of high data rate
transmitters through circuit design techniques that serve to minimize distortions associated
with the power amplifier. These techniques include the use of constant envelope OQPSK
modulation as well as clever device biasing techniques. As always, careful attention is paid
to the isolation of circuit stages and the effects of spectral re-growth. Much of the inter-
stage isolation in the transmitter is achieved through the use of ferrite circulators.
Currently, power amplification of up to 6 watts is available for the X-band channel from
CE. This output power supports high data rate, full-orbit LEO spacecraft telemetry
downlinks with earth stations down to 3 meters with considerable link margin. A typical
power amplifier configuration is shown in Figure 3.
FIGURE 3 - TYPICAL POWER AMPLIFIER CONFIGURATION (-7 dBm in; alternating gain stages and 0.4 dB insertion losses raise the level through 1.5, 9.1, 16.7, 24.3 and 31.9 dBm to 38.9 dBm, for 37.8 dBm at the X-band channel PA output)
The X-band channel (8.0 to 8.5 GHz) is currently the most popular choice for high data
rate space telemetry downlinks. This is due to the high available bandwidth in the X-band
channel, link considerations associated with the ground stations, and the ease of securing
channel licensing. In this X-band channel, a maximum data rate of approximately 320
Mbps is possible using QPSK modulation.
Of concern to the link designer are the non-interference requirements with NASA’s Deep
Space Network (DSN) in the band from 8.4 to 8.45 GHz. CE guarantees non-interference with the
DSN by implementing a notch filter at those frequencies. This notch is implemented at the
power amplifier output to ensure its effectiveness.
FREQUENCY RE-USE WITH REDUNDANCY
The increasing interest in the commercial imaging satellite business has seen the need for
higher downlink data rates. This has been driven by not only the amount of image data
being captured, but also by the desire to lower system costs by deploying fewer ground
stations. Downlink data rates to 500 Mbps and beyond are now in demand. These data
rates cannot be achieved through one QPSK transmitter in the allocated X-band
bandwidth.
An efficient, low-cost solution to meeting these higher data rates in the X-band channel is
to implement two simultaneous bandwidth limited transmitters in two separate orthogonal
downlink polarizations. A diagram of this frequency re-use technique is given in Figure 4.
This technique uses the 320 Mbps transmitters already in production at CE and achieves a
downlink data rate of 640 Mbps. The 640 Mbps downlink data rate is achieved through
the minimal addition of a data multiplexer and an RF switch.
FIGURE 4 - FREQUENCY RE-USE CONFIGURATION (real-time or SSR inputs feed a data multiplexer under configuration control; three 320 Mbps WBDTs drive an RF switch to the RHCP and LHCP antenna ports)
The configuration shown also allows for a level of redundancy that is usually desired by
the spacecraft manufacturer to ensure data throughput (640 Mbps) in the event of a single
transmitter failure. It should also be noted that a dual failure mode in this configuration
results only in a reduction of data rate to 320 Mbps and not a total mission failure.
CONCLUSION
320 Mbps transmitters are being delivered by Cincinnati Electronics. These transmitters
take advantage of a new class of digital filtering that optimizes QPSK modulation and
achieves the greatest possible throughput in the X-band channel. As desired data rates
continue to rise, these transmitters are playing an important role in the earth imaging and
remote sensing commercial satellite systems.
REFERENCES
Douglas O’Cull
Microdyne Corporation
Aerospace Telemetry Division
Ocala, Florida USA
ABSTRACT
This paper will discuss the use of a specialized telemetry signal simulator for pre-mission
verification of a telemetry receiving system. This will include how to configure tests that
will determine system performance under “real time” conditions such as multipath fading
and Doppler shifting. The paper will analyze a telemetry receiving system and define tests
for each part of the system, including tests for verification of the antenna system and of
the receiver/combiner system. The paper will further discuss how adding PCM simulation
capabilities to the signal simulator will allow testing of frame synchronizers and
decommutation equipment.
KEYWORDS
INTRODUCTION
In the past, providing a simulation device with these capabilities would have required
several pieces of test equipment and significant man hours to configure the system for
testing. This paper will describe the design and applications of a simulation system that
includes capabilities that allow the user to simulate Doppler shift, dynamic fades and PCM
data streams. The simulator provides high output power to allow use as a boresight
transmitter to test the entire receive system. Also, it provides complete remote control to
allow the user to automate pre-mission testing. The simulator provides multiple modulation
modes and a large tuning range that meets or exceeds all requirements of today’s telemetry
systems. Finally, the paper will provide application examples for simulation configurations
that ensure proper operations of the receive telemetry system.
DESIGN CONSIDERATIONS
A signal simulator was designed to meet the requirements for complete telemetry
simulation. The simulator employs a combination of digital and RF design techniques to
provide the simulation capabilities needed for telemetry receive systems. A review of the
block diagram, Appendix 1, will highlight the various features required for system
simulation.
The Digital Waveform Generator serves as the digital modulation source. It is a high speed
discrete digital system clocked at 150 MHz. By using a digital source, the simulator can
easily support multiple modulation formats such as AM, FM and PM. By adding a limited
amount of RAM, it can also provide PCM simulation and support PCM/FM & BPSK
applications. In addition, a pseudo-random number generator has been included which
provides compatibility with industry standard Bit Error Rate Test Sets. The Digital
Waveform Generator also includes a Modulated Numerically Controlled Oscillator
(MNCO) which allows the simulator to provide small tuning increments, less than 1 Hz,
and further serves as a device to provide Doppler shift simulation. A block diagram of the
Digital Waveform Generator is included as Figure 1.
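The sub-hertz tuning and Doppler sweep of the MNCO follow from standard numerically controlled oscillator arithmetic: a phase accumulator advances by a frequency word each clock, and updating that word per sample sweeps the frequency. The 32-bit accumulator width below is an assumption; the 150 MHz clock is the figure quoted above.

```python
# NCO sketch: a phase accumulator stepped by a frequency word. Updating the
# word each sample (the callable below) yields a Doppler-like frequency sweep.
# ACC_BITS = 32 is an assumed accumulator width, not a documented value.
import math

CLOCK_HZ = 150e6
ACC_BITS = 32

def freq_word(f_hz):
    return int(round(f_hz * 2 ** ACC_BITS / CLOCK_HZ))

def nco_samples(freq_profile_hz, n):
    """Generate n sine samples, reading a per-sample frequency (Hz) from a callable."""
    acc, out = 0, []
    for i in range(n):
        acc = (acc + freq_word(freq_profile_hz(i))) % (2 ** ACC_BITS)
        out.append(math.sin(2 * math.pi * acc / 2 ** ACC_BITS))
    return out

# frequency resolution is one accumulator LSB: clock / 2^bits
print(CLOCK_HZ / 2 ** ACC_BITS)      # about 0.035 Hz, well under the 1 Hz quoted
```

With these assumed parameters the tuning step is about 0.035 Hz, consistent with the "less than 1 Hz" increment claimed for the MNCO.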
The simulator RF path is based on a three-synthesizer conversion process. This allows the
simulator to provide a wide range of output frequencies. Currently the design provides RF
tuning from 10 MHz to 600 MHz and from 1400 MHz to 2500 MHz. This allows the
simulator to cover the standard telemetry bands, such as the P, L and S bands, as well as
the command destruct bands. Furthermore, because it provides RF outputs as low
as 10 MHz, it can be used as an IF source. Finally, because all of the local oscillators are
synthesized, the simulator can provide sweep capabilities.
The two RF output channels are then routed through high power RF amplifiers and
digitally controlled attenuators. This allows the RF output power to be varied from -130 to
+20 dBm, which allows the simulator to be used for boresight applications.
All of the simulator's features are remotely controllable using IEEE-488, RS-232 and
RS-422 interfaces. Because the simulator allows complete remote control of all
configuration parameters, the user has the capability to create highly sophisticated
simulation scenarios that will verify complete system performance.
Perhaps the most common uses for the simulator can be found by looking at the Range
Commanders Council’s Test Methods for Telemetry Systems and Subsystems. The
simulator can provide most of the RF generation requirements for tests outlined in this
document. In addition, this simulator will eliminate the need for some additional test
equipment required by this document. Figure 2 shows the simulator configured to perform
receiver/combiner testing.
Figure 2 - Receiver/Combiner Test Configuration
In this case the simulator provides all of the signal requirements to dynamically test the
combiner system. Without the simulator, the user would have to provide an external power
splitter and fade simulator, along with associated equipment, to calibrate them. By
including them in the simulator, testing errors due to improper test setup will be reduced.
The system simulation equipment can be greatly reduced when using the simulator as a
boresight transmitter. Because the simulator has an internal pseudo-random number
generator and a PCM data simulator, the user can perform bit error rate (BER) testing
through the complete downlink without the expense of an additional BER test set at the
boresight location. Additionally, the +20 dBm output power allows the simulator to be used
as a boresight transmitter without the need for an additional RF amplifier. The capability to
perform these tests throughout the complete receive system allows the user to verify its
operation while keeping simulation cost to a minimum. A boresight system also allows
tracking antenna calibration, boresight “snap on” tests and optical camera alignment.
The simulator’s fade and Doppler shift simulation capabilities provide the user with the
devices to fully simulate dynamic vehicles. The Doppler shift parameters are
programmable, allowing the user to specify the Doppler shift range and the rate of change.
This will simulate the frequency shift experienced by a moving vehicle. The simulator can
also be made to dynamically fade the RF output levels. By specifying the fade depth, rate
and phase, the user can simulate any degradation of signal experienced from multipath
fading, vehicle maneuvering and dropouts caused by flame attenuation. These functions
allow the system to be tested from the optimum conditions to the worst conditions.
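The programmable fade described above amounts to a time-varying attenuation of specified depth, rate and phase applied to the output level. The periodic raised-cosine fade shape below is an assumption for illustration; the paper does not specify the simulator's fade waveform.

```python
# Fade-profile sketch: periodic attenuation (dB) of given depth, rate and
# phase, subtracted from the nominal output level. The cosine shape is an
# assumed profile, not the simulator's documented one.
import math

def fade_profile_db(t, depth_db, rate_hz, phase_deg):
    """Attenuation in dB at time t, varying smoothly between 0 and depth_db."""
    x = 2 * math.pi * rate_hz * t + math.radians(phase_deg)
    return depth_db * (1 - math.cos(x)) / 2.0

def faded_level_dbm(level_dbm, t, depth_db, rate_hz, phase_deg):
    return level_dbm - fade_profile_db(t, depth_db, rate_hz, phase_deg)

# a 30 dB fade at 2 Hz on a +20 dBm carrier: deepest point at t = 0.25 s
for t in (0.0, 0.25, 0.5):
    print(round(faded_level_dbm(20.0, t, 30.0, 2.0, 0.0), 1))
```

Sweeping the depth toward the receiver's threshold while logging BER is one way such a profile exercises a combiner from optimum to worst-case conditions.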
By using a simulator, combined with current computer technology, a complete system test
configuration can be created. Figure 3 shows a complete receive system configuration that
will provide all the simulation functions required to verify operation of the receive system.
In this scenario, a computer could control the equipment and perform various simulation
exercises. It could configure the simulator to output a pseudo-random bit pattern and then
obtain the BER results from the BER test set at the receive site. The computer could
download data into the simulator via remote control and then monitor the receive data.
Comparisons of the downloaded data versus the received data would indicate system
performance. Finally, for overall testing of mission data, a playback of previous mission
data could be externally routed to the simulator and transmitted from the boresight
location.
CONCLUSION
The Microdyne TSS-2000 provides all the simulation capabilities necessary for
pre-mission verification of the receive system. The multi-modulation and multi-frequency
capabilities, combined with the Doppler shift, fade and PCM simulation, allow
the user to test the receive system under the best and worst conditions.
These features, combined with the high output power, allow the simulator to be used for
complete simulation, including boresight testing of the entire RF downlink.
REFERENCES
ABSTRACT
It is well known that a pulse telemetering system, whose equipment is simple, is superior
to a continuous one in utilizing signal power. In designing a pulse telemetering receiver,
however, the frequency shift problem is often encountered; the shift, often much wider
than the signal bandwidth, is very unfavorable for improving receiver working sensitivity.
Either strictly limiting the transmitter frequency stability or adopting an AFC system in
the receiver to track the carrier can solve this problem; the AFC method can improve the
receiver's performance, but the equipment is complicated. The emphasis of this paper is on
how much the receiver's working sensitivity is affected, and how to judge that effect, when
a video-frequency (VF) matched filter is adopted and the RF stage is wideband. The power
density spectrum of white noise that has passed through the non-linear system (the linear
detector) is analyzed theoretically, and the improved working sensitivity of the receiver
with a video matched filter, and its difference from that of the optimal receiver, are
derived. Measured working sensitivity data for two kinds of pulse receivers with different
RF bandwidths are given, and the theoretical calculations conform well with these data;
this proves that adopting a video matched filter in a pulse receiver is an effective approach
for compensating the drop in working sensitivity caused by an RF bandwidth increase.
KEY WORDS
The modulated pulse signals received by a pulse receiver are characterized by very short
signal duration and wide pulse spacing. For convenience of analysis, take the time interval
where the signal exists as the signal domain, and the pulse spacing as the no-signal
domain. In the signal domain, as in an amplitude modulation receiver, the pulse receiver's
working threshold is low, so when the RF signal-to-noise ratio (SNR) is large, the linear
detector's output SNR is the same as the RF SNR before detection; that is, its noise
amplitude is approximately normally distributed. But in the no-signal domain, how is the
linear detector's output noise power density spectrum distributed, and how does it affect
the receiver's working sensitivity? It is known that when the video-frequency (VF)
bandwidth is too wide, the no-signal-domain noise and false pulses increase, which greatly
degrades the receiver's working sensitivity; but when the VF bandwidth is too narrow, the
signal power loss increases and therefore the probability of missing signal pulses increases
too. We must therefore derive the noise power density spectrum theoretically in order to
choose a proper VF bandwidth that makes the receiver work in the optimal state.
For a non-linear system (the detector), the noise power density spectrum can be derived
conveniently from the correlation function. If the signal is zero, the detector's output
correlation function is:

(1)

where the kernel is the detector's input noise bivariate distribution function, σ² is the
noise power variance, and R = R0(τ)cos ωτ is the correlation function. By the double-angle
trigonometric formula, B0(τ) can be derived as:

(2)

where in equation (2) the first and the second terms are the DC part and the LF part
respectively.
Suppose the linear detector's input noise power density spectrum is uniformly distributed
over the IF amplifier bandwidth B. By the convolution theorem:

(3)

Equation (3) is the linear detector's output noise power density spectrum, where B is the
RF bandwidth.

(4)

where kT is the noise power per unit bandwidth. Assume the VF filter's bandwidth is BV
and its frequency characteristic is:

(5)

The evolution of the correlation function of signal plus noise through the non-linear
system is very complicated and is omitted here.
Suppose the received RF modulated pulse is rectangular and the pulse envelope’s power
density spectrum ditribution is :
where J is the pulse width, if the RF bandwidth is B and the amplification is A, then the
detector’s input power is:
(6)
(7)
(8)
Assume the identification coefficients of the receiver terminal and the linear detector are
D1, D0 respectively. For D0=1/ "D1, then the receiver’s working sensitivity is:
(9)
where in (9) NF and BN are the receiver's noise figure and the pre-detection
noise-equivalent bandwidth. From the equation above, if the receiver's RF bandwidth ΔfN,
VF bandwidth ΔfV, and the terminal identification coefficient are known, then the working
sensitivity can be calculated. When ΔfN ≥ 1.59/τ and ΔfV ≥ 0.8/τ, the greater α is, the
greater the improvement in the receiver's working sensitivity. If the RF bandwidth and the
VF bandwidth match the signal bandwidth, the working sensitivity reaches its maximum
value. From (8):
(10)
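As a numerical illustration only, the matched-bandwidth rules quoted above (IF ≈ 1.6/τ, VF ≈ 0.8/τ) can be evaluated directly; the function name and the example pulse width are ours, not from the paper:

```python
# Sketch of the matched-bandwidth rules for a rectangular RF pulse.
# The 1.6/tau and 0.8/tau factors are the rules of thumb quoted in the text.

def matched_bandwidths(tau_s):
    """Return (IF, VF) matched bandwidths in Hz for pulse width tau_s (seconds)."""
    if tau_s <= 0:
        raise ValueError("pulse width must be positive")
    f_if = 1.6 / tau_s   # IF (pre-detection) matched bandwidth
    f_vf = 0.8 / tau_s   # VF (video/low-pass) matched bandwidth
    return f_if, f_vf

# Example: a 1.0-us pulse gives a 1.6-MHz IF and a 0.8-MHz VF bandwidth
f_if, f_vf = matched_bandwidths(1.0e-6)
```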
III. EXPERIMENTAL RESULTS
Models E101 and E102 are microwave pulse receivers; their major parameters and their
calculated ΔPrmin values are listed in Table 1.
The received modulated pulse width is τ = 0.7 µs and D1 = 16.5 dB. According to theoretical
calculation, the IF matched bandwidth is ΔFmat = 1.6/τ = 2.4 MHz and the VF matched
bandwidth is ΔFvmat = 0.8/τ = 1.2 MHz. In the experiments, the Model E101 and Model E102
receivers, with the same modulated pulse width τ = 0.7 µs and the same RF and VF
amplification circuits but with greatly different mixers and IF amplification bandwidths,
were tested many times. The Model E101 receiver's working sensitivity is Prmin = -121.2 dB
and the Model E102 receiver's is Prmin = -120.5 dB (the leakage/false pulse count is 0-2/s
and the terminal identification coefficient D1 is about 16.5 dB).
Table 1
IV. CONCLUSION
This paper has analyzed the power density spectrum of white noise passing through a
non-linear system and derived the pulse receiver's working sensitivity formula (9) from the
signal characteristics. If the receiver's noise figure, the pre-detection RF bandwidth,
the VF low-pass filter bandwidth, and the terminal identification coefficient are known,
the working sensitivity can be calculated. The experimental results show that although the
two receivers' IF bandwidths are greatly different, the VF bandwidth in each matches the
received pulse signal spectrum, so their working sensitivities are approximately equal,
which conforms well with the calculated values.
REFERENCES
Raymond J. Faulstich
Lawrence W. Burke Jr.
William P. D’Amico
ABSTRACT
The Army development and test community must demonstrate the functionality and
reliability of gun-launched projectiles and munitions systems, especially newer smart
munitions. The best method to satisfy this requirement is to combine existing optical and
tracking systems data with internal data measured with on-board instrumentation (i.e. spin,
pitch, and yaw measurements for standard items and terminal sensor, signal processor, and
guidance/navigation system monitoring for smart munitions). Acquisition of internal data is
usually limited by available space, harsh launch environments, and high associated costs.
A technology development and demonstration effort is underway to provide a new
generation of products for use in this high-g arena. This paper describes the goals,
objectives, and progress of the Hardened Subminiature Telemetry and Sensor System
(HSTSS) program.
KEY WORDS
INTRODUCTION
The HSTSS program is a joint effort involving the U.S. Army Yuma Proving Ground
(YPG) and the U.S. Army Research Laboratory's (ARL) Weapons Technology
Directorate. This effort centers on the identification and demonstration of a new generation
of technologies to support airborne, gun-launched munitions test measurements. Army and
OSD Test Technology Development and Demonstration (TTD&D) funding initiated the
program. Central Test and Evaluation Investment Program (CTEIP) and Army Major
Technical Test Instrumentation investment programs presently fund the HSTSS effort in
the Concept Exploration and Definition phase of the acquisition cycle, targeting
Engineering Manufacturing Development (EMD) starting in FY98.
BACKGROUND
Consider the development and testing of a kinetic energy projectile, an extreme
example for the application of on-board instrumentation. The sabot structure must support
the long slender rod during travel down the bore and release the penetrator into free flight
to assure a high probability of first-round hit. If accuracy or lethality deficiencies occur,
what are the means by which test and development engineers identify the root cause? If
recently manufactured ammunition does not initially pass a lot acceptance test, what are
the testing options?
Typical external measurement systems such as target cloth screens, yaw cards, high-speed
video or framing cameras, x-ray cassettes, and radar may not provide sufficient detail
about projectile angular motion and spin. Most of these data are only available at discrete
locations, rather than in a continuous stream. Additionally, the initial motion of the tank
gun and the exhausted propellant gases yield difficult conditions for external measurement
systems at launch where the initial conditions of the projectile are established. Labor-
intensive setups and hand reductions of yaw card and target cloth data can be eliminated
through the use of on-board instrumentation. If the flight history is normal, then the
penetrator slices through the target cloth and yaw cards, leaving neat patterns reflecting the
six-bladed fin set. What if the target cloths reveal anomalous patterns? Our present test
technology leads to “forensic engineering” or guessing at both the problem and the
solution. Lengthy and expensive retesting can occur.
In-flight measurements for high-g systems have not been the rule-of-thumb in the past.
Continuous measurements of munitions attitude, pressure, temperature, and vibration are
examples of critical test data that will guide the successful development and type
classification of new munitions, especially smart munitions. These flight data are extremely
important in view of the reliance on smart munitions and missiles where internal functions
can only be determined through on-board flight instrumentation. These flight data
(gathered in a so-called “cooperative mode”) should then be convolved with the standard
ground based (so-called “noncooperative mode”) data. In-flight measurements for smart
munitions, direct- and indirect-fire munitions, missiles, and rockets can be made routine
and cost effective with the use of new technologies, many of which are leveraged from
Defense Advanced Research Projects Agency (DARPA) investments and commercial-off-
the-shelf (COTS) products.
TECHNOLOGY SUMMARY
The HSTSS program has been developing a new generation of measurement technologies
for projectile and smart munitions testing. This program is unique in that all key aspects of
an airborne measurement system (transmitter, power supply, sensors, electronics, and
packaging) are addressed in one effort. A conceptual HSTSS measurement system,
denoting key subsystems, is shown in a cylindrical piggy-back stack in Figure 1. The stack
of subsystems could be customized to include different measurement capabilities required
of any particular test.
The HSTSS demonstration technologies were based in four areas: sensors, packaging,
telemetry, and power supplies. Data acquisition, encoding, and other supporting
electronics were confidently omitted during the TTD&D phase as lower-risk
developments. The seemingly restrictive and difficult requirement of high-g survival (and
at times operation during high-g) provides a foundation leading to miniature and
inexpensive technologies. There are a few basic principles to high-g survival—small wires
and connectors, plastic vs. ceramic parts, encapsulation of heavy or unsupported
structures, etc. However, the best design principle for high-g is simply size and mass—
miniature is best. This naturally leads to electronics and sensors that are produced using
practices that are common to integrated circuits. The use of bare IC die and
microelectromechanical (MEM) sensors should naturally lead to small and configurable
designs that would lead to high-g survivability. T&E measurement systems are often
unique, both in measurement capability and geometric configuration. Often the need, in
quantity and/or time to implementation, dictates a small scale prototyping operation. The
integration of sensors, conditioning electronics and data acquisition systems, telemetry
devices, and power supplies into a functional measurement system requires adaptable and
configurable technologies.
SENSORS
HSTSS has focused on highly miniature sensors, many of which are leveraged from the
DARPA MEM programs. The MEM sensor is produced by etching processes that were
derived from the microelectronics industry.
A commercially available $30 air-bag accelerometer from Analog Devices is a surface
micromachined silicon device integrated with electronics into a single package. This
accelerometer has been flight tested as a drag sensor on a 2.75-inch rocket [1]. The in-
flight accelerations, when integrated within a trajectory code, provided a consistent
comparison with radar data as shown in Figure 2 [2]. The device, when unpowered, has
survived ground test accelerations of 60,000 g’s. Since an internal calibration and turn-on
cycle consumes only 2 to 3 milliseconds, this device should survive a gun-launch in an
unpowered mode then be powered up subsequent to muzzle exit to provide a down range
sensor capability.
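The velocity/position reconstruction from in-flight accelerations mentioned above can be sketched in a few lines. This is an illustrative trapezoidal integrator, not the ARL trajectory code used in [2]:

```python
# Sketch: trapezoidal integration of an acceleration record into
# velocity and position along the flight path.
def integrate_accel(t, a, v0=0.0, x0=0.0):
    """t: sample times (s); a: accelerations (m/s^2). Returns (v, x) lists."""
    v, x = [v0], [x0]
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        v.append(v[-1] + 0.5 * (a[i] + a[i - 1]) * dt)   # trapezoid on a -> v
        x.append(x[-1] + 0.5 * (v[-1] + v[-2]) * dt)     # trapezoid on v -> x
    return v, x

# Constant 2 m/s^2 for 10 s from rest: final velocity 20 m/s, distance 100 m
t = [0.1 * k for k in range(101)]
a = [2.0] * 101
v, x = integrate_accel(t, a)
```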
[Figure 2. Accelerometer-derived deceleration compared with radar data; horizontal axis: Time (s).]
A MEM sensor development program by the Charles Stark Draper Laboratories and
Rockwell Autonetics is providing the ability to measure angular accelerations with a
silicon gyroscope. This sensor has survived accelerations common to guns when
unpowered. Integration of the conditioning electronics with the sensor die is presently
underway. Initial applications of this gyro will be in the automotive industry for automatic
braking systems. The HSTSS program has been cited as a Technology Reinvestment
Program consortia partner with the Rockwell/Draper team in a recently funded DARPA
program. This effort will be leveraged by the HSTSS program to merge the low-g
technology into high-g applications and products.
Munitions spin for aerodynamic stability or accuracy, but the presence of spin can disrupt
the tracking of angular rate sensors and linear accelerometers whose output is integrated
into velocity and position data. An inertial spin sensor technology from Sensor
Applications that utilizes a material whose resistance changes in the presence of a
magnetic field has been evaluated and tested for HSTSS application. As this element is
rotated across the earth’s magnetic flux lines, the change in resistance can be amplified
and conditioned to produce spin data. The raw die is less than 0.1 inch in length and is
currently being integrated with post conditioning circuitry into a shock resistant plastic
package. Unpowered, these sensors have survived airgun testing of 110,000 g’s [3]. When
flight tested in a 155-mm artillery projectile, the fully-powered sensors survived a
12,000-g launch. Figure 3 shows spin sensor data compared to “standard” yawsonde data.
Accuracies better than 0.1 percent are achievable with this sensor.
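The spin-rate extraction described above can be illustrated with a toy signal chain; this is a sketch of the principle, not the Sensor Applications conditioning circuitry. The sensor's output is roughly sinusoidal at the spin frequency as the element sweeps the earth's field, so counting zero crossings over a window gives the rate:

```python
# Sketch: a magnetoresistive element rotating through the earth's field
# yields a roughly sinusoidal signal at the spin frequency; counting zero
# crossings over a known window estimates the spin rate.
import math

def spin_rate_hz(samples, fs_hz):
    """Estimate spin rate of a zero-mean signal sampled at fs_hz (Hz)."""
    crossings = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev < 0.0 <= cur or prev >= 0.0 > cur:
            crossings += 1
    duration = (len(samples) - 1) / fs_hz
    return crossings / (2.0 * duration)   # two crossings per revolution

# Synthetic 110-Hz spin sampled at 10 kHz for 1 s; estimate comes out near 110 Hz
fs = 10_000
sig = [math.sin(2 * math.pi * 110 * n / fs) for n in range(fs)]
rate = spin_rate_hz(sig, fs)
```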
[Figure 3. Spin rate versus Time (s): SCSA50 Magnetic Spin Counter data compared with yawsonde data.]
HSTSS is developing and testing several power cell technologies for high-g use. Solid
polymer electrolyte, lithium-ion power cells from Ultralife Batteries (UK) have been under
evaluation [4]. These solid-state polymer batteries (nominal 4 V) are rechargeable,
lightweight, physically flexible, and environmentally friendly. Cells can be made to
conform to almost any user shape or configuration. They can be layered together and
connected in parallel and/or series to provide a complete battery system. Ultralife is under
contract to modify its commercial cells for the gun-launched environment. Single-cell
configurations have survived shock accelerations of more than 110,000 g’s and
centrifugal tests at 300 rev/sec, yielding radial accelerations of 24,000 g’s. Research,
sponsored by DARPA, is currently being performed to increase the energy density and
temperature performance of the cells. Primary power cells, also available from Ultralife,
offer similar form factor characteristics with even higher energy density and are currently
being evaluated for high-g applications.
Multilayer printed circuit board technology is the most common format for high-density
packaging of electronic components and circuits. Higher density packaging techniques
exist, but they involve long and costly design and manufacturing processes. Two
commercially available multi-chip-modules (MCM) technologies have been under
evaluation as packaging alternatives [5]. Both of these MCM technologies are electrically
programmable and allow for the direct placement of raw die onto a premanufactured
substrate. One substrate technology by PICO Systems uses an antifuse technology and is
silicon based. Circuit connections within the substrate are programmed by disabling the
antifuses to produce internal circuit paths or vias. The die are then attached by normal wire
bonding techniques to the substrate. A PICO Systems MCM package has been flight
tested at an acceleration level of 20,000 g’s with both analog and digital components as
part of the measurement system.
Both of these technologies are cost effective as low-volume (several hundred units), high-
density MCM schemes. In both cases, the substrates are premanufactured and stocked.
They do not require lengthy fabrication/assembly procedures, or expensive masking or
clean room processes. Electrical characterization was conducted on these two substrate
technologies. Measurements for cross-talk and frequency response were made [6].
Conclusions as to a “best” substrate should be reserved on the basis of design/fabrication
time, cost, and higher shock survivability. Each technology has distinct advantages.
TRANSMITTER
Multiple solutions to the data transmission problem are being studied. Technical, “legal,”
and fiscal factors each contribute to the resolution of this issue. Existing range
infrastructure and frequency allocations support the use of IRIG compatible telemetry
systems. The high-g and small size requirements generally eliminate existing IRIG
compatible solutions. Previous work investigated promising alternatives [7,8]. Further
investigation into additional beneficial solutions is being pursued.
The portable communications industry is rapidly developing new devices and products.
Wireless communication systems, local area networks, cellular phones, and mobile links
are common at frequencies not permitted on test ranges. Existing commercial
communications technologies are being studied for application at L- and S-band
frequencies. Preliminary evaluations indicate these technologies can be made compatible
with the IRIG standards of frequency allocation, stability, and bandwidth. Link budgets
and analyses are underway to complete this preliminary survey.
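The link-budget arithmetic referred to above reduces to the Friis relation in decibels: received power equals transmit power plus antenna gains minus free-space path loss. The numbers below are hypothetical placeholders, not HSTSS figures:

```python
# First-order link budget sketch (all values in dB/dBm; numbers are illustrative).
import math

def fspl_db(freq_hz, range_m):
    """Free-space path loss in dB."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * range_m * freq_hz / c)

def received_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, range_m):
    """Received power = Pt + Gt + Gr - FSPL (ideal, lossless link)."""
    return pt_dbm + gt_dbi + gr_dbi - fspl_db(freq_hz, range_m)

# Hypothetical case: 100 mW (20 dBm) S-band transmitter at 2.2 GHz over a
# 10-km slant range, 0-dBi projectile antenna, 30-dBi ground antenna
pr = received_dbm(20, 0, 30, 2.2e9, 10_000)
```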
TEST SUMMARY
Testing of candidate technologies for application in the HSTSS scenario focused primarily
on the launch and flight condition operation/survivability. Risk reduction in the TTD&D
and CED phases of the program concentrated on downselecting to technologies which
showed promise of success in the high-g and high-spin environments. Survivability
performance criteria for a given device were developed to ascertain the benefits of
continued product development. In general, test articles were subjected to shock, spin,
high-g acceleration, and flight condition tests.
An impact shock test machine is used to screen potential devices for use in high-g
telemetry applications. The shock pulse durations are typically on the order of 5 to 10
microseconds for decelerations in the 20,000- to 30,000-g range. It should be noted that
normal gun launch accelerations have durations of several milliseconds. Spin testing is
accomplished with an air-cooled cantilever roll-drive mechanism. Roll rates on the order of
300 Hz are achieved. A 4-inch helium-driven gun at ARL-Adelphi is used for setback
acceleration simulations. A mitigator controls and shapes deceleration forces in the
30,000- to 100,000-g range at impact. The Arnold Engineering Development Center
3.3-inch two-stage light-gas gun provides high-g (>100,000 g achievable) setback
accelerations. This gun achieves its acceleration forces during the launch cycle. Flight
tests, using specially modified projectiles and currently available telemetry components,
have been conducted at the ARL Transonic Range. The fully instrumented projectiles
carried candidate technology devices and achieved launch accelerations near 20,000 g’s
and spin rates near 50 Hz for the 30-second flights.
PROGRAM DIRECTION
CONCLUSION
The authors would like to thank Mr. John Schnell of Headquarters, U.S. Army Test and
Evaluation Command (HQ TECOM) for his management and guidance provided during
the TTD&D program. The authors would also like to thank the members of the Advanced
Munitions Concepts Branch of ARL for their electronics design, fabrication, simulation,
and data reduction support.
REFERENCES
[2] Harkins, T. E., Davis, B. S., and Hepner, D. J. “Using Accelerometers for Velocity
and Position Estimation: Results from a 2.75-Inch Rocket Flight Test.” ARL
Memorandum Report 305, Aberdeen Proving Ground, MD, June 1996.
[3] Davis, B. S., Burke, L., Erwin, E., Myers, C., and Mitchell, C. “High-G Air Gun
Testing of Hardened Subminiature Telemetry and Sensor System (HSTSS) Devices.”
ARL Memorandum Report 306, Aberdeen Proving Ground, MD, June 1996.
[4] Burke, L. W., Faulstich, R. J., and Newnham, C. E. “The Development and Testing
of a Polymer Electrolyte Battery for High-G Telemetry Applications.” Power Sources, 15.
Research and Development in Non-Mechanical Electrical Power Sources, pp. 295-313,
April 1995.
[5] Burke, L., Ferguson, E., Alper, D., Banker, J., and Miller, R. “Programmable
MCMs for Low Volume High-G Applications.” (PICO Systems Contract DAAD01-94-
C-0005.) 1995.
[6] Borgen, G. S., “Multi-Chip Module Frequency Response and Cross Talk Testing.”
Naval Air Warfare Center, Weapons Division, Point Mugu, CA, 15 March 1995.
[7] Burdeshaw, M. R., and Clay, W. H. “Subminiature Telemetry Tests Using Direct
Fire Projectiles.” BRL-MR-3893, U.S. Army Ballistic Research Laboratory, Aberdeen
Proving Ground, MD, February 1991.
[8] Ferguson, D., Meyers, D., Gemmill, P., and Pereira, C. “A Monolithic High-G
Telemetry Transmitter.” Proceedings of the International Telemetry Conference, pp. 497-
505, October 1990.
THE SPACE IMAGING OPERATIONS CENTER
ABSTRACT
KEY WORDS
INTRODUCTION
There are two ways to design an imaging system. The first - which E-Systems
followed for decades - is primarily for government agencies. Sophisticated
customers know what they want, understand the technologies, and specify
systems that will accomplish clearly defined requirements. They were able to
presume a knowledge base, and concentrate on how the systems perform.
In the commercial world, most users clearly understand the goals and
technologies of their own disciplines. The geologist understands structure,
landforms and lithology, the agronomist knows soils, nutrients and crops, and so
forth. These users often do not understand imaging, image processing, or
electronic systems generally. While they are interested in the general sense, they
have neither the need nor the desire to become systems engineers or image
scientists. For Space Imaging to prosper in the commercial market, we had to
appreciate several things:
1. These customers prefer not to shop around and assemble a system. They
want to pick the parts they need, with the capabilities they require, from a
single provider - and they want that provider handy if they have problems.
2. They have enough concerns with the intricacies of their own disciplines and
don't need their lives made any more difficult. Imagery and imaging systems
are tools used for other ends. Ease of use, intuitive interfaces and on-line
help are important [especially on powerful, multi-use systems].
END-TO-END SYSTEMS
The Space Imaging Operations Center provides an end-to-end system solution for
commercial imaging. Data from Space Imaging plus other current and proposed
imaging satellites, airborne electro-optical collectors, and hard-copy digitizers can
all be processed, managed, analyzed, fused and exploited in a single installation.
The Operations Center is a family of commercial hardware and standards-
compliant software modules that can be configured in installations for a wide
variety of sources, sizes and applications. Being modular, an installation can be
configured to handle all, or any portion, of the remote sensing sequence from
collection planning to product generation. An installation can be reconfigured,
expanded or upgraded with minimum impact on existing resources as
requirements change or new data sources become available.
The user has a simple graphic tool to define the imaging opportunities on a given
satellite pass, identify and prioritize the potential targets, and generate a tasking
order that will make the best use of each minute of collection time. This problem is
complicated by the fact that the Space Imaging satellites will be able to collect in
several different modes including 11 km square points, strips, and single-pass
area collection - either monoscopic or stereo. The sensor must complete one
collection, re-orient for the next target, stabilize, and begin the next collection. The
time required for all elements must be included in each tasking package.
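The bookkeeping described above - charging each collection its imaging time plus a re-orient/stabilize gap before the next target - can be sketched with a simple greedy packer. This is purely a hypothetical illustration, not E-Systems' tasking software; all names and numbers are ours:

```python
# Hypothetical sketch: pack prioritized collections into a single pass,
# charging each one its imaging time plus a fixed re-orient/stabilize
# gap before every target after the first.
def plan_pass(targets, pass_seconds, slew_seconds):
    """targets: list of (priority, name, imaging_seconds); higher priority wins."""
    schedule, used = [], 0.0
    for priority, name, dur in sorted(targets, reverse=True):
        cost = dur + (slew_seconds if schedule else 0.0)
        if used + cost <= pass_seconds:
            schedule.append(name)
            used += cost
    return schedule, used

# Three candidate targets; only the two highest priorities fit in a 70-s pass
targets = [(3, "A", 20.0), (1, "C", 40.0), (2, "B", 30.0)]
schedule, used = plan_pass(targets, pass_seconds=70.0, slew_seconds=10.0)
# A (20 s), then slew (10 s) + B (30 s) = 60 s; C will not fit
```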
Task planning can be done on a standard UNIX workstation - one of the image
processing workstations or a dedicated unit. The displays are generated using E-
Systems tasking software and geographic, target and ephemeris information in
the installation database. When the tasking package is complete, it can be loaded
directly into the Space Imaging satellite from the Operations Center. For other
platforms the package can be sent to the respective control facility.
Commercial antennae and support installations are sized to fit each customer's
needs. Many Operations Center installations can be well served by a 6 meter
antenna that is both very cost effective and facilitates portability and easy
installation. For a larger, fixed site, more permanent antennae up to 9 or 11 m in
diameter allow collection from weaker sources closer to the horizon, even in
inclement weather. Whatever the size, each receiver element must:
Data are saved at level 0 - raw digital pixels assembled into UNIX image files.
The standard image processing steps are fully automated and real-time, so the
processing burden when a product is created is measured (at worst) in
milliseconds. By starting from level 0 each time a CARTERRA™/SM product is
created we ensure that the source data are not corrupted, and eliminate any
degradation from repeated resampling. This also ensures that any new or
improved processing algorithms will have clean data from which to start for the
highest possible product quality.
At the same time, the system can be configured to provide near-real-time waterfall
displays from incoming imagery. These have automatic de-banding, glitch
correction, and cloud cover measurements provided, and are presented to the
user as the satellite acquires the images.
[Figure: collection geometry for a satellite pass - a 1445-km accessible collection region along a 3700-km satellite ground track, showing high-priority points, low-priority points, and a stereo area (targets A-H).]
Sources include Space Imaging plus both the existing and other proposed
commercial imaging satellites, and various kinds of airborne imaging systems.
The receivers and other components will be able to handle and process incoming
data at rates up to several hundred megabits per second in real time, so they will not
be made obsolete by the next generation of collectors.
Any installation is able to accept data from multiple sources, given compatible
transmission frequencies and necessary decoding information. Like the rest of the
system, this element is modular. If a system is originally configured for SPOT it
can be upgraded to accept LANDSAT and, later, Space Imaging data by the
addition of software and, occasionally, hardware modules. In place of commercial
products for demodulation, demultiplexing and frame grabbing, E-Systems is
designing a less expensive and more capable board that will be installed in the
server bus to handle multiple sources interchangeably.
[Figure: Mission Server architecture - a UNIX database with RDBMS, GIS, algorithm library, device driver, and volume server components.]
The Mission Server receives image files from the Receipt and Generation element and:
The system database is similar in function to one E-Systems has provided for a
U.S. government oceanographic/hydrographic library. It is geographically based to
show all products - coded by type, age and other attributes - on a scaleable map
display for initial selection. (Alternatively, users may make requests by place
name, area corner coordinates or other keys.) Specific products can then be
selected by date of collection, platform, nature, scale, resolution and other factors
(additional criteria can be added to support specific applications). Data files are
presented with their associated metadata and selected support data (such as map
displays with imagery). When products are generated, they are logically linked
with the source data files as well as the standard database criteria.
The Space Imaging Operations Center and Mission Server can support a wide
range of commercial software packages and applications, including the multi-
source data visualization and analysis software developed by E-Systems in their
Open-Architecture Scientific Information System (OASIS) research and
development effort. OASIS is a reusable software foundation integrating a suite of
modular software components for spatial data management, processing, and
visualization of data from many sources. It is applicable to a broad range of image
analysis and cartographic applications including (civil and defense) mapping,
contingency planning, resource management and research.
One of the great strengths of the E-Systems image software is that it allows the
analyst to display and analyze registered suites of data from multiple sources.
Each data plane can be manipulated, either automatically or interactively, alone or
in concert with the entire display - while maintaining registration, orientation, scale
and position.
The software supports stereo display windows, terrain elevation extraction and
interactive three-dimensional mensuration and analysis. A complete imagery and
graphics toolkit and a single application interface provide maximum ease of use.
The sophisticated user has a wide range of powerful tools and manipulations
immediately available, while selected subsets can be provided for routine
analyses and production from standard data suites. Applications can be tailored
for specific user requirements, and simplified to the maximum possible extent.
Figure 4. Fused data visualization
The Operations Center software includes a wide range of commercial products for
specific needs and applications. The Mission Server™ environment is based on
open-architecture industry standards, and is compliant with:
Product Generation
Mission Server includes the device drivers, and more can be added.
For multi-user installations, or those who choose to market their data and
products, a business and support segment can be incorporated into the
Operations Center. This sophisticated customer interface supports on-line user
libraries with full search functionality (security provisions can limit search criteria
for specified classes of customers) and reduced resolution "snapshot" images,
order receipt, task scheduling, account maintenance, and bookkeeping functions
such as query response, progress reporting, billing and shipping. It is also the
center for training, logistics and support for the entire Center. This customer
service functionality is a tailored application on Mission Server, and fully
interactive with the rest of the system.
Scaleability and Expandability
All Operations Centers are transportable, in the sense that they can be dismantled
and relocated. Where relocation is likely, the modules can be kept integral for
ease in movement. In this regard, a station could be broken down, moved a
considerable distance, and returned to service in a matter of days.
CONCLUSION
Lee H. Eccles
Boeing Commercial Airplane Company
P. O. Box 3707, M/S 14-ME
Seattle, WA 98124-2207
ABSTRACT
This paper discusses a “Smart Sensor” interface being developed for use in the Boeing
Company. Several laboratory groups and Flight Test have joined in a study to define such
an interface. It will allow a data acquisition system to record data from a large number of
“Smart Sensors”. A single pair of wires will form a bus to interface the sensors to the data
system. Most systems will need more than one bus. Some needs exist for "Smart
Actuators" as well to allow for closed loop control within the laboratories. The process
control industry has developed several candidate busses. The groups are now in the
process of evaluating the capabilities of the available busses to see which ones, if any, will
do our job. To see if anyone else has similar needs, these requirements and the candidate
busses are being shared. The goal is to see if some form of cooperation is possible.
KEYWORDS
INTRODUCTION
During the development of the data acquisition system for the Boeing 777 we determined
that we needed several new transducers. Some of the transducers selected were types with
digital outputs. They had internal compensation to correct for the effects of temperature
and non-linearity. Each type of transducer had a different output characteristic. What we
needed was a standard interface between the transducers and the data acquisition system.
However, there was not enough time at that point to develop a standard interface,
so we found different ways to interface the transducers to the data system. After the 777
airplane data acquisition system was completed and testing was under way we decided to
address the issue of defining a standard interface. This would allow the bus to be defined
before we were under time pressure to meet an airplane development schedule. Several
other testing organizations within Boeing were contacted and most agreed that they had
the same problem and that they could also benefit from such a standard interface. So
several organizations put together a team to look into the requirements for such a standard.
That team has completed the process of gathering the user requirements for the standard
interface. These user requirements have evolved into the requirements for an interface
that could be the basis for a distributed data acquisition system.
GENERAL CONCEPT
One of the requirements that came to light early was the requirement to simplify the wiring
as much as possible. The reason for this requirement is simply economics. It costs money
to install a complex cable so the cabling should be as simple as possible. For airplane
installations it is often necessary to cut access holes in the structure to be able to install a
cable and to restore the structure after the test is completed. So the fewer and smaller
the cables, the better. In the laboratories the signal conditioning is often located in a
control room which may be up to 1000 meters from the transducers. For a system with
several hundred transducers, this translates into a lot of expensive cables. With this in
mind, the concept that has been developed is for a single pair of wires that will carry
power to the transducers and will return digital information to the data acquisition system
for recording. The concept is not new, and is being used in several industries already. It is
becoming common to find buildings wired this way. For example, the lighting will be
controlled by signals transmitted over the power wires. This simplifies the wiring of the
building and thus lowers the costs, if the electronics costs less than the wiring. So the
general concept is for a system that will interconnect a number of transducers on a single
pair of wires or bus.
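The shared-pair concept above amounts to an addressed poll/response scheme: a master queries each transducer address in turn, and only the addressed device answers. The following toy model is purely illustrative; it is not any of the candidate fieldbus protocols, and all names are ours:

```python
# Toy model of the single-pair bus concept: transducers share one bus,
# and each answers only when the master polls its address.
class BusTransducer:
    def __init__(self, address, read_fn):
        self.address = address
        self.read_fn = read_fn     # returns the conditioned digital sample

    def poll(self, address):
        """Respond with an (address, value) frame, or None if not addressed."""
        if address != self.address:
            return None
        return (self.address, self.read_fn())

def master_scan(bus_devices, addresses):
    """Poll each address in turn and collect the response frames."""
    frames = []
    for addr in addresses:
        for dev in bus_devices:
            frame = dev.poll(addr)
            if frame is not None:
                frames.append(frame)
    return frames

# Two transducers (e.g., a temperature and a pressure reading) on one bus
bus = [BusTransducer(1, lambda: 20.5), BusTransducer(2, lambda: 101.3)]
frames = master_scan(bus, [1, 2])
```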
There are many advantages to this approach over conventional approaches. A conventional
approach would be to install a transducer in a remote location and run a cable to a
centralized signal conditioner. The output of the signal conditioner would then be fed to a
multiplexer that would drive an Analog-to-Digital converter. The output of the Analog-to-
Digital Converter would then go to a computer or it would be output as a PCM bit stream
to a recorder or telemetry system. The process involves running long cables from the
transducer to the signal conditioner. These long cables are not only expensive to build and
install but they create other problems. The excitation for the transducer needs to be precise
since any variation in the excitation voltage will also show up in the output of the
transducer. The voltage drop in the cables can be compensated for by running more wires
but that just drives up the cost. The long cables also pick up noise that creates errors in the
measurement. Until recently, the size and cost of the signal conditioning components have
prohibited their installation within the transducer case or near the transducer.
However, the modern integrated circuit and especially the Application Specific Integrated
Circuit or ASIC has reduced the size and cost of these components. They are now small
enough that we believe that it will be less costly to install the signal conditioning and the
Analog-to-Digital Converter inside the transducer housing or very near the transducer
itself. Having the signal conditioning inside the transducer will allow it to be optimized for
the specific transducer. In addition, not having to run all of the long cables will make it
feasible to measure not only the physical variable itself but other factors such as
temperature or pressure that may cause errors in the measurement. The complex integrated
circuits and microprocessors will allow the output from the transducer to be compensated
for these other variables. Digital filtering can be applied to the output of the Analog-to-
Digital Converter as well to remove any noise that may be in the measurement outside the
band of interest. This all leads to a more accurate measurement for the same or less cost.
The use of digital techniques and microprocessors will also allow the transducer to check
its own operation in ways that are not possible with conventional techniques. However, the
greatest advantages will only be achieved if we work together to define the interfaces to
these devices so that the cost advantages can be realized.
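As an illustration of the kind of digital filtering an in-transducer processor could apply, here is a minimal sketch. The simple moving-average filter is our choice for illustration only; a real transducer would use a filter designed for its measurement band.

```python
def moving_average(samples, window=4):
    """Simple FIR low-pass filter: attenuates noise above the band of
    interest.  Illustrative only; not a filter specified by this effort."""
    out = []
    for i in range(len(samples) - window + 1):
        out.append(sum(samples[i:i + window]) / window)
    return out

# A steady reading carrying alternating high-frequency noise averages
# back to the underlying value.
noisy = [100 + (1 if i % 2 else -1) for i in range(8)]
print(moving_average(noisy))  # [100.0, 100.0, 100.0, 100.0, 100.0]
```

The same principle applies to any linear-phase digital filter: out-of-band noise is removed after digitization, inside the transducer, before the data ever reaches the bus.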
One of the desires of the group is to try as much as possible to use existing technology
where possible. We were aware that much work has been going on in other industries to
develop similar concepts. The process control industry has been making an effort to
develop the “Field Bus” for a number of years. This effort has yet to result in a unified US
standard. However, the ProfiBus, which is being widely used in Europe, is an example of
this technology that is available today. The automotive industry has adopted a system
known as the Controller Area Network, or CAN, for use both in automobiles and in the
factories. CAN was developed in Germany but is in wide use in the US as well as Europe.
CEBus is being used in the building industry and others are known to exist as well. One of
the goals of our effort is to try to determine which of these existing systems can be adopted
as is or modified to meet our needs. In any case, the first step was to determine just what
the needs of the various laboratories and Flight Test were; from there we could investigate
the existing systems. One effort that we found
promising is the effort in the US by the IEEE and National Institute of Standards and
Technology (NIST) to develop a standard definition of the elements that make up a “Smart
Sensor” and the various device level interfaces. For the purposes of this paper a “Smart
Sensor” is any sensor that produces a digital output. We have adopted a variation on the
IEEE/NIST model for the transducers in our system. Figure 1 is the version of the
IEEE/NIST Smart Sensor model that we developed in response to our customers’ inputs.
The primary difference is the presence of an analog output that some users wanted for
troubleshooting problems with the system. The use of actuators attached to what started
out as a data acquisition tool is being considered by some of the laboratory organizations.
Figure 1. The transducer (sensor or actuator) with signal conditioning, a signal
processor, a network processor/interface to the network, and a Transducer Electronic
Data Sheet (together, a “Smart Sensor” per the IEEE/NIST definition), plus a buffered
analog output through a signal isolator.
SYSTEM CONCEPT
The general concept that has developed is shown in Figure 2. In this concept a number of
busses would exist in the lab or on the airplane that would each connect a number of
sensors or actuators to a central data acquisition and/or control system. The number of
busses required and the number of sensors on each bus would be determined either by the
physical addressing limits on the bus or by the total number of samples required from all of
the transducers. Thus the bit rate of the bus will in many cases determine how many
sensors can be connected to a particular bus. A number of these transducer buses can then
be interconnected to form a complete system. For systems that are located within a small
area the busses can be connected directly to the data acquisition system. If the system is
spread out over a large area then the busses will be attached to a hub or hubs that will then
be connected to the data acquisition system or control room via a high speed data link. The
use of a hub allows each data bus to be less than one hundred meters long. The connection
between the Hub and the Host Controller has not yet been investigated in great detail. This
link may need to be up to one kilometer in length to meet the needs of some of the labs. A
length of less than one hundred meters will meet the needs of systems on board the
airplanes if a hub is required there at all.
Figure 2. System concept: networked sensors attached to multiple transducer busses
(Bus 1 through Bus 16).
The requirement for a number of sensors to share a common data transfer medium requires
that there be some method of sampling the data in the sensor and transmitting the data to
the host when requested by the host or at the proper time within a sampling schedule. In
order to keep the number of wires to a minimum the data transfer should be done serially
over a single pair of wires. Again, in order to keep the number of wires to a minimum, the
power for the sensors should be transmitted over this same pair of wires. The interval
between samples of the output of a given sensor needs to be a fixed, repeatable interval in
order for the data to be properly reconstructed during data processing. The user wants to
be able to determine the order in which events occurred. This requires that the time
between when the input of a given sensor is digitized by the sensor and when it is received
by the host must be repeatable and known. These times need to be definable and
repeatable within ±2 microseconds. In some cases the characteristics of the sensor will
determine how well these two requirements can be met. For example a sensor measuring a
frequency will need to acquire its samples at the zero crossings of the input waveform that
will not be synchronized by the data system. These types of sensors will either need some
time tagging capability or they will not be required to meet the ±2 microsecond
requirement. There is also a requirement to define groups of transducers on a given bus
that are sampled at the same time ±2 microseconds. These sensors will be sampled either
on command from the host or at a particular time within the sampling schedule. The data
would then be held in the sensor until it was time for it to be transmitted to the host. In
order for the transducers to meet these timing requirements some form of common clock
will need to be established for the system. This could take the form of the host
commanding each sensor to sample its data and when to return the data to the host. The
other method being considered is to have the host provide periodic time commands
and to allow each sensor to maintain the time interval between time commands. Requiring
the system to operate in a command-response mode drives up the amount of traffic on the
bus. However, if each sensor maintains its own clock, synchronized by the time
commands from the host, more complexity is required in each sensor.
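The second method can be sketched as follows. The class, its names, and the trim scheme are illustrative assumptions on our part, not part of any proposed standard:

```python
class NetworkedSensor:
    """Sketch of a sensor that keeps its own clock between host time
    commands.  Names and the trim scheme are illustrative only."""

    def __init__(self, tick_us=100):
        self.tick_us = tick_us        # nominal oscillator period
        self.rate_correction = 1.0    # trim applied to each tick
        self.local_time_us = 0.0

    def oscillator_tick(self):
        # Each oscillator tick advances the local clock by one trimmed period.
        self.local_time_us += self.tick_us * self.rate_correction

    def on_time_command(self, host_time_us):
        # Trim the clock so the observed error shrinks, then resynchronize.
        self.rate_correction *= host_time_us / self.local_time_us
        self.local_time_us = host_time_us

# An oscillator 0.1% fast delivers 1001 ticks in 100,000 us of real time,
# so the uncorrected local clock reads 100,100 us at the time command.
sensor = NetworkedSensor()
for _ in range(1001):
    sensor.oscillator_tick()
sensor.on_time_command(100_000)
# After one correction the trim compensates for the 0.1% drift.
```

After the correction, the next interval of 1001 ticks advances the local clock by very nearly 100,000 microseconds, so each sensor can hold its sample times between host time commands.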
The laboratory and airplane data systems users requested that the system be able to
support 12,800 samples per second for each bus. The users specifically asked
that the number of bits in the output of a sensor not be limited to a fixed number. However,
in order to be able to place bounds on the requirements for the bus, some number needs to
be used. For purposes of determining the bit rate, we defined the number of bits per data
sample to include sixteen data bits plus whatever overhead bits the bus would require.
This allows us to define the minimum bit rate requirement for the bus. The bit rate on the
bus just to support the data transfer is only 204,800 bits per second. Adding in overhead,
error correcting codes and dead time between transfers will increase the minimum bit rate.
We expect it to be somewhere in the order of one million bits per second. The use of a
command-response type of system would probably double this requirement. We still have
the requirement for each sensor to be able to transmit more or less than sixteen bits and
this will increase or decrease the number of sensors that can be supported by a given bus.
The number of bits being output by each sensor will still have to be accounted for in the
total number of bits per second. The sample rates of the networked sensors on a bus
need not be uniform; in other words, each sensor may have a different sample
rate. This comes from the desire to run one pair of wires into a particular area, like an
engine, and to be able to connect temperature sensors, pressure sensors and perhaps a
vibration sensor to that pair of wires. If all of the sensors had to respond at the same rate
the vibration sensor would drive the data rate and only two sensors would be able to share
a pair of wires.
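The bit-rate arithmetic above can be checked with a short calculation. The 64-bit overhead figure used below is purely illustrative; the paper deliberately leaves the overhead undefined:

```python
def min_bus_bit_rate(samples_per_s=12_800, data_bits=16, overhead_bits=0):
    """Minimum bus bit rate for a given aggregate sample rate."""
    return samples_per_s * (data_bits + overhead_bits)

# Data bits alone, as in the text: 12,800 samples/s times 16 bits.
assert min_bus_bit_rate() == 204_800

# With an illustrative 64 bits of framing, ECC and dead time per sample,
# the requirement reaches the one-megabit-per-second estimate.
print(min_bus_bit_rate(overhead_bits=64))  # 1,024,000 bits per second
```

A command-response protocol would roughly double this figure, as noted above, since each data transfer must be preceded by a command on the same pair of wires.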
One of the things which can have a significant impact on the achievable data rate is the
overhead in the data transmission protocol. If many protocol bits are required for every
few data bits, the effective data rate is limited. The obvious answer to this problem is to
collect the data samples into groups or
packets. This will increase the number of data bits relative to the number of protocol bits
and increase the effective bit rate that can be achieved. There are two areas where this is
not acceptable. The first is the case of actuators, with a sensor feeding data to an actuator
over the bus. Here the sample received by the actuator needs to indicate the output of the
sensor at the time that it was received, not at some time in the past. The second area where
this is not acceptable is in the current Flight Test Data System. This system has been
designed to require that data be received by the real-time data monitor in the same order
that it was acquired. With a packetized data system, this is not true. Packets containing
data which is being sampled slowly will contain data that is much older than packets
containing data which is being sampled at a higher rate. For other types of systems,
packetizing the data may still be a useful way to increase the effective bit rate, but it is
not an answer for all systems.
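The trade-off can be sketched numerically. The 48-bit per-packet protocol overhead assumed below is an illustrative figure, not taken from any of the busses under evaluation:

```python
def effective_data_rate(bus_bit_rate, samples_per_packet,
                        data_bits=16, protocol_bits=48):
    """Data throughput left over when samples are grouped into packets.
    The 48-bit per-packet protocol overhead is an assumed figure."""
    packet_bits = protocol_bits + samples_per_packet * data_bits
    efficiency = (samples_per_packet * data_bits) / packet_bits
    return bus_bit_rate * efficiency

# One sample per packet: protocol overhead dominates (16/64 = 25%).
single = effective_data_rate(1_000_000, 1)
# Thirty-two samples per packet: overhead nearly disappears (512/560 = 91%).
grouped = effective_data_rate(1_000_000, 32)
```

The efficiency gain is exactly what makes packetizing attractive, and the latency of the slowly sampled channels within a packet is exactly what makes it unacceptable for the actuator and real-time monitoring cases described above.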
OPERATIONAL REQUIREMENTS
There are several operational requirements that have been placed on the bus and the
sensors by the users. One thing that was universally requested was for each sensor to be
able to identify itself to the host. Each sensor must also have the ability to run an internal
diagnostic to determine its own health. Other capabilities such as the ability to read and
write the memories within the sensor over the bus were also requested in order to be able
to setup the system. These requirements conflict with the controlled sample intervals and
known time of the data samples described earlier. In order to meet both requirements two
operating modes are being required for the bus. In the data acquisition mode all of the
timing must be met and no unnecessary communications will be allowed. The setup mode
will allow for all of the communications to be supported and for data to be acquired but
without the requirements for the sample timing to be met. This is a natural way for the
system to be used. During a test the system will be run in the data acquisition mode, and
nobody is allowed to request memory dumps or to try to set up a sensor during this time
period. When a test is not in progress the system will be used to set up for the next test and
for troubleshooting problems on the network. It is assumed that when the system is in the
setup mode the data acquisition schedule will be running in the background, without
the timing guarantees.
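The two operating modes can be sketched as a simple gate on host requests. The mode and request names below are hypothetical, chosen only to illustrate the idea:

```python
from enum import Enum

class BusMode(Enum):
    DATA_ACQUISITION = 1   # timing guaranteed, no unnecessary traffic
    SETUP = 2              # full communications, no timing guarantees

# Request names are illustrative; only timing-safe traffic is allowed
# while a test is in progress.
TIMING_SAFE = {"read_data"}
SETUP_ONLY = {"memory_dump", "write_memory", "run_diagnostic", "identify"}

def request_allowed(mode, request):
    """Gate host requests by the current operating mode."""
    if request in TIMING_SAFE:
        return True
    return mode is BusMode.SETUP and request in SETUP_ONLY

assert request_allowed(BusMode.DATA_ACQUISITION, "read_data")
assert not request_allowed(BusMode.DATA_ACQUISITION, "memory_dump")
assert request_allowed(BusMode.SETUP, "memory_dump")
```

In this sketch, data acquisition itself is permitted in both modes, matching the statement above that the sampling schedule keeps running in the background during setup.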
One of the users’ primary concerns about this type of system is what happens when a
failure occurs on the bus or in one of the sensors on the bus. If the failure is of the type
which allows the bus to keep running, the diagnostic features of the sensors should allow
the operator to quickly determine which sensor is bad and to take appropriate action.
However, if the failure disables the bus the operator needs some way to locate the failure
and take corrective action. Locating a failure when a number of devices are connected in
parallel on the same bus is difficult at best but consideration of this problem will have to
be made part of the bus design. Some things have already been identified and included in
the requirements but the total problem will have to be defined when the system is
designed. One common problem that could take the system down is a continuously
transmitting sensor. To avoid this problem each sensor should have a detector built in
which can detect this condition and shut the transmitter down. The most promising
approach for finding shorts and opens on the bus may be related to the physical bus
topology. A topology or implementation needs to be found which will allow parts of the
bus to be quickly isolated for troubleshooting purposes. Even this may be difficult if the
bus is buried within the wing of the airplane. The use of error correcting codes and other
techniques to make the bus more robust will help, but the solution will have to be
determined when the bus technology is selected.
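The detector for a continuously transmitting sensor might look something like the following sketch. The time limit, the polling interface, and all names are illustrative assumptions; a real implementation would live in the sensor's transmitter hardware:

```python
class JabberDetector:
    """Disables a sensor's transmitter if it transmits continuously for
    longer than any legal transfer could last.  The 500-microsecond limit
    and the polling interface are illustrative assumptions."""

    def __init__(self, max_transmit_us=500):
        self.max_transmit_us = max_transmit_us
        self.transmit_start = None
        self.transmitter_enabled = True

    def sample(self, now_us, transmitting):
        if not transmitting:
            self.transmit_start = None          # transmission ended normally
            return
        if self.transmit_start is None:
            self.transmit_start = now_us        # transmission just began
        elif now_us - self.transmit_start > self.max_transmit_us:
            self.transmitter_enabled = False    # stuck on: force it off

det = JabberDetector()
for t in range(0, 1000, 100):   # transmitter stuck on for a millisecond
    det.sample(t, transmitting=True)
assert not det.transmitter_enabled
```

Because the detector is local to each sensor, a single failed transmitter cannot hold the shared bus down, which is the property the requirement is after.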
CHANGING A SENSOR
Since we envision a bus that runs its data acquisition schedule any time it is
powered up, unless specifically shut down by the operator, what happens when a sensor is
added to or removed from a bus? Again, some of these things cannot be fully defined yet, but
the general concepts can be discussed. The first assumption is that these things only occur
when the bus is in the setup mode. If a sensor is removed from the bus when the bus is
operating its data will simply not appear in the host. The system should have a way of
detecting this event, but what happens after that is not yet defined. If a sensor is added to
the bus it should not start transmitting its data until it is integrated into the system’s
sampling schedule. This conflicts with the idea that it should come up running when it is
powered up, so some interaction with the host will probably be required after power up
before any sensor starts transmitting data. Another requirement is that the addition or
removal of a sensor from an active bus shall not cause more than a momentary interruption
of the operation of the bus. For example, if a transmission is in progress when a sensor is
connected, that transmission may be corrupted. That is acceptable. However, after the
transient on the bus is gone the bus must continue in normal operation without any
operator action. It is not acceptable to have to reset the bus or system to recover from this
type of event.
PHYSICAL REQUIREMENTS
There are many physical requirements that must be addressed during the development of
the system. The bus may be required to operate out-of-doors in some of the labs. This will
require that the bus be able to withstand rain, heat, mud and many other environmental
conditions. On the airplane many of the same requirements must be met. In addition
vibration and even greater temperature extremes are normal. The system will be required
to meet stringent EMI/RFI requirements before it can be installed in the airplane. The
exact environmental specifications have not been defined. We know what the requirements
are, but we would prefer to use a generally available standard to make it easier for vendors
to qualify parts without working directly with us, at least in the beginning of a sensor
development. It is also felt that a recognized standard would be more easily understood by
outside vendors. In some cases, the environmental requirements will be so specific for a
given sensor that a general specification will not be useful. However, we hope that this is
the exception and not the rule.
ANALYSIS OF EXISTING TECHNOLOGY
At the time of writing this paper we are in the process of evaluating the available busses. It
is relatively easy to eliminate some of these busses. CEBus and LONWorks appear to be
too slow to support the data rates that are desired. CAN is marginal: it can support the data
rates, but not with the desired cable lengths, and it would also require the use of a four-wire
cable. CAN does have a very desirable attribute in the wide availability of integrated circuits
which support it; perhaps with some modifications of our system concept, CAN could be
used. In the US, the Fieldbus Foundation has joined forces with World FIP to complete the
work started by the ISA. To date this effort has resulted in the definition of the physical
layers of a fieldbus, which is covered by ISA Standard S-50 Part 2. The total capabilities of
this bus cannot be determined since the higher protocol layers have a significant effect on
the data sampling rate. The only other Fieldbus being considered is ProfiBus. ProfiBus has
several different speed ranges. ProfiBus-DP runs at rates up to 12.5 Mbits per second,
which would appear to be fast enough to meet all of our requirements. The available
literature indicates that with thirty-two sensors on a bus, each making 512-bit transfers,
each transducer can be sampled 500 times per second. This gives a total sample rate of
16,000 samples per second. ProfiBus-DP is a four-wire bus instead of a two-wire bus, but
this may be acceptable or we may be able to adapt it to a two wire bus. There is still quite
a bit of work to be done to determine which bus to use and how to implement it but it
seems to be very possible to accomplish most of our goals.
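The ProfiBus-DP figures quoted above can be cross-checked with a short calculation. Ignoring all protocol gaps gives an idealized upper bound, which leaves room for the quoted 500 samples per second per sensor once real overhead is subtracted:

```python
def ideal_per_sensor_rate(bit_rate=12_500_000, sensors=32,
                          bits_per_transfer=512):
    """Idealized per-sensor sample rate for round-robin transfers on a
    shared bus, ignoring protocol gaps and turnaround time.  The default
    figures are those quoted in the text."""
    bits_per_cycle = sensors * bits_per_transfer  # one transfer per sensor
    return bit_rate / bits_per_cycle              # full cycles per second

ideal = ideal_per_sensor_rate()   # roughly 763 samples/s per sensor
# The quoted 500 samples/s per sensor (16,000 samples/s total) fits
# within this bound once real protocol overhead is accounted for.
assert 500 < ideal < 800
assert 32 * 500 == 16_000
```

The gap between the idealized bound and the published figure is a rough measure of the protocol overhead on the bus.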
SUMMARY
The Laboratory Groups within the Boeing Commercial Airplane Company have started a
project that we expect to lead us to a standard way to incorporate “Smart Sensors” into
our future systems. We have talked to a number of other groups within Boeing and they
are following our development with the hope of being able to use what we develop. At this
time we are making our requirements and findings available to other organizations in the
hope that two things will happen. One, we would like to receive input from other
organizations that have similar jobs to do. If there are requirements that we can incorporate
into our documents that would make the final product more useful to other organizations
then we would like to incorporate them. If this happens we would hope that other
organizations would adopt the same system making the technology more widely available.
This would lead to the second thing that we would like to see, lower cost. The more
people using a technology the less it would cost each individual user.
ACKNOWLEDGMENTS
Eugene M. Ferguson
David J. Hepner
ABSTRACT
The yawsonde is a device used at the U.S. Army Research Laboratory (ARL) to
investigate the in-flight behavior of spinning projectiles. The standard yawsonde consists
of a pair of solar cells and slits that respond to solar rays. The sun is used as an inertial
reference to measure the pitching and yawing motions of the projectile. An FM telemetry
package transmits the sensor data to a ground receiving station for analysis. The standard
yawsonde package is housed in an M577-type artillery fuse body. The spinning motion of
the projectile serves as the sampling rate for the measurements. When the spin rate is not
significantly higher than the yaw rate, multiple sets of sensors must be used to effectively
increase the sampling rate. The pinhole yawsonde sensor was developed for projectiles
that require multiple sets of sensors in a very limited space. This pinhole yawsonde
consists of a number of sensors located behind pinholes placed around the projectile's
circumference. Since each pinhole makes a yaw measurement, many measurements, or
samples, are taken with each projectile spin revolution. More pinhole sensors may be
added to increase the measurement sampling rate. One application of this yawsonde is to
aid in evaluating the performance of tactical devices and inertial systems onboard
projectiles with limited space for instrumentation.
KEY WORDS
INTRODUCTION
The yawsonde is a device that is used at the U.S. Army Research Laboratory (ARL) to
investigate the flight dynamics of instrumented projectiles. The yawsonde is an electro-
optical device that uses the sun as a reference point to measure the in-flight yaw, pitch, and
rolling motion of finned and spin-stabilized projectiles. The components of a yawsonde
include a number of silicon photo-sensitive cells (solar cells), a fixture to hold the cells and
to provide a suitable optical field of view, and a mounting arrangement on the projectile or
shell that provides a geometry such that the yawsonde output is sensitive to projectile yaw
and spinning motion. Associated with yawsondes are signal conditioning circuits and a
radio frequency telemeter that transmits voltage signal outputs from the yawsonde to
ground receiving stations for processing. The pinhole yawsonde sensor was designed for
applications that demand low volume and high data resolution. Such applications are
typical of the advanced munitions being developed for modern battle tanks.
Yawsondes are designed to measure the angle between a projectile’s roll axis and a vector
originating at the center of gravity of the projectile and ending at the sun (the solar vector).
This angle is called the solar aspect angle (σ). The solar aspect angle will change during
the flight of the projectile. It changes because of trajectory effects and because of the
motion of the projectile’s roll axis about its velocity vector. Figure 1 illustrates the solar
aspect angle and the effects of projectile trajectory on solar aspect angle.
A yawsonde requires at least two sensors and a fixture that defines an optical field-of-view
for each sensor. A sensor generates a voltage pulse every time it “sees” the sun. The
signals from both sensors are conditioned, combined, and transmitted to a ground receiving
station by a telemeter on the projectile. Figure 2 illustrates this process with a simple block
diagram. The resultant output of the yawsonde is a train of pulses, usually bipolar pulses
(also illustrated in Figure 2). The geometry in which the sensors are mounted makes the
duty cycle of this pulse train sensitive to the solar aspect angle. The yawsonde requires
that the projectile be spinning in order to measure the solar aspect angle. Since the
yawsonde uses the sun and the spin rate of the projectile as a sampling mechanism, the
spin rate should be on the order of 10 times the maximum yaw frequency in order to
resolve yaw amplitudes. Detailed descriptions of yawsondes used by the U.S. Army are
given in references 1 and 2.
Figure 3 is a plot of typical processed yawsonde data. Note that the vertical axis is labeled
“Sigma-N.” Sigma-N (σn) is the complement of σ (i.e., σn = 90° − σ). Plotting σn produces a
graphical zero that represents the middle of the yawsonde’s field of view. The bias in the
plot is produced by trajectory curvature, and the sinusoidal waveform represents projectile
yawing motion.
PINHOLE YAWSONDE SENSOR PARTS
The pinhole yawsonde sensor is composed of four major components: the pinhole plug, the
mask, the solar cells, and the multisensor body. The actual shape and dimensions of the
body vary with the projectile dimensions and the number of sensors that are desired. This
sensor may, therefore, be configured in several different ways.
The four-sensor configuration contains four pinhole sensors, which provide the resolution
needed for measuring the yaw motion of projectiles with spin to yaw rate ratios of 2.5 or
more. Figure 4 shows the components of this sensor. The pinhole plug is the first
component shown. It helps to define the yawsonde’s field of view and furnishes the
pinhole through which a small beam of light may pass. There is a conical void within the
pinhole plug. This void defines the bounds in which the light that passes through the
pinhole may travel. The pinhole plug is threaded and screws into the body of the test
projectile. Two spanner wrench holes are provided to allow the plug to be screwed into
place.
The mask is a thin, nonelectrically conductive, opaque material with three areas cut out of
it. The two longest cutouts form a shape similar to the letter “V.” The width of each of
these cutouts is the diameter of the pinhole. The third cutout area is used to align the mask over
the solar cells. The “V” shape of the mask is what permits the measurement of projectile
yawing motion. An explanation of this mechanism appears later in this paper. The angle
between the “V” legs was determined by choosing the angle that would use the most solar
cell surface and, therefore, provide the widest yaw angle measuring range.
The solar cells produce a voltage whenever they are exposed to light. Two solar cells are
required for each pinhole sensor, one for each “V” leg of the mask. The solar cells are also
wired with opposite polarity. This causes them to output voltages of opposite sense; that
is, one cell will produce positive voltages and the other will produce negative voltages.
This bipolar output facilitates spin direction monitoring. There is a brass plate on the back
of each solar cell that helps to keep them from fracturing under the shock of a gun launch.
The multisensor body holds the solar cells and the masks and further defines the field of
view of each sensor. It has holes in it so that indexing screws may be used to secure and
align the body in the test projectile. The body is hollow in the middle to allow wires to
pass through. Although the body shown in Figure 4 is for a four-sensor pinhole yawsonde,
the configuration may be modified to accommodate any number of sensors. These
modifications, however, are dependent upon space availability within the test projectile
and will affect the yawsonde’s measurement characteristics. The configuration shown was
designed to fit into a projectile with an inner diameter of 0.75 inches.
Figure 5 shows all of the four-sensor pinhole yawsonde parts assembled in a projectile.
This configuration provides each sensor with a 56° field of view and allows measurement
of σn (90° − σ) values in the range of ±24°. The height of the multisensor body is
0.63 inches.
The pinhole yawsonde sensor, like all of ARL’s yawsonde sensors, relies on the projectile
spin motion to make the yaw measurements. Since the projectile is spinning, whenever the
solar vector enters the sensors’ field of view, a beam of light sweeps across the masked
solar cells. Figure 6 illustrates how the beam sweeps across the mask as the projectile
spins. This beam of light cuts across the “V” legs at different places, depending upon the
projectile yaw angle, or solar aspect angle. Figure 7a shows two extreme yaw angles for a
projectile. Figure 7b shows two extreme paths that may be taken by the beam of light as it
crosses the “V” legs. A voltage pulse is produced by one of the solar cells each time the
light beam crosses a “V” leg. When the beam crosses one leg, a positive pulse is
produced, and when the beam crosses the other leg, a negative pulse is produced. An ideal
set of pulses for extreme yaw angles is shown in Figure 7c.
Figure 8 shows the basic geometry of a pinhole yawsonde sensor. From this geometry, it
can be shown that the distance between the “V” legs is defined by the equation:
At a constant projectile spin rate, the time between pulses will be proportional to the
separation between the two legs of the “V,” and s can be easily determined. When most
projectiles are fired, however, the spin rate will vary with time. This causes the absolute
pulse spacing to change. To determine s in this case, it is assumed that the projectile spin
rate does not change rapidly as the sun goes from one pinhole to the next. Using this
assumption, we can measure the ratio of J to T, where J is the time between positive and
negative pulses and T is the time between positive pulses (Figure 9). Using this ratio
allows σ to be determined regardless of the spin rate, since J/T for a particular σ is
constant for all spin rates. The solar aspect angle, σ, may be determined with more
sophisticated algorithms when the spin rate changes rapidly by iteratively determining the
effect of the varying spin rate.
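The ratio measurement can be sketched numerically. The function name and the pulse times below are illustrative, but the ratio itself is the J/T quantity defined above:

```python
def crossing_ratio(t_pos, t_neg, t_next_pos):
    """Ratio J/T from pulse times: J is the interval from a positive pulse
    to the following negative pulse, and T is the interval between two
    successive positive pulses.  The ratio is independent of spin rate."""
    J = t_neg - t_pos
    T = t_next_pos - t_pos
    return J / T

# The same yaw geometry observed at two different spin rates yields the
# same ratio, so sigma can be recovered without knowing the spin rate.
slow = crossing_ratio(0.0, 3.0, 10.0)   # 10 ms per revolution
fast = crossing_ratio(0.0, 1.5, 5.0)    # 5 ms per revolution
assert slow == fast == 0.3
```

Mapping the ratio back to the solar aspect angle then requires only the fixed mask geometry, which is known from the sensor design.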
SUMMARY
A pinhole yawsonde sensor has been designed for measuring projectile yaw motion of
projectiles with limited space and low spin-to-yaw rate ratios. The sensor discussed in this
paper focuses on the four-sensor configuration, but it can be configured to meet as many
sensor requirements as space permits.
This sensor has been used in testing sponsored by the Army Research Development and
Engineering Center (ARDEC) X-Rod program.
REFERENCES
ABSTRACT
Over a dozen commercial remote sensing programs are currently under development
representing billions of dollars of potential investment. While technological advances have
dramatically decreased the cost of building and launching these satellites, the cost and
complexity of accessing their data for commercial use are still prohibitively high.
This paper describes Reconfigurable Gateway Systems which provide, to a broad
spectrum of existing and new data users, affordable telemetry data acquisition, processing
and distribution for real-time remotely sensed data at rates up to 300 Mbps. These
Gateway Systems are based upon reconfigurable computing, multiprocessing, and process
automation technologies to meet a broad range of satellite communications and data
processing applications. Their flexible architecture easily accommodates future
enhancements for decompression, decryption, digital signal processing and image / SAR
data processing.
KEY WORDS
INTRODUCTION
A host of remote sensing programs will be deployed in the next five years to meet the
demand for high resolution imaging data for commercial and scientific applications. These
applications include environmental monitoring, precision farming, urban planning, resource
exploration management, and tactical reconnaissance. While technological advances have
decreased the costs of building and launching satellites, the costs and complexity of timely
access to satellite data are still prohibitively high.
Remotely sensed image data with the highest commercial value will have a one to three
meter spatial resolution. The resulting telemetry data stream characteristics (i.e., 100+
Mbps) rival those of NASA’s most ambitious program, the Earth Observing System.
Historically, NASA’s cost to operate and implement high data rate telemetry systems has
been tens to hundreds of millions of dollars. Affordable commercial ground systems will
instead need to provide:
• Multi-mission support
• Real-time or near real-time processing and distribution
• Autonomous remote operation
• Connectivity to commercial RAID and tape storage devices
• Interoperability with commercial network environments
• Flexibility to meet changing / future requirements
TelSys Gateway Systems address these challenging future ground station requirements by
performing the sophisticated telemetry and networking functions required to interconnect
local and wide area networks with satellite communications networks. Figure 1
depicts this interconnectivity.
TelSys has developed an approach which meets these requirements by using a novel mix
of reconfigurable computing technologies, object oriented embedded real-time software,
and object oriented system control and management software.
Reconfigurable computing involves the use of in-circuit reprogrammable hardware
elements to provide the real-time processing of data. By using an array of these
dynamically reconfigurable hardware elements with object oriented real-time and
workstation software, virtually any processing requirement can be accommodated. This
paper describes how these technologies are used to implement systems of unparalleled
performance and functionality at a very low cost.
GATEWAY SYSTEMS
A block diagram of the RCP is shown in Figure 2. The platform is based on the industry
standard VMEbus. This bus, using VME64 extensions, provides a theoretical maximum of
80 Mbytes/second transfer rate. However, arbitration overhead on this bus can
substantially reduce the actual useable bandwidth. Therefore, this bus is used primarily for
the transfer of control and status information between cards.
The real-time transfer of high speed data between cards is handled using the standard
RACEway Interlink, the High Speed Backplane, or both. For the 6U platform, only
RACEway is used. This ANSI/VITA standard provides up to 160 Mbytes/second point to
point connections between cards. The High Speed Backplane is used for 9U form factor
cards and can work in conjunction with the RACEway Interlink. The High Speed
Backplane uses a third backplane connector providing multiple high speed parallel paths
between data processing cards. Six master channels, each capable of sustained data rates
of over 320 Mbps, can be further subdivided into independent subchannels providing over
5 Gbits/second of aggregate transfers.
[Figure 2. RCP block diagram: the System Control elements (Master Controller, Flash
Memory, Ethernet Interface for external network-based control and status via object
management), Global Memory, and other network and peripheral interfaces (ATM,
FDDI, ...), interconnected by the VMEbus and RACEway.]
The System Control portion of the Gateway System consists of a number of baseline
functional elements: the Master Controller, Flash Memory and an Ethernet Interface. The
Master Controller card acts as the VMEbus arbiter and oversees all activity in the unit. An
Ethernet interface provides network access and control via remote workstation-based
Gateway Management Software. The system can also be controlled locally via a terminal
interface. Flash memory, residing on front-panel removable PCMCIA modules, stores all
the system boot-up firmware and application code. This allows for very easy field
maintenance and upgrade. Other cards may be added as needed including global memory
for data buffering and additional network and peripheral interface cards. Network and
peripheral interface cards support standards such as ATM, FDDI, SCSI-2, HiPPI and
Firewire.
Real-time data processing is performed using Virtual Information Processor (VIP) cards.
Up to ten of these cards can be accommodated in a RCP depending on the system
processing requirements. The cards feature a generic processing engine based on
reconfigurable computing technology. The processing logic for all high speed data
manipulation is performed in an array of Xilinx 5200 Series FPGA parts. The combination
of these parts provides up to 108,000 reconfigurable gates for algorithm implementation. In
addition, there are a variety of memory elements to provide data buffering, temporary data
storage and look-up tables, including six 8KByte First In First Out (FIFO) memories, two
16KByte Dual-Ported Random Access Memories (DPR) and two Single In-Line Memory
Modules (SIMM) which can accommodate up to 8MBytes of Static Random Access
Memory (SRAM). The processing algorithms are dynamically downloaded into the RCP at
system boot-up, or during run-time from the PCMCIA flash memory or the network.
Physical input and output interfaces for these cards are provided by plug-in modules based
on the industry standard Peripheral Component Interconnect (PCI) Mezzanine Card
(PMC) format. The 9U VIP subsystem supports three PMC modules while the 6U VIP
supports up to two PMCs. PMC modules are used to implement CPUs, DSPs, serial
interfaces, ATM, SCSI-2, FDDI, NTSC, as well as complex functions such as error
corrections or data compression.
In addition to platform evolution, the library of processing instances is constantly growing,
and the library of PMC modules, which are usable across all platforms, is being enhanced to
include new functions. PMC modules for Rice decompression, MPEG-2 decompression,
and TAXI are currently planned. Integration of several third-party modules is also
planned for Digital Signal Processors (DSPs), Fibre Channel and PCMCIA interfaces.
In addition to the VIP hardware elements, a key feature of a Gateway System is that it
provides a high-performance software environment that supports dynamic reconfiguration.
The Local Control Software (LCS), which is layered on top of Wind River Systems’
VxWorks real-time operating system, provides a client-server architecture with an open
framework for the application of generic Gateway Systems and custom, application-
specific subsystems.
LCS, shown in Figure 5, includes two categories of reusable software components: run-
time system service components and general-purpose subsystem software component
templates.
Among its run-time system services, LCS includes a number of software components
that provide support for:
• Managing the allocation of memory and other limited resources across subsystems
In addition, LCS also provides a number of software modules that can be used verbatim to
control and monitor many application software subsystems and, when necessary, can be
extended for specific applications. Thus, an application developer can build a software
subsystem simply by implementing application-specific subroutines and linking them with
the OTS components provided in the LCS libraries. These application-specific subroutines
effectively overload generic subroutines that are already implemented in the LCS libraries.
Some of the LCS subroutines include:
Since the processing elements of a Gateway System are based on the same configurable
VIP hardware elements, the LCS software provides a means of boot-strapping the system
to a neutral state, so it can then be configured for application-specific space data
networking operations. This means that the actual configuration of a system is
determined at run time, rather than a priori, from configuration files that specify which
processing instances are loaded into each VIP hardware element and which software
subsystems are allocated for execution.
Upon system boot, a generic software subsystem is assigned to each VIP hardware
element. A series of master controlling elements then assigns application-specific
subsystems (and VIP processing instances) to hardware elements, based on run-time
interpreted files. This set of master controlling elements includes the Hardware Resource
Database, Master Instantiator, Subsystem Manager, Status Buffer Manager, Message
Queue Manager, Alias Manager and Local Instantiator.
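The boot sequence above can be sketched in miniature. The following is an illustrative
model only, with hypothetical file format and names (the paper does not specify the
configuration syntax or the instantiator interfaces): VIP slots start in a neutral, generic
state, then a run-time-interpreted file assigns processing instances and software subsystems.

```python
def parse_config(text):
    """Parse hypothetical lines of the form 'slot instance subsystem'."""
    assignments = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        slot, instance, subsystem = line.split()
        assignments[int(slot)] = {"instance": instance, "subsystem": subsystem}
    return assignments

def boot(assignments, num_slots):
    """Give every VIP slot a generic subsystem, then apply the configured ones."""
    slots = {n: {"instance": None, "subsystem": "generic"} for n in range(num_slots)}
    for slot, cfg in assignments.items():
        slots[slot] = cfg  # load processing instance, allocate subsystem
    return slots

config = """
# slot  processing-instance   software-subsystem   (hypothetical names)
0       frame_sync.bit        telemetry_ingest
1       reed_solomon.bit      error_correction
"""
slots = boot(parse_config(config), num_slots=4)
```

Unconfigured slots simply keep their generic subsystem, which mirrors the neutral-state
boot-strapping the text describes.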
[Figure: Gateway System context. Gateway Management Software (GMS) [SNMP,
CORBA, Web] provides status and control, commands (CMDs), automated ops, remote
monitoring, and processing control from a station GUI; the return-link (R/L) data stream
flows into a Telemetry Data Repository and, via Telemetry Data Catalog Management
[Web], to data sets for telemetry data analysis.]
CONCLUSION
A worldwide explosion of advanced remote sensing and space science programs will drive
the demand for more advanced ground station processing systems supporting a much
broader, and more mobile customer base. These systems must provide greater performance
and functionality while being significantly lower in cost and size. Gateway Systems meet
the next-generation program needs. They incorporate the best of current object-oriented
hardware and software technologies, including on-the-fly reconfigurability in a fraction of a
second.
ACRONYMS
Rodney M. Homan
Naval Air Warfare Center, Aircraft Division
Patuxent River, Maryland 20670
Data Processing and Display Branch 515100
ABSTRACT
The CAIS Toolset Software (CTS) provides the capability to generate formats and
load/verify airborne memories. The CTS is primarily a software applications program
hosted on an IBM compatible portable personal computer with several interface cards. The
software will perform most functions without the presence of the interface cards to allow
the user to develop test configurations and format loads on a desktop computer.
KEY WORDS
BACKGROUND
The CAIS is a time division multiplexed digital data acquisition system consisting of a
family of building blocks interconnected via the CAIS bus. The system can handle output
data rates from 2 kilobits per second to 50 Megabits per second (Mbps) in word lengths of
12 and 16 bits. The CAIS is fully programmable with a capacity of at least 8,000 input
channels. The output data is multiple IRIG-compatible Pulse Code Modulation (PCM)
data streams for telemetry and recording with additional special purpose data streams.
The PFU is primarily a CAIS ground support unit, hosted on a minimum 386 IBM
compatible personal computer (PC), 8 MB RAM, 14" VGA Monitor, with mouse or
mouseless system. The PFU will support two interface boards--the CAIS Bus Interface
(CBI) card and a Synchronous Data Link Control (SDLC) card. This provides the user
with the flexibility to select the PC (portable or otherwise) that meets their size,
environmental, ruggedness, and TEMPEST requirements.
The CTS software which resides on the PFU provides the capability to generate and
modify formats; load, modify, and verify the memory contents of CAIS airborne Line
Replaceable Units (LRUs); verify airborne system configuration; and display/record the
results of all significant operations. Decommutating and limit checking raw PCM data will
be available as user-selected Commercial-off-the-Shelf (COTS) application programs. This
will allow the user maximum flexibility in selecting an appropriate COTS program to
obtain the desired PCM analysis capabilities. Interfacing to other applications will be
supported using Telemetry Attributes Transfer Standard (TMATS) files.
The CTS is designed to allow the user to create tests for an instrumentation system where
the hardware configuration is evolving based on a required set of parameters or for an
instrumentation system with a predetermined hardware configuration. The CTS may also
be used to design a test where both hardware configuration and parameter definitions are
being developed. The CTS's reports, user input validation, and test validation features may
be used to help the user analyze test data. Based upon the results of the analysis, the user
may modify the test through the user interface features which include adding, copying,
and deleting the test entities. Parameter connections to channels and buses can also be
modified.
There is a one-to-many relationship between a Tail and a Test; each Tail may have many
associated Tests. [Figure: entity relationships (one-to-one and one-to-many) among Tails,
Tests, SCCs, SCMs, and Channels/Buses.]
Parameters are associated at the Test level, not the Tail level, to allow each Test to stand
alone and be composed of only the parameters required for that test. Each parameter in the
test is defined by the user as being associated with a specific signal type, thus allowing the
user to define parameters prior to defining the specific hardware configuration or prior to
connecting the parameter to a specific hardware channel. As the user defines the hardware
configuration and connects the parameters to the hardware channels, the CTS verifies the
parameter signal type matches the signal type inherent to the hardware channel.
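The signal-type check described above amounts to a simple guard on connection. This is a
hypothetical sketch (the dictionary fields and error message are assumptions, not the CTS
data model): a parameter connects to a channel only when the signal types match.

```python
def connect(parameter, channel):
    """Connect a parameter to a hardware channel, enforcing signal-type match."""
    if parameter["signal_type"] != channel["signal_type"]:
        raise ValueError(
            f"parameter {parameter['mnemonic']} is {parameter['signal_type']}, "
            f"channel expects {channel['signal_type']}")
    channel["parameter"] = parameter["mnemonic"]

param = {"mnemonic": "ENG_TEMP1", "signal_type": "thermocouple"}
good_channel = {"signal_type": "thermocouple", "parameter": None}
bad_channel = {"signal_type": "strain_gauge", "parameter": None}

connect(param, good_channel)        # types match: connection succeeds
try:
    connect(param, bad_channel)     # types differ: connection rejected
except ValueError:
    pass
```

Because parameters carry their signal type from the moment they are defined, this check
can run even before any hardware configuration exists, as the text notes.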
DESIGN APPROACH
The implementation of CTS was designed using Object Oriented Design (OOD) and
implemented using Object Oriented Programming (OOP). The CTS Computer Software
Configuration Item (CSCI) consists of Computer Software Components (CSCs) called
classes and Computer Software Units (CSUs) called operations. The CTS CSCI is
designed to operate within the Microsoft Windows 3.1 environment. The Microsoft Visual
C++ (MSVC) Windows Development System and Tools were used for the CTS CSCI
implementation. MSVC is an interactive development environment that provides separate
tools for: laying out the Windows program skeleton, designing the user interface,
generating, compiling, linking and debugging code. The design of the CTS CSCI makes
extensive use of the Microsoft Foundation Class (MFC) Library application framework.
The MFC application framework not only provides a superset of the C++ class library, but
also defines the application structure of the Windows program. The CTS CSCI will be
compiled and linked for 16-bit architecture, but options for 32-bit architecture machines
are available and will be implemented later in the life cycle.
In order to allow the user a more extended view of the test information, CTS makes use of
what is known in the Microsoft software world as modeless dialog boxes. Most Windows
programs require a user to cancel or remove a screen before working in another screen.
These screens are displayed as Modal Dialog boxes. Modeless Dialog boxes allow screens
to be displayed continuously, thus allowing the user to input data in multiple screens
without actually reopening them. This will allow the CTS user to view and enter parameter
information, hardware information, and format information without opening, closing and
reopening screens.
FUNCTIONAL AREA DESCRIPTION
The functional areas of CTS are dependent on the basic CAIS functionality and were
determined through extensive user requirements analysis. The CTS CSCI descriptions are
provided in Table 1 and the functional areas are shown in Figure 2.
[Figure 2. CTS functional areas: PFU Control (connected to all functional areas), with
.ini files, .rpt files, loadfiles, TMATS files, LRU* loading, and reports.
*LRU = Line Replaceable Unit]
The CTS user interface is a Windows environment which guides the user in defining test
data. This takes advantage of menus, pull-down lists, various selection buttons, and the
enabling/disabling of relevant control buttons. This design limits the user to valid choices
rather than free entry. Test information is included in report files, loadfiles, and database
files. The CTS user interface manages these files and the data within them to avoid
incorrect manipulation of test information and to ensure consistency.
Through the user interface, test entries may be copied within or between tests, depending
on the entities. These include parameters, formats, DAUs, CAIS Buses, and Hardware
Configuration definitions. Loadfiles and report files may also be copied.
The CTS allows for multiple setup files (*.ini files) to store/retrieve default preferences
tailored to user needs. These preference settings control the view of the screen, report
setups, and the test creation defaults.
A button bar provides frequently used functions such as open and save tests, modify
preference settings, and perform Initiated Built-in Test (IBIT) on PFU hardware and
airborne units.
The order of the top level menu bar and associated pull down menu items allows the user
to logically traverse the CTS. The menu items are Tail/Test, Parameter, H/W (Hardware)
Configuration, Format, Load/Verify, Window, and Help. The user begins by creating and
opening a test. The user then defines parameters and hardware configuration. Finally,
formats may be populated and loadfiles generated. If the PFU is configured with the CAIS
Bus Interface card or the SDLC interface card, then Load/Verify functions associated
with uploading and downloading the airborne units are enabled.
TAIL/TEST
Tests may be created, copied, saved, and closed through the CTS. As part of tail/test
information, the user may change, save and restore default preference settings. These
defaults allow for:
• Two lines of text for customized report header titles
• A maximum Parameter Mnemonic length between 8 and 32 characters
• Default Format Word size of 12 bits or 16 bits
• Default frame sync word pattern
• A choice to display lists of parameters on various CTS screens sorted by Name or
by Mnemonic
• A choice to display selected values on various CTS screens in HEX, Decimal or
Octal
• Default filenames for reports
• Complete Test Description (CTD) - a default set of reports chosen from a list of
available reports; these may be output with a keystroke.
The user is given the opportunity to override a number of these default preference settings
throughout the CTS session.
PARAMETERS
New parameters may be created and existing parameters opened to modify definitions.
The user selects the parameter to open by choosing from a list of existing parameters
sorted by mnemonic or name. Once a parameter is open, all the existing parameter
information is displayed on a screen as shown in Figure 3.
The user may enter signal range and related engineering unit information, and choose a
signal and calibration type for the parameter using this screen. Once a calibration type is
chosen, the user supplies details in sub-screens displayed via the setup buttons. The screen
displayed depends on the type chosen. For example, if Data Pairs is the chosen
calibration type, selecting the setup button displays a screen to collect the data pairs from
the user. If Coefficients is the chosen type, selecting the associated setup button displays a
screen to collect the coefficients from the user.
The Sample Information box on this screen allows the user to enter the required sample
rate for the parameter which is used by CTS for automatic format generation. The user
also enters the parameter sample size in bits. If the parameter sample size is greater than
the format word size, multiple consecutive format words will be required to collect the
parameter in the PCM stream. CTS assists the user in this task by enabling the
appropriate number of multimnemonic syllable edit boxes on the screen allowing the user
to supply additional mnemonics for this purpose.
Figure 3. Parameter Setup
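The multimnemonic syllable behavior above follows a straightforward ceiling relationship:
when a sample is wider than one format word, the number of consecutive words (and thus
extra mnemonics) the screen enables is the sample size divided by the word size, rounded
up. A minimal sketch, assuming that simple relationship:

```python
import math

def syllables_needed(sample_bits, word_bits):
    """Number of consecutive format words needed to carry one sample."""
    return math.ceil(sample_bits / word_bits)

# e.g. a 24-bit sample in a 16-bit-word format occupies 2 consecutive words
n = syllables_needed(24, 16)
```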
The total gain and offset is displayed on this screen. The user supplies the information
contributing to these totals by using the associated setup button. Hardware connection
information for the parameter is displayed for information only. The user actually defines
and modifies these connections using the H/W Configuration menu items and screens.
Through the H/W Configuration screens, the user defines the airborne unit H/W
configuration and defines the parameter connections to the H/W. Once a user chooses a
specific controller or stand alone CAIS DAU configuration, CTS enables the appropriate
menu items to allow the user to further define the test hardware configuration.
Controller configurations allow the user to define remote DAUs at specified CAIS Bus
locations. Depending on the controller chosen, the user may define Remote DAUs on 1, 2
or 3 CAIS Buses. Each CAIS Bus configuration is displayed on a separate screen. The
user may choose to add a DAU to the CAIS Bus through the Add button which displays a
screen to collect the DAU type, DAU ID, and user comment information for the remote
DAU. Once a remote DAU is added to a CAIS bus, it may be highlighted on the CAIS
Bus screen and copied, deleted, edited, or moved in a manner consistent with the ADD
button.
In order to configure the specifics inherent to each remote DAU, the user highlights the
DAU and chooses the Configure button. The Configure button displays a context sensitive
screen depending on the DAU type. These screens display default settings for the DAU
which are modifiable by the user. CTS provides the capability to connect parameters to the
DAUs in this same manner; by highlighting the DAU and choosing the Parameter Connect
button. The Parameter Connect button is also available from the specific DAU screen
displayed by the Configure button. The Parameter Connection Screen displays a list of
defined parameters. The user may highlight a parameter and connect or disconnect it from
the DAU hardware.
If a standalone configuration is chosen by the user, the screen to configure that DAU is
automatically displayed with defaults modifiable by the user. The Parameter Connect
button is available from the Configure screens and functions consistently with the
Parameter Connect button described in the previous paragraph.
FORMATS
The user defines formats either manually or by choosing automatic format generation. To
manually create a format, the user places parameters (which are chosen from a displayed
list of defined parameters) in the cells of a matrix representing the PCM format. Once a
parameter is in a cell, the user has quick access (by double-clicking that cell) to view the
parameter attributes and hardware connection information. If a parameter requires multiple
syllable mnemonics as defined in the Parameter section above, CTS automatically places
them in the format consecutively when the first parameter mnemonic is placed in the
format.
The user may choose to have CTS automatically generate formats. The user chooses which
parameters to include in the format and CTS generates the format based upon the required
sample rates provided (refer to the Parameter Open screen above). The automatic format
generation algorithm is designed to optimize bit rate. Once the format is generated, it is
displayed in matrix format and may be edited manually.
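A rate-based placement like the one described can be sketched very simply. This is an
illustrative toy, not the actual CTS generation algorithm (which is unspecified here): each
parameter receives its required samples per frame, spaced evenly, with higher-rate
parameters placed first. It assumes each rate divides the frame length.

```python
def generate_format(frame_words, params):
    """params: list of (mnemonic, samples_per_frame); returns the frame as a list."""
    frame = [None] * frame_words
    # place highest-rate parameters first so their even spacing is preserved
    for name, rate in sorted(params, key=lambda p: -p[1]):
        stride = frame_words // rate
        for k in range(rate):
            slot = k * stride
            while frame[slot] is not None:  # slide past occupied cells
                slot += 1
            frame[slot] = name
    return frame

frame = generate_format(8, [("P1", 4), ("P2", 2), ("P3", 1)])
```

The resulting matrix-style frame can then be edited manually, as the text describes for the
real tool.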
LOAD/VERIFY
In order to reduce loadfile compile time, loadfiles may be generated on an individual basis
or test basis. In order to conserve disk space, CTS provides the capability to compress
loadfiles.
REPORT GENERATION
The reports are generated from the database tables or generated from results of CTS
processes such as loading airborne memory or CTS BIT. A Complete Test Description
(CTD) may be chosen through the Reports screen, allowing the user to quickly output the
selected set of reports. The CTS provides reports for:
• All Parameters
• Selected Parameters
• Unused Parameters
• Used Parameters
• Database Hardware Configuration
• Configuration Match Test
• Initiated BIT
• CTS BIT
• LRU Memory Load/Compare
• Formats, either in matrix form or tabular form
TMATS TRANSLATION
HELP
CTS help is available on a screen by screen basis or by choosing the appropriate topic
from a Table of Contents.
COTS
In addition to the MFC Library, Commercial Off-the-Shelf (COTS) software products are
a part of the CTS design. The MFC Library is the foundation of the CTS system. The Grid
Tool is integrated into the User Interface layer to provide large matrix display and
manipulation. Crystal Report Writer generates reports from the Paradox database tables
and PKWARE provides file compression and decompression. Both are accessed from the
Application layer classes. The Paradox ODBC driver software interfaces with the Record
Set Layer and the Paradox formatted database tables. Smart Heap provides memory
management and is accessed from the User Interface, Application and Database class
layers. Crystal Report Writer accesses the Paradox formatted database tables directly.
SUMMARY
The CAIS Toolset Software is being developed by the CAIS Joint Program Office as part
of a DoD effort to develop airborne flight test capability that will facilitate commonality of
instrumentation between aircraft types and interoperability between all test ranges. The
CTS will support a broad array of functions such as generating and modifying formats;
loading modifying and verifying memory contents of CAIS LRUs; executing BIT;
verifying airborne system configurations; and displaying/recording results of all significant
operations. Additional functions, such as decommutating and limit checking can be
performed through associated COTS applications programs. The CTS design implements
these functions based on the CAIS standard configurations using Object Oriented Design
in a Microsoft Windows 3.1 environment. The resulting application is user friendly,
efficient and can be modified easily through the OOD module approach.
DIGITAL FILTERING OF MULTIPLE ANALOG CHANNELS
ABSTRACT
The traditional use of active RC-type filters to provide anti-aliasing filters in Pulse Code
Modulation (PCM) systems is being replaced by the use of Digital Signal Processing
(DSP). This is especially true when performance requirements are stringent and require
operation over a wide environmental temperature range. This paper describes the design of
a multi-channel digital filtering card that incorporates up to 100 unique digitally
implemented cutoff frequencies. Any combination of these frequencies can be
independently assigned to any of the input channels.
KEY WORDS
Digital Filtering, Digital Signal Processing (DSP), Telemetry, Signal Conditioning, Anti-
aliasing.
INTRODUCTION
Anti-aliasing filtering of transducer inputs to a sampled data system has historically been
implemented with analog filters. Typically, a six-pole Butterworth filter with a cutoff
frequency equal to the maximum signal frequency has been used. If the bandwidth of the
system is limited, filters with more poles could be used to decrease the over-sampling
ratio. Analog filters requiring high accuracy, over a range of temperatures and for long
periods of time, are expensive and difficult to design.
If the DSP and PCM are synchronized, then there are constraints that must be imposed on
the format to allow proper DSP operation, and each of the filter coefficient tables and
decimation tables must be tailored for each PCM format. Asynchronous operation was
chosen for maximum flexibility and minimum development cost for each application.
SAMPLE RATE
Because the DSP is a sampled input system, it still requires that something be done to
prevent aliasing of the input signal components. If the signal energy spectrum is not known
in advance (as in this case) then some type of analog filter is still required. In order to keep
the analog anti-aliasing filter simple and of minimal effect on the overall response, the
sampling rate must be much higher than the analog filter's cutoff. We chose a 4 KHz two
pole Butterworth analog filter, which gives about 40 dB of attenuation at the lowest DSP
aliasing frequency (at 41.7 KHz - 1.3 KHz = 40.4 KHz). A problem with going above a
41.7 KHz sample rate is that more points are taken, which lengthens the coefficient
table and increases processing time. The common gain stage of the signal amplifier
and the ADC must also work faster.
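The roughly 40 dB figure quoted above follows from the standard Butterworth magnitude
response, attenuation = 10·log10(1 + (f/fc)^2n) dB for an n-pole filter. A quick check for
the two-pole, 4 KHz filter at the 40.4 KHz aliasing frequency:

```python
import math

def butterworth_atten_db(f, fc, poles):
    """Attenuation in dB of an ideal n-pole Butterworth low-pass at frequency f."""
    return 10.0 * math.log10(1.0 + (f / fc) ** (2 * poles))

atten = butterworth_atten_db(40.4e3, 4.0e3, poles=2)  # roughly 40 dB
```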
FIR FILTERS
The type of digital filter used for this design is the finite impulse response (FIR) filter. The
FIR filter gets its name from the fact that it responds to an impulse input for only a finite
amount of time. This is because the filter has no feedback of the output signal to be mixed
with the input signal. The output is only a linear combination of present and past inputs
(see Figure 1). There is no analog equivalent to this type of filter. One problem with this
type of filter is that if the sampling frequency is much higher than the cutoff frequency, a
very high order filter is required, resulting in a large number of clock cycles to calculate
each output point. Since the output values do not rely on knowledge of previous output
values, decimation of the input sample frequency to a lower output sample
frequency can be achieved by calculating only the desired values. The result is a much
faster filter. Another option is multistage FIR filters with the first stages acting as anti-
aliasing filters for later stages. The resulting sum of stages is faster than using only one
stage.
FIGURE 1
FIR FILTER BLOCK DIAGRAM
Referring to Figure 1, the 99th order FIR filter works as follows. The 16 bit input signal is
multiplied by a0 and added to the previous input multiplied by a1. This sum is then added
to the input from two time periods ago (multiplied by a2) and so on until the sample from
98 periods ago is multiplied by a98 and added into the sum.[1, p68]
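The weighted sum just described can be written directly. The sketch below uses a short
generic coefficient list for illustration (the card's filters use 99 taps, a0 through a98);
inputs before time zero are taken as zero, and a simple moving average serves as the
example coefficient set.

```python
def fir(a, x):
    """Direct-form FIR: y[n] = sum_k a[k] * x[n-k], no output feedback."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, ak in enumerate(a):
            if n - k >= 0:          # samples before t=0 are treated as zero
                acc += ak * x[n - k]
        y.append(acc)
    return y

# simplest FIR example: a 4-tap moving average applied to a step input
y = fir([0.25, 0.25, 0.25, 0.25], [1.0, 1.0, 1.0, 1.0, 1.0])
```

Note the finite memory: once the input stops changing, the output settles after one
tap-length of samples, which is exactly the "finite impulse response" property named above.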
DECIMATION
Because the DSP filter is after the ADC converter, there can still be frequency fold back or
aliasing problems. If the sample rate of the ADC is the same as for the analog filter case,
then the same accurate analog anti-aliasing filter would be required. To get around this
problem, the input is sampled at a much higher rate than the output sample rate. This
allows a much simpler input anti-aliasing filter. However, for the output rate of the DSP
filter to be at the proper lower system rate, a process known as decimation is used. In this
process, only selected output values from the filter output calculations are used. If the
input sample rate is some integer multiple of the output rate, the process reduces to a very
simple procedure of only outputting every Nth output calculation, where N is the ratio of
input to output sample rates. Because the other output values are not used, they do not
need to be calculated, and the total amount of calculation time is reduced
proportionally.[1, pp87,93,126]
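The "only every Nth output" idea changes one line of the FIR loop: the output index steps
by the decimation factor, so the skipped outputs are never computed and the work drops by
the same factor. A minimal sketch:

```python
def fir_decimate(a, x, N):
    """FIR filter with N:1 decimation: compute only every Nth output sample."""
    y = []
    for n in range(0, len(x), N):   # step by N; skipped outputs are never computed
        acc = 0.0
        for k, ak in enumerate(a):
            if n - k >= 0:
                acc += ak * x[n - k]
        y.append(acc)
    return y

# 2-tap averaging filter with 2:1 decimation on a short input
y = fir_decimate([0.5, 0.5], [2.0, 4.0, 6.0, 8.0], N=2)
```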
FILTER STAGES
With an FIR filter, the farther apart the sample rate is from the cutoff frequency of the DSP
filter, the longer the coefficient table (number of calculations per output data point). Given
the DSP chip used, it was decided to limit the coefficient table to 256 points as part of the
DSP memory allocation strategy. While the number of calculations is reduced by
decimation, the calculation length of 256 required that the filtering be done in multiple
stages. It was found that with 256 points, the early stages could decimate up to 20 times,
while the last stage, with its sharper cutoff, is limited to a decimation of eight. Available
memory and processing time limited the design to four stages maximum.
With an input rate of 41667 SPS, three stages of 20:1 decimation and one stage of 8:1
decimation, the result is 41667/(20x20x20x8) = 0.65 SPS. For a sample rate to cutoff
frequency ratio of four, 0.16 Hz is the lowest cutoff frequency that can be calculated.
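Spelling out that arithmetic, since it sets the lower frequency limit of the whole design:

```python
# lowest achievable output rate and cutoff, from the stage limits given above
input_sps = 41667
output_sps = input_sps / (20 * 20 * 20 * 8)  # three 20:1 stages, one 8:1 stage
lowest_fc = output_sps / 4                   # rate-to-cutoff ratio of four
```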
CUTOFF FILTER
While the specification is +/-0.05 dB up to Fc, and -65 dB at 2 Fc, most filters are actually
meeting +/-0.005 dB at Fc, ignoring any effects of the analog front end filter. At 2 Fc, most
filters are in a null of at least 72 dB in the output stage. The analog filter adds 3.01 dB at
4000 Hz, 0.26 dB at 2000 Hz, and 0.02 dB at 1000 Hz. The filter coefficients were
obtained using a DSP filter design program from Momentum Data Systems, and are Kaiser
Window designs. A typical filter response is shown in figures 2 and 3. [2]
GROUP DELAY
Filters with very sharp rolloff exhibit a correspondingly large group delay. In DSP, group
delay arises from each calculation and decimation. Because these are fixed for a
particular filter, the filter delay is fixed; therefore the phase shift versus frequency is linear.
INTERNAL CALIBRATION
Calibration takes 256 samples of the input and averages them to get a level relatively free
of noise. An error delta is then computed based upon the input signal versus the desired
output as defined in a look-up table.
Calibration attempts to correct any parasitic effects that would cause a deviation in the
card's transfer function from its ideal value of Y = G*V + O, where Y is the output, G is the
gain, V is the input, and O is the offset. Gain calibration is fully defined for all of the
card's gain/channel
combinations. The ideal set points can be set by the user. Gain calibration is performed
after the final filter stage calculation. A subsequent offset calibration is performed. Since
gain and offset calibration interact, successive iterations must be performed until the
interaction error reduces to a value below the quantization level.
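The iteration described above can be sketched as follows. The linear channel model, the zero/full-scale set points, and the convergence threshold are assumptions for illustration only:

```python
def calibrate(raw, quant_lsb, max_iters=50):
    """raw(v) -> uncorrected reading of input v. Returns (g, o) such that
    g * raw(v) + o ~= v. Gain is corrected first, then offset; because the
    two corrections interact, the loop repeats until both errors fall below
    the quantization level."""
    g, o = 1.0, 0.0
    for _ in range(max_iters):
        zero = g * raw(0.0) + o            # zero-scale set point
        full = g * raw(1.0) + o            # full-scale set point
        span_err = (full - zero) - 1.0     # deviation from the ideal span
        if abs(span_err) < quant_lsb and abs(zero) < quant_lsb:
            break
        g /= (1.0 + span_err)              # gain correction first...
        o -= g * raw(0.0) + o              # ...then re-zero the offset
    return g, o

# Example channel with 5% gain error and 0.02 offset error
g, o = calibrate(lambda v: 1.05 * v + 0.02, 1e-6)
print(round(g * (1.05 * 0.5 + 0.02) + o, 6))   # 0.5 (mid-scale reads correctly)
```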
FIGURE 2
ROLLOFF BODE PLOT
FIGURE 3
ROLLOFF BODE PLOT
A percent tolerance (customer definable) for each gain/channel for both zero scale and full
scale is available. This sets the limits on how far calibration will attempt a correction
before the DSP assumes a fault condition exists and halts subsequent computations.
Calibration can be performed with a precision reference input signal, or with the actual
aircraft signal. Calibration can also be performed with an internal input short, or an internal
input reference (via the bridge balance DAC). If the internal reference is used, a look-up
table sets the DAC. During calibration the offset DAC is also set to a value from a
calibration lookup table.
BOARD TEST
The board contains boundary scan logic for a digital Field Programmable Gate Array
(FPGA) and an equivalent port on the DSP to ease board level test. All DSP firmware is
located on a card mounted Electrically Erasable Programmable Read Only Memory
(EEPROM) that can be reprogrammed without removing it from the PCB, allowing
updates or reconfiguration without modifications of the hardware.
FILTER SELECTION
The card has pre-loaded filter coefficients, accurate to 0.1%, at every 10% step from
0.196 Hz to 1000 Hz. Due to the two pole 4 KHz analog filter used before the ADC, there
is a slightly greater error above 1000 Hz.
For a 6 pole analog Butterworth filter, the -0.01 dB point is at 0.6 Fc and the -72 dB point
is at 4 Fc giving a pass-band to cutoff ratio of 6.7:1. For the typical DSP filter in this
design, the -0.01 dB to -72 dB cutoff ratio is 2:1, or only two times the Nyquist limit. An
equivalent analog filter with regard to amplitude would be 13th order. This can be used to
reduce the required bandwidth to 30% of that required for the same signal using 6 pole
analog filters.
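The Butterworth figures quoted above can be checked from the standard magnitude response, |H(f)| = 1/sqrt(1 + (f/Fc)^(2n)), so attenuation in dB is 10*log10(1 + (f/Fc)^(2n)):

```python
import math

def butterworth_db(f_over_fc, poles):
    """Attenuation (dB) of an n-pole Butterworth at a given f/Fc ratio."""
    return 10.0 * math.log10(1.0 + f_over_fc ** (2 * poles))

print(round(butterworth_db(0.6, 6), 3))   # ~0.009 dB at 0.6 Fc (the -0.01 dB point)
print(round(butterworth_db(4.0, 6), 1))   # ~72.2 dB at 4 Fc (the -72 dB point)
```

This confirms the 6.7:1 pass-band to cutoff ratio (4/0.6) claimed for the 6-pole analog filter.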
The DSC-108 is an 8 Channel Differential Bridge Conditioning Card designed for use in
Aydin Vector's PCU-8XX Series II family of data acquisition systems. See Figures 4 and 5
for the DSC-108 block diagram. Each channel consists of an instrumentation amplifier
with 16 software programmable gains followed by a digital filter with up to 100 software
programmable cutoff frequencies from 0.215 Hz to 2.15 KHz, plus an unfiltered mode.
The offset can be adjusted on a per channel basis in the DSC-108 overhead. This allows
an input span of 10 Volts/Gain anywhere between -10 V and +10 V (i.e. for a gain of one, the input
could be set for -10 V to 0 V, -7.5 V to +2.5 V, -2.5 V to +7.5 V, 0 V to 10 V, etc.). The
digitized ADC output has a full scale range of 48 to 4048 counts.
FIGURE 4
INPUT STAGE BLOCK DIAGRAM
FIGURE 5
OVERALL BLOCK DIAGRAM
There is also a hardware programmable unipolar constant voltage excitation source provided on the card. Bridge
excitation voltage is fixed at 2.5, 5, or 7.5 volts with a strap option that affects excitation
outputs on a channel-pair basis.
The DSC-108 also supports zero calibration ("CAL1" disconnects external inputs and
shorts the amplifier inputs to ground) and shunt calibration ("CAL2" connects a user
installed resistor in parallel with one leg of the bridge). These modes can be asserted
through the PCU-8XX overhead via external switches. The DSC-108 provides a bridge
balance circuit per channel. Each balance circuit is designed to provide up to 10V of
correction voltage to balance a bridge. Bridge balance scaling resistors are provided on the
card for each channel. A resistor is connected to the negative input of each channel and is
mounted on terminals so it can easily be changed by the user. An input voltage
calibration mode can be utilized by connecting the bridge balance circuit directly to the
channel input. The open circuit deflection voltage available is 0 to 10 volts.
The eight channels of the DSC-108 can be randomly accessed and output into the PCM
stream through the PCU-8XX overhead. If the system is running at less than 16 bits per
word, the LSBs of the words will be truncated. The truncated bits can be recovered in the
following word by performing an "extended read" command. Note that, although the input
ADC is 12 bits, if the input noise is random and at a higher frequency than the DSP filter,
more than 12 bits of resolution are available. This is due to the phenomenon of dithering.
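The dithering effect can be illustrated with a small numeric experiment. The noise level and signal value below are synthetic assumptions; the point is that averaging many quantized samples of a noisy input recovers resolution finer than one LSB:

```python
import random

random.seed(1)
LSB = 1.0
true_level = 100.3 * LSB    # a level that falls between two ADC codes

def sample():
    # Input noise larger than one LSB, then quantization by the ADC
    return round(true_level + random.gauss(0.0, 1.5 * LSB))

# Filtering/averaging many samples recovers the sub-LSB value
avg = sum(sample() for _ in range(65536)) / 65536.0
print(round(avg, 2))    # prints a value close to 100.3, finer than the 1-LSB step
```

Without the noise, every sample would read the same code (100) and no averaging could recover the fractional part.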
CONCLUSION
The use of DSP in signal acquisition filters will substantially increase accuracy over
existing analog filters without the increase in cost that would be associated with better
analog filters. However, the use of asynchronous sampling must be weighed against the
increased accuracy of the filters and the sharper roll off. The use of DSP also allows the
changing of filter parameters with remotely controlled software changes as opposed to the
changing of physical parts in an analog filter.
REFERENCES
1. Mar, Amy, ed., Digital Signal Processing Applications Using the ADSP-2100
Family, Volume 1, Englewood Cliffs, NJ: Prentice-Hall, Inc., 1992.
2. QEDesign 1000 for Windows, Version 2.0 (program and instructions), Costa Mesa,
CA: Momentum Data Systems, 1993.
Mission Analysis and Reporting System (MARS) - EW Analysis
and Reporting On A Personal Computer
ABSTRACT
In response to the need to analyze and report upon Electronic Warfare (EW) test
data results in a comprehensive and uniform manner, the Mission Analysis and
Reporting System (MARS) has been developed.
Keywords
INTRODUCTION
The user can selectively peruse and analyze the test data in the mission data
tables, including the execution of sorts, calculation of statistics, and exclusion of
rows containing anomalous data. Furthermore, the user can execute a standard
set of mission analyses or export the data to other commercial applications such
as Microsoft Excel to perform ad hoc analyses. MARS provides basic mission
editing, graphics, and analysis capabilities within the framework of the graphical
user interface provided by Microsoft Windows.
Analysis Capabilities
MARS offers a vast number of analysis features that are generic to the software
and are available for any of the standard file formats that MARS accepts. These
features include:
Polar Plot Utility. This MARS analysis capability can produce a polar graph of
magnitude versus direction from any user-specified MARS table or on any
specifically defined ASCII input data file. Typical operation is for the user to select
a magnitude field and a directional field while browsing a MADB table and
selecting the Polar Plot option. MARS will then create an ASCII polar file and
execute the Polar Plot Utility to produce the graph as shown below.
While in the Polar Plot Utility, the
following options are available:
Strip Chart. The Strip Chart capability within MARS provides a graphical XY plot
of up to five numeric fields from any open MARS table. The strip chart graphic is
presented in its own window, which may be moved, resized, or minimized by the
user. One excellent feature of Strip Chart is the data animation feature. This
feature provides a dynamic plot of the selected data fields whenever the table is
scrolled forwards or backwards. The graphical representation scrolls to reflect the
user’s position within the data table. A sample strip chart plot is shown below.
Contained in the window is the simulated scope depiction, the current record time,
and symbol icons representing all active threats/records occurring at that time.
Symbol separation is also presented along the side or bottom of the window to
alleviate overlapping. An example of the RWR Scope window along with the
associated RWR Scope Display Table is presented below.
Animation playback of the scope is controlled by scrolling the RWR Scope Display
Table forward or backward. As the table is browsed, the time from the first table
record is passed to the scope process and all table records sharing that exact
time are plotted on the scope.
RWR Simulated Scope Display
Currently the reporting capabilities performed by MARS fall into three general
areas: Jammer Effectiveness, RWR System Performance, and Tracking Errors
Statistics. Reporting analyses performed within each area are shown and
explained below:
• Jammer Effectiveness
Reduction In Lethality (RIL)
Reduction In Shot (RIS) probability
Reduction In Hit (RIH) probability
Increase In Survivability (IIS)
Threat Alt #Passes #Shots #Hits P(hit) P(surv) #Passes #Shots #Hits P(hit) P(surv)
Category WET WET WET WET WET DRY DRY DRY DRY DRY RIL IIS RIS RIH
------------------------------------------------------------------------------------------------------------------------------------------
WEAPON-1 H 1 10 2 20.00 10.74 1 15 9 60.00 0.00 77.78 100.00 33.33 66.67
L 2 20 8 40.00 0.60 1 15 11 73.33 0.00 63.64 100.00 33.33 45.45
WEAPON-3 H 0 0 0 0.00 0.00 1 10 8 80.00 0.00 0.00 N/A 0.00 100.00
================================================================================================================
RT/AO Details for WEAPON-2 /Msn: 1111
Reject criteria:
Ignore Conflicts: YES
Rng limits......: N/A
Dep Ang limits..: N/A
Max Ant Error...: N/A
Max Resp time...: 10.0
Max Ageout time.: 10.0
0.0-0.5 0.5-1.0 1.0-1.5 1.5-2.0 2.0-2.5 2.5-3.0 3.0-3.5 3.5-4.0 4.0-4.5 4.5-5.0 > 5.0
Response ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------
12.5 0.0 12.5 12.5 12.5 25.0 0.0 0.0 0.0 25.0 0.0
0.0-1.0 1.0-2.0 2.0-3.0 3.0-4.0 4.0-5.0 5.0-6.0 6.0-7.0 7.0-8.0 8.0-9.0 9.0-10.0 >10.0
Ageout ------- ------- ------- ------- ------- ------- ------- ------- ------- -------- -------
0.0 42.9 0.0 0.0 14.3 0.0 0.0 28.6 14.3 0.0 0.0
CONCLUSION
The Mission Analysis and Reporting System is currently being used for DT&E and
OT&E activities and is rapidly standardizing EW analysis and reporting. MARS
usage streamlines test and analysis turnaround times and enhances overall test
engineer capabilities. The usage of MARS within the Microsoft Windows
environment allows data, reports, and graphics to be rapidly taken from MARS
and placed within test report documents.
This document provides only a short summary of some of the capabilities of the
current MARS software application. MARS is owned and controlled by the US
Government and is available to all DoD agencies. There are numerous other
functions and features of MARS that were not illustrated within this document.
For more information concerning MARS, contact Mr Neal Urquhart at (DSN) 872-8470
or (904) 882-8470.
REFERENCES
Burton, Ken and Urquhart, Neal, Mission Analysis and Reporting System Users
Guide, 96 Communications Group, AFDTC, Eglin AFB, FL, 1996.
Grant H. Tolleth
Boeing Commercial Airplane Group
Flight Test Engineering
Seattle, Washington
ABSTRACT
This paper describes the Production Monitor (PM), a result of integrating very
diverse hardware architectures into a compact, portable, real time airborne data monitor,
and data analysis station. Flight testing of aircraft is typically conducted with personnel
aboard during flight. These personnel monitor real time data, play back recorded data, and
adjust test suites to certify or analyze systems as quickly as possible. In the past, Boeing
has used a variety of dissimilar equipment and software to meet our testing needs. During
the process of standardizing and streamlining testing processes, the PM was developed.
PM combines Data Flow, VME, Ethernet, and PC architectures into a single integrated
system. This approach allows PM to run applications, provide indistinguishable operator
interfaces, and use databases and peripherals common to our other systems.
KEY WORDS
BACKGROUND
Most of Boeing’s flight testing is accomplished with two data systems, the Certification
Flight Test Data System (CFTDS) and the Portable Airborne Digital Data
System (PADDS). Due to its small size, PADDS is used heavily for in-service
testing (actual airline revenue flights). PADDS also gets a great deal of use locally for
production testing conducted on pre-delivery aircraft. PADDS, like the certification
system, contains acquisition hardware, a tape recorder, and a data monitoring system. The
PADDS data monitor is used for real time data monitoring or for data reduction. As a data
reduction tool it is used either on board the aircraft, in a hotel room (in remote service), or
at a customer’s site.
During the planning for the certification of the B777, Flight Test determined that the capability
of our present airborne data systems would be exceeded. This meant that both the CFTDS
and PADDS would have to be replaced.
APPROACH
While developing the data systems requirements, Flight Test decided to make the new
PADDS compatible with the CFTDS. The new PADDS, PADDS II, contains a duplicate
of each of the main functions of the CFTDS. When we specified the new acquisition
multiplexer, the Central MUltipleXer (CMUX), two versions differing only in enclosure
size and appearance were specified, designed and built. The certification system recorder’s
electrical interface and communications protocol were duplicated and built into a smaller
recorder. Finally the CFTDS Airborne Data Analysis and Monitor System (ADAMS)
functions were repackaged into a significantly smaller unit; the result was the Production
Monitor (PM).
As shown in Figure 1, there are four basic components to the CFTDS: signal conditioning,
multiplexing and data selection, recording, and monitoring. The ADAMS sets up the
acquisition system and provides several real-time data monitoring stations, as well as
printers, strip chart recorders, and dedicated displays, called Panels. Test stations are PC
based. Graphics are provided on a ruggedized HP 730 workstation. A central file server,
the File Server Assembly (FSA), provides access to mass storage for the system.
Application Processor Assemblies (APAs) provide for application program execution,
peripheral device management, and system status reporting. The Acquisition Interface
Assembly (AIA) provides data selection, engineering unit (EU) conversion, and
management of the measurements being processed.
Figure 2 shows the PADDS II system production configuration. The most notable
difference from the CFTDS is the replacement of ADAMS with the PM. The PM, like
ADAMS, sets up the acquisition system, and provides monitoring of real time data on a
video monitor, printer, strip chart recorders, and dedicated Panels displays. The PM is
designed to be a single-user monitor; however, PMs can be daisy-chained together, thus
supporting any number of users. When desired, the PM can produce a profile of tape
events, command the tape recorder to position to a desired location, and replay the data for
analysis.
Figure 3 is a block diagram of the PM. The enclosure is a standard 19 by 10.5 inch rack-
mount chassis. Mounted inside is a 15-slot VersaModule European (VME) chassis, 3
mass storage devices (floppy, 4.3 Gbyte hard drive, and 8mm tape drive) on an 8-bit
Small Computers System Interface (SCSI) bus, and a 5-port Ethernet hub.
FIGURE 1
CFTDS BLOCK DIAGRAM
(transducers, signal conditioning, multiplexer, recorder, acquisition interface assembly, application processor assemblies, DAC, Ethernet hub, strip charts, printer, Panels)
Data conversion functions are implemented by recycling the input decoder design from the
ADAMS AIA. This new interface, referred to as the DEC 24 card (24-Bit Decoder Card),
has several added features for diagnostics and improved tape usage. We corrected
“features” in the original decoder design (e.g. violations of VME timing specification) and
took advantage of newer technologies to produce a front end that is faster than the original
design. The DEC 24 card receives real-time data in a “24-bit format” from a CMUX or
recorded data from a tape recorder, and performs synchronization functions, so that unique
data words can be identified. When the data source is the tape recorder, the DEC 24 card
also provides for time reconstruction of recorded data.
FIGURE 2
PADDS II PRODUCTION CONFIGURATION
(transducers, signal conditioning, multiplexer, recorder, Production Monitor, strip charts, printer, Panels)
Our recording format, the 24 bit format, as described in [1] identifies unique data words by
frame, time, and label synchronization. Frame synchronization is accomplished by
detection of a synchronization pattern every 513 twenty four bit words. Major Time
synchronization is accomplished by verifying the occurrence of Major Time words in the
data stream every 10 milliseconds, and string synchronization occurs by tracking minor-
time tagged, source-unique labels and their associated data words.
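The frame synchronization step described above can be sketched as a scan for a sync pattern that repeats every 513 words, confirmed over several consecutive frames before lock is declared. The pattern value and confirmation count below are illustrative assumptions, not the actual 24-bit format values:

```python
SYNC = 0xFAF320      # hypothetical 24-bit sync pattern (assumption)
FRAME_LEN = 513      # sync pattern recurs every 513 twenty-four-bit words

def find_frame_start(words, confirm=3):
    """Return the index of a sync word confirmed over `confirm` consecutive
    frames, or None if no stable frame alignment is found."""
    for i, w in enumerate(words[:FRAME_LEN]):
        if w != SYNC:
            continue
        # Require the pattern to repeat at the same offset in following frames
        if all(i + k * FRAME_LEN < len(words) and words[i + k * FRAME_LEN] == SYNC
               for k in range(confirm)):
            return i
    return None
```

Requiring the pattern at the same offset in successive frames rejects data words that happen to match the sync value.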
The DEC24 interface can handle a maximum of 42 Mbits/sec (1.75 Mwords/sec) of data
directly from the CMUX. With fill pattern turned on and all data words selected, the
maximum data rate is reduced to 23 Mbits/sec(.96 Mwords/sec). The interface produces
modulated IRIG B and IRIG H time code[2] from major time words that are embedded in
the data stream every 10 milliseconds.
FIGURE 3
PRODUCTION MONITOR BLOCK DIAGRAM
(DEC 24 interface, analog interface, operator interface processor, measurement processor, processor module, Ethernet transceivers and hub, tape drive, floppy drive, hard drive)
The function of converting raw data into engineering units is accomplished on the MP, a
68060-based SBC. We estimate (based upon limited testing) that this SBC’s performance
will allow us to display 4 channels of data sampled at 200 times per second, processed as
Linear Single Section (a 1st order polynomial calibration) on the strip chart recorder. As
resources permit we intend to upgrade the hardware to meet our ultimate target of 4
channels at 6400 samples per second.
For peripheral communication, control and data display functions, three special interfaces
are provided. A Digital to Analog Converter is provided on the VME bus, giving up to
thirty-two channels of analog data for strip charts or modal analyzers display. There are 4
channels of RS 485 along with 4 channels of RS 232 provided on PC104 serial
communications cards. These serial cards are installed in an expansion assembly
connected to and controlled by the 80486/PC.
The enclosure minimizes EMI by inclusion of several features: cover fastener spacing,
EMI shielding air filters, ferrite cores (as required), and input power filtering. To ensure
adequate cooling, the unit has three 120 cubic feet/minute fans providing an airflow of
greater than 175 linear feet/minute per card. The internal components are shock mounted to
provide resistance to shock and vibration.
Data flow through the PM begins by the selection of data required to support the test plan.
The selected data set for the suite of tests is incorporated into set up files called a Request
for Instrumentation Pre-flight (RIP). The PM is “booted up,” set-up files are loaded, and
the acquisition system is initialized.
On initial power up, all the processor boards execute internal diagnostics. After the
Processor Module and MP diagnostics are complete, they monitor their serial ports for a
few seconds for a keyboard key-press (in case debugging or diagnostics are being performed).
In the absence of a key press, the Processor Module then reads the VxWorks operating
system and PM Executive from the hard drive and starts execution. Once the Processor
Module is executing the executive, the other processors can communicate with it.
When the operator first enters a command, such as "QL," a series of actions takes place.
The job manager verifies that the application is available, loads it into memory from the
mass storage device, and schedules it for execution. The operator will then see a prompt
appear on the display command line. The operator may then specify a list containing
measurements he wishes to view. QL will request setup for the measurements on this list
from the measurement manager. The measurement manager requests measurement setup
information from the database manager. The database manager accesses the set-up files
and returns the information necessary to set-up processing paths.
The 24-Bit Decoder Module is initially set-up to pass or reject selected labels and
associated data. Data not rejected is passed to the MP over the Corebus interface. If the
operator is looking at recorded data (selected with application "SO"), the 24-Bit Decoder
re-constructs the time sequence of the data being replayed. When the MP receives data in
raw form, it converts and calibrates it into engineering units. The calibration of data is
accomplished by use of a singly linked list of algorithms. Once a datum is identified, it and
its processing algorithm’s starting address are passed to the data processing function. The
datum is then acted upon in accordance with the algorithm linked list. The result from this
process is up to five tagged (assigned to a unique address for storage) words written into
the CVT. These are raw counts, collected counts (syllabalized words), processed counts
(floating point representation of collected counts), engineering units (calibrated processed
counts), and a time word indicating the time of this datum’s processing completion. There
is a special class of parameters reserved for the strip chart. If any of these parameters was
set-up, the MP scales and converts the EU data word to a form suitable for the DAC, and
writes it to the analog interface directly.
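The singly linked list of algorithms described above can be sketched as a chain of nodes, each applying one processing step and handing its result to the next. The node class and the specific steps below are illustrative assumptions, not the PM's actual implementation:

```python
class AlgoNode:
    """One node of a singly linked processing chain: apply this step,
    then pass the result to the next node (if any)."""
    def __init__(self, func, next_node=None):
        self.func, self.next = func, next_node
    def run(self, value):
        value = self.func(value)
        return self.next.run(value) if self.next else value

# Build a chain: raw counts -> collected counts -> floating point -> engineering units
to_eu    = AlgoNode(lambda x: 0.01 * x - 5.0)        # 1st-order polynomial calibration
to_float = AlgoNode(float, to_eu)                    # processed counts
collect  = AlgoNode(lambda x: x & 0xFFFF, to_float)  # collected-counts step (stub)

print(collect.run(0x1F4))   # 500 raw counts -> 0.0 engineering units
```

Storing only each datum's chain starting address, as the text describes, lets many measurements share one generic processing function.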
After the processing paths are setup, the locations of the measurements in the CVT are
passed back to the QL application running on the Processor Module. As QL execution
cycles once per second, those pointers are used to read data from the CVT. The data is
then formatted for display, and transferred over Ethernet to the OIP, where it is output for
display via SuperVGA video output port.
Any number of applications (we have not yet determined the limit) can be executing at the
same time. The operator can be monitoring a list of measurements unique to each
application on different peripherals simultaneously. Unique sets of data being output to a
printer (“DO”), strip chart (“MT”), panel displays (“PA”), and the video monitor (“QL”)
are available to an operator in real-time. This allows a significant amount of flexibility in
the information (and its format) that is readily available during test conditions.
CONCLUSION
The PM airborne data monitor system allows Boeing to conduct flight tests anywhere in
the world, with on-site data monitoring and reduction. It is functionally identical to our
large certification data monitor system, ADAMS, but is small enough to be quickly
installed in production aircraft or a hotel room. PM is a conglomeration of architectures
(VME, PC, SCSI, and Ethernet) and operating systems (VxWorks, Linux, and proprietary)
integrated into a flight-worthy unit. This solution provides Boeing Flight Test with cross
system commonality while preserving the lower cost, smaller size, and flexibility required
of a portable monitor system.
ACKNOWLEDGMENTS
REFERENCES
1. Mills, Harry and Turver, Kim, “24-Bit Flight Test Data Recording Format”, Proc. of the
International Telemetering Conference, Las Vegas, Nevada, November 1991, pp. 385-389.
2. Telemetry Group, Range Commanders Council, IRIG Standard 104, Range Commanders
Council, U.S. Army White Sands Missile Range, New Mexico.
ABSTRACT
The monitoring of multi-phase 400 Hz aircraft power includes monitoring the phase
voltages, currents, real powers, and frequency. This paper describes the design of a multi-
channel card that uses digital signal processing (DSP) to measure these parameters on a
cycle-by-cycle basis. The card measures the average, peak, minimum cycle, and maximum
cycle values of these parameters.
KEY WORDS
INTRODUCTION
Most aircraft have multiple sources of Alternating Current (AC) power. These are usually
derived from generators running off of the aircraft's engines. While these generators
produce a nominal constant frequency that is usually 400 Hz, it can vary considerably.
This includes the DC to 400 Hz variation during engine turn on. Furthermore, the output of
one generator is not normally synchronized to any other generator. Because of all the
variables associated with AC power generation, it is often desirable to monitor the power
distribution during aircraft testing. One method would be to treat the AC signals (voltage
and current) as instantaneous signals to be recorded or transmitted. This method consumes
a large amount of bandwidth, with little information content. Another method is to measure
only certain parameters, or perhaps only the limits of certain parameters. This will
substantially reduce the bandwidth required to store or transmit the information. The latter
approach is what is used in this design.
REQUIREMENTS
The circuit card is required to monitor all three phases of a three phase generator. The
parameters being measured by this circuit are listed in Table 1. Note that the minimum
and maximum cycle root mean square (RMS) values are available (voltage, current, and
power), which is not the case in some systems.
The AC input frequency can vary from 20 Hz to 1000 Hz. The sample period is also
variable. It can be either a function of the AC input signal or the telemetry format sample
rate.
DESIGN CONSIDERATIONS
SAMPLING
When the design was first considered, one approach was to sample at a fixed rate of 64
times the highest frequency (64 x 1000 Hz = 64 KSPS). This method would require that
the number of samples per cycle be measured. A sum of squares calculation would then be
made. Finally, the number of samples per cycle would be divided into the sum of squares.
The DSP allows up to 256 sums without overflow; therefore the input frequency would be
limited to no less than 64000/256 = 250 Hz (1/4 of the maximum rate of 1000 Hz), which was not
acceptable. Consequently it was decided to add a phase lock loop (PLL) to give a 64 times
clock from the input frequency to drive the Analog to Digital Converter (ADC).
The input AC signal applied to the first channel is conditioned to generate an interrupt to
the DSP during its -/+ zero crossing. The interrupt initiates the processing of the previous
64 samples of each of the 8 input channels. The samples are stored in a circular buffer in
the DSP. The DSP then performs a sum of squares calculation. The result is divided by 64
and then the square root is taken.
RMS = ((∑x²)/64)^0.5
For the purpose of calculating RMS values, the only sampling requirement is that a
complete cycle of the input be captured. The absolute phase angle of the signal is
unimportant. However, the measurement must begin and end at the same phase angle. This
technique allows voltage and current inputs from a multi-phase generator to be measured
using only one signal as a reference. The real power is calculated from the products of the
instantaneous voltage and current {P = (∑V×I)/64}. This takes into account any phase
difference between the voltage and current, giving real power out.
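The two cycle calculations above can be sketched over one synthetic input cycle of 64 synchronous samples (the amplitudes and the 30-degree phase shift are arbitrary test values):

```python
import math

N = 64  # samples per input cycle
v = [10.0 * math.sin(2 * math.pi * k / N) for k in range(N)]               # voltage
i = [2.0 * math.sin(2 * math.pi * k / N - math.pi / 6) for k in range(N)]  # current, 30 deg lag

v_rms = math.sqrt(sum(x * x for x in v) / N)        # RMS = ((sum x^2)/64)^0.5
p_real = sum(a * b for a, b in zip(v, i)) / N       # P = (sum V*I)/64

print(round(v_rms, 3))    # 7.071  (10/sqrt(2))
print(round(p_real, 3))   # 8.66   (Vrms * Irms * cos(30 deg))
```

Note that the per-sample product automatically accounts for the phase difference, matching the Vrms·Irms·cos(φ) result without ever measuring the phase angle itself.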
SYNCHRONOUS SAMPLING
Voltage and current must be sampled at the same time in order for power calculations to
be correct. Calculated power is then the instantaneous voltage times the instantaneous
current. No regard need be given to the phase angles between the signals. To implement
this method of sampling, at least two separate sample and hold amplifiers that can maintain
accuracy over the wide sampling rate are needed. Alternatively, two ADC converters can
be used. For this design an ADC per channel was chosen. The eight converters are all
driven by the same control signals, so that they all sample at the same time. The outputs of
the converters are digitally multiplexed to the DSP, where they are saved into a circular
buffer. By using a circular buffer (as opposed to a linear buffer), only the buffer address
pointer value at the time of calculation interrupt needs to be saved. The buffer (which is set
to twice as long as the data needs) can continue to fill with new data as the old data is used
for calculations.
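The circular-buffer scheme above can be sketched as follows: the buffer is twice the 64 samples a calculation needs, and on the zero-crossing interrupt only the write pointer is snapshotted while filling continues (sizes follow the text; the class itself is illustrative):

```python
class CircularBuffer:
    def __init__(self, needed=64):
        self.size = 2 * needed        # twice as long as the data needs
        self.buf = [0] * self.size
        self.ptr = 0
    def write(self, sample):
        self.buf[self.ptr] = sample
        self.ptr = (self.ptr + 1) % self.size
    def snapshot(self, needed=64):
        """On the calculation interrupt, only the pointer value must be saved;
        the previous `needed` samples are then read relative to it while new
        samples keep arriving in the other half of the buffer."""
        p = self.ptr
        return [self.buf[(p - needed + k) % self.size] for k in range(needed)]

cb = CircularBuffer()
for s in range(200):
    cb.write(s)
print(cb.snapshot()[:3], cb.snapshot()[-1])   # [136, 137, 138] 199
```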
INTERNAL CALIBRATION
The card has the ability to automatically calibrate small gain and offset errors arising in
either the PCB's components or external errors. Calibration takes 64 samples of the input
and averages them to provide noise rejection. An error delta between the input signal and
the full scale test voltage output (as defined in the DSP program) is calculated. Calibration
is done with a precision DC reference input signal. The gain and zero calibration are not
updated in the electrically erasable programmable read only memory (EEPROM) unless
the signal values are within the tolerances set in the DSP program. This sets limits on how
far calibration will work before the DSP assumes that an incorrect signal was applied and
calibrating should not be done.
BOARD TEST
The board contains boundary scan logic for an on-card digital field programmable gate
array (FPGA) and an equivalent port on the DSP, to ease board level test. All DSP
firmware resides in an EEPROM that can be reprogrammed without removing it from the
PCB. This allows updates or reconfiguration without modification of the hardware.
The ACP-108 is an eight channel power monitoring card for use in Aydin Vector's PCU-
8XX family of data acquisition systems. See Figure 2 for the ACP-108 block diagram.
Each channel consists of a resistor attenuator and high frequency filter, followed by a
resistor programmable gain instrumentation amplifier (IA). With no attenuation and unity
gain, the circuit gives an input of 20 volts peak to peak (Vpp) full scale. To increase the
range, reduce the gain with the attenuator resistors {R = 100 kΩ × (attenuation/(1 − attenuation))}.
To decrease the range, increase the gain with the gain
resistor {R = 50 kΩ/(G − 1)}. Next is a 4 KHz anti-aliasing filter. Each channel has a
separate ADC. The ADCs convert simultaneously to give matching samples of voltage and
current for power calculations. Four channels are designated as voltage input and four
channels are designated as current input. The assignment is arbitrary except that the first
channel must maintain adequate signal level to operate the zero crossing detector used to
measure cycle length. Also, if using the power calculations, the corresponding Voltage (V)
& Current (I) pairs must be maintained. Frequency measurements are based on channel
#VA only. Also the sampling of all eight channels is based on channel #VA. For correct
calculations, all channels must be at the same frequency as channel #VA, however the
phases may be different. In this way the card can be used to monitor the outputs of a three
phase generator, with one spare pair of inputs. The card can also be used to monitor eight
voltages related to the same generator. Calculations are done by taking 64 samples of each
cycle. The card can lock on to frequencies between 20 Hz and 1000 Hz. Each cycle
sample timing is based on the previous cycle’s time. Therefore if the input frequency shifts,
there will be an error in the calculation due to sampling at the wrong rate and including (or
excluding) a portion of a cycle. This error is proportional to the cycle by cycle frequency
shift. For input frequencies outside of these ranges the sample rate will free run at about 16
SPS, with calculations on groups of 64 samples. In this way DC power can be monitored.
There are four types of measurements performed on each of the eight inputs. Peak signal
sample (absolute value), the highest cycle average, the lowest cycle average, and the
average of all cycles since the last reset. Averages are all RMS. The number of cycles
averaged and monitored before a reset (called a block of data) is a function of the PCU-
8XX sample format. The reset point can be set as a function of the format. Reset occurs
each time the reset bit is present (normally once per major frame) or once every 4th, 16th,
or 64th time the reset bit is present. There is a limit of 256 maximum cycles per block of
data before data is overwritten. There is an output available that gives the number of
cycles used in the last data block calculation. The card can also be set to reset after each
16th, 64th, or 256th input frequency cycle, without regard to the PCU-8XX frame words.
Reset pulses may also be set to occur as often as every ACP-108 card output to the PCU-
8XX bus. Power calculations are done based on an instantaneous calculation of VxI, with
an RMS average per cycle and per block. There are also minimum and maximum cycle
values saved per block.
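The measurement set described above can be modeled in a few lines. The following is an illustrative Python sketch, not the card's firmware; all names are ours, and the paper's "RMS average" of instantaneous VxI is modeled here as a plain per-cycle mean (true power), since the firmware's exact averaging is not specified:

```python
import math

# The card takes 64 samples of each input cycle (see text).
SAMPLES_PER_CYCLE = 64

def cycle_rms(samples):
    """RMS value of one cycle's worth of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def analyze_block(cycles, currents=None):
    """Model the per-block measurements for one channel.

    cycles:   list of cycles, each a list of voltage samples.
    currents: optional matching samples for the paired current channel,
              enabling the V x I power calculation.
    """
    rms_per_cycle = [cycle_rms(c) for c in cycles]
    result = {
        # highest absolute sample seen anywhere in the block
        "peak": max(abs(s) for cyc in cycles for s in cyc),
        "max_cycle_rms": max(rms_per_cycle),
        "min_cycle_rms": min(rms_per_cycle),
        # block average of the cycle averages, itself taken as an RMS
        "block_rms": math.sqrt(sum(r * r for r in rms_per_cycle)
                               / len(rms_per_cycle)),
    }
    if currents is not None:
        # instantaneous v*i averaged over each cycle (true power)
        p_per_cycle = [sum(v * i for v, i in zip(vc, ic)) / len(vc)
                       for vc, ic in zip(cycles, currents)]
        result["mean_power"] = sum(p_per_cycle) / len(p_per_cycle)
        result["max_cycle_power"] = max(p_per_cycle)
        result["min_cycle_power"] = min(p_per_cycle)
    return result
```

For a pure sinusoid the block RMS converges to amplitude/sqrt(2) and the mean power of a voltage paired with an identical current is half the squared amplitude, as expected.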
The frequency output is the block mean of each cycle time in that block, measured using a one-microsecond crystal clock time base. The output is a period measurement in binary counts of microseconds (16 bits maximum). If the input frequency drops below 20 Hz, the signal output loses correlation to the input frequency. The card may be auto-calibrated for zero scale and full scale, (7800/7FFF)x(+10 volts) = 9.375 VDC. The offset correction is done on a sample-by-sample basis, while the gain correction is applied to the final result of each calculation.
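The period-count output and the full-scale calibration point can be checked numerically. This is a sketch with names of our own choosing, using only the figures stated above:

```python
CLOCK_HZ = 1_000_000   # one-microsecond crystal time base
MAX_COUNTS = 0xFFFF    # 16-bit period output

def period_counts_to_hz(counts):
    """Convert the card's period output (microsecond counts) to Hz."""
    if not 0 < counts <= MAX_COUNTS:
        raise ValueError("period count out of range")
    return CLOCK_HZ / counts

# Full-scale calibration point from the text:
full_scale_v = (0x7800 / 0x7FFF) * 10.0   # about 9.375 VDC

# A 400 Hz input measures 2500 counts; a 20 Hz input measures 50 000
# counts, still within 16 bits. Somewhat below 20 Hz (at 65 535 us) the
# period register would saturate, consistent with the output losing
# correlation at low input frequencies.
```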
The card outputs in a 16-bit-per-word format; however, the MSBs can be read in a reduced word length if desired. There is also a special "no response" code to allow the reading of extended words in a reduced word format.
FIGURE 2
BLOCK DIAGRAM
CONCLUSION
The ACP-108 can be used to monitor four sets of voltage and current, eight voltages, one voltage and seven currents, or any combination that has at least one voltage to drive the zero crossing detector reliably. For power calculations, voltage and current must be paired. The unit will work from 1000 Hz down to DC. However, below 16 Hz the card will no longer synchronize to the AC signal, and it should only be used for scalar measurements below 20 Hz. The eight inputs should all be at exactly the same frequency; otherwise the calculations will be in error.
Using traditional sampling of the 400 Hz signal for a pair of voltage and current results in 614.4 kbits/sec (2 Ch x 400 Hz x 64 samples/cycle x 12 bits/word = 614,400 bits/sec).
Using the ACP-108 card with processing on the card and 256 samples per average gives only 225 bits/sec (1 Ch x 12 readings/Ch x 400 Hz x 12 bits/word / 256 cycles/avg = 225 bits/sec). This results in a bandwidth reduction of 2731:1, with very little loss of information.
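The bandwidth arithmetic above is a direct transcription of the paper's figures:

```python
# Raw sampling of one V/I pair at 400 Hz, 64 samples per cycle,
# 12 bits per word:
raw_bps = 2 * 400 * 64 * 12              # 614 400 bits/sec

# ACP-108 output: 12 readings per channel pair per block,
# with 256 cycles averaged per block:
processed_bps = 1 * 12 * 400 * 12 / 256  # 225 bits/sec

reduction = raw_bps / processed_bps      # about 2731:1
```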
REFERENCES
1. Babst, Jere, ed., Digital Signal Processing Applications Using the ADSP-2100 Family, Volume 2, Englewood Cliffs, NJ: Prentice-Hall, Inc., 1995.
2. Mar, Amy, ed., Digital Signal Processing Applications Using the ADSP-2100 Family, Volume 1, Englewood Cliffs, NJ: Prentice-Hall, Inc., 1992.
NEXT GENERATION ANTENNA CONTROLLERS
FOR THE NASA DRYDEN FLIGHT RESEARCH CENTER
ABSTRACT
Lower operating budgets and reduced personnel are causing the operators of test ranges to
consolidate their assets and seek ways to maximize their utilization. This paper presents
the versatile approach used by the NASA Dryden Flight Test Facility located at Edwards
Air Force Base to monitor, control and operate five of its diversely located telemetry
systems from a central control room. It describes a new generation of multi-purpose
antenna controllers which are currently being installed as part of this NASA upgrade
program.
KEYWORDS
INTRODUCTION
This paper presents the versatile approach used by the NASA Dryden Flight Test Facility
located at Edwards Air Force Base to monitor, control and operate five of its diversely
located telemetry systems from a central control room. It describes a new generation of
multi-purpose antenna controllers designed to respond to rapidly changing requirements as their role expands from the traditional one of strictly controlling antenna systems with basic control functions and displays to the more complex task of controlling complete telemetry stations with various types of telemetry equipment. They are also designed to take advantage of more sophisticated and more user-friendly commercially available software packages which, when coupled with a multitude of off-the-shelf components, have brought drastic changes in the implementation of ACUs and their costs.
MISSION SCENARIO
The NASA Dryden Flight Research Center is currently operating a number of telemetry
receiving antenna systems which are either of the auto-track or of the manual/program
track variety. These systems are located at separate sites which are interconnected via an
Ethernet network (see Figure 1). The various systems are:
These systems have traditionally been operated independently and on a stand-alone basis through individual dedicated antenna controllers, each located in the vicinity of the system it controls. The goal of this upgrade program was to modernize the antenna controllers at
each site and to do so in a manner which would not only retain the existing local control
capability but which would also permit real time remote control of any of these systems.
This real time remote control capability is such that any and all systems can be controlled
simultaneously from any of the sites or from a central computer located at the Aeronautical
Tracking Facility #1. In addition, this central computer is to provide slaving commands to
any of the systems as well as to perform other functions such as coordinate transformation,
data format transformation, etc.
SELECTED APPROACH
The selected approach for the design and implementation of the ACUs and their real time
remote capability is centered on a dual computer architecture which provides
uncompromised real time control and flexible networking or communications capability.
Figure 1
Control Considerations
The first aspect of the design approach is concerned with the control or the “user
interface” aspect of the ACUs which is intended to make the task of controlling any of the
five antenna systems and complete telemetry stations as easy and efficient as possible. To
that effect every ACU used in this program features a dedicated front panel and
incorporates the capability to support so-called remote front panels.
The dedicated front panel is based on Microsoft Windows and it operates just like any
Windows application. The different menus are used only during system setup and are
avoided completely when the ACU is used to conduct a mission. Any function of the ACU
is accessed through a key press and some front panel keys are dedicated to the ACU
functionality and are available at all times, even when the ACU window is covered
completely by another window, such as a maximized real time video window. This assures
that critical functions are instantaneously available at all times. The layout of the ACU
front panel window accommodates windows that may be placed on top of the window
stack but key pieces of information are located at the periphery of the window such that
they are visible even when another window (real time video, etc.) is active and on top of it.
Each ACU screen is touch sensitive so that buttons may be pressed without having to use the trakpad device embedded in the front panel. The touch panel feature is fully available
even when the ACU window is partially covered and any visible area that contains buttons
may be operated even though the ACU front panel may not be the top most application on
the screen. The trakpad device in combination with the touch screen, the dedicated and
programmable front panel keys, the handwheels and joystick are included to provide the
operator unsurpassed control flexibility.
The so-called remote front panel feature permits real time control and/or monitoring of any
of the five ACUs from any of the five sites. Operation of a remote front panel is identical
to that of any of the local front panels but with the added advantage that the operator can
be located at any of the five sites. The remote front panel utilizes the same software as the
local front panel and is also installed in the central computer. It can also be installed in
any computer connected to the Ethernet network.
This approach means that any ACU can be accessed from the front panel of any other ACU, a feature which is most useful when, in a multiple tracking system environment, a mission scenario calls for controlling and slaving a remote ACU to any other. A remote
ACU can be placed in the slave mode remotely from the front panel of a local ACU
without the operator having to be present at the remote site. This approach also means that
all sites can be monitored from the central site. Each ACU is capable of supporting several
remote front panels simultaneously of which only one is in control. This capability adds a
level of redundancy which may be required for some missions. In fact, since the front panel software is designed to run on a number of standard computer platforms besides the ACU's, the Central Computer can be used not only to calculate parallax correction and
orbit predictions, but it can also be used to simultaneously monitor any of the ACUs
connected to the network. It is even possible for the Central Computer or any local ACU
to take control of any of the other ACUs in case of an emergency.
Communications Considerations
The second aspect of the design approach is concerned with the communications aspect of
the ACUs which is intended to make the task of communicating between the five ACUs as
flexible as possible to allow real time slaving and control of any ACU to or by any other
ACU. The capability to connect the ACUs to meet these requirements is divided into two
categories: one is the intra-rack communications capability and the other one is the long
distance communications capability.
The primary long distance communications capability is provided through the Ethernet
network but an RS-232C communications port is also provided.
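The paper does not define the network protocol carried over Ethernet, so the following is purely a hypothetical sketch of the remote front panel idea: a front panel querying an ACU's state over a TCP socket. The message format, field names, and values are invented for illustration and are not the ACU's actual interface:

```python
import json
import socket
import threading

def acu_status_server(host="127.0.0.1", port=0):
    """Hypothetical ACU endpoint answering a one-line status query."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            conn.recv(1024)  # the query itself is ignored in this sketch
            status = {"mode": "autotrack", "az_deg": 123.4, "el_deg": 10.2}
            conn.sendall(json.dumps(status).encode() + b"\n")
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()  # (host, port) for the remote front panel

def query_status(addr):
    """Remote front panel side: fetch the ACU's current pointing state."""
    with socket.create_connection(addr) as s:
        s.sendall(b"STATUS\n")
        return json.loads(s.makefile().readline())
```

Because the query runs over a plain socket, the same client could run on any computer on the network, which is the essence of the remote front panel concept described above.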
Design Architecture
The design architecture of the ACUs must address the fact that in the current application a
dichotomy exists between the requirement to simultaneously perform critical tasks in real
time, such as auto-tracking, parallax correction, etc., and non time critical tasks such as
user interface and communications. The chosen architecture solves this problem by using a dual computer configuration which by its very nature ensures uncompromised real time performance and solves potential communication and user interface problems. This
approach also allows the user to modify or customize both the user interface and/or the
communication facilities without impacting the real time performance of the controller.
This is achieved by using a general purpose operating system, such as Microsoft
Windows 95, to handle the user interface and much of the communication requirements
and by using a dedicated real time operating system to handle the real time control tasks.
Provisions are also included to allow the systems to be modified to use other general purpose operating systems, such as IBM OS/2, Sun Microsystems Solaris, or any other PC-compatible operating system.
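The benefit of the dual computer split can be illustrated with a toy model. This is a sketch only: the real system uses two physical computers and a dedicated real time executive, whereas this uses two threads and a queue simply to show the decoupling principle, that only messages cross the boundary and the control loop never blocks on the user interface:

```python
import queue
import threading
import time

def servo_loop(cmd_q, status_q, cycles=100, period_s=0.001):
    """Stand-in for the real time computer: a fixed-rate control loop
    that is never blocked by user-interface or network traffic."""
    position, target = 0.0, 0.0
    for _ in range(cycles):
        try:
            # non-blocking: a slow or busy UI can never stall the loop
            target = cmd_q.get_nowait()
        except queue.Empty:
            pass
        position += 0.1 * (target - position)  # toy first-order servo
        status_q.put(position)
        time.sleep(period_s)

# Stand-in for the general purpose computer: it may block, render, or
# talk to the network, because only queued messages cross the boundary.
cmd_q, status_q = queue.Queue(), queue.Queue()
cmd_q.put(45.0)                      # "front panel" commands 45 degrees
rt = threading.Thread(target=servo_loop, args=(cmd_q, status_q))
rt.start()
rt.join()

final = None
while not status_q.empty():          # drain the status stream
    final = status_q.get()           # converges toward 45 degrees
```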
This approach was selected above the monolithic or single computer configuration because
it is not subject to the limitations inherent in the latter. Typically the monolithic approach
uses the same general purpose operating system for both the real time and the user
interface tasks. While this approach is capable of credibly mimicking real time performance under benign operational conditions, it generally fails to perform in real time when communication and user interface related activities are increased. This may result in
loss of auto-tracking capability due to the fact that general purpose operating systems are
designed to handle user interface activities most efficiently to the detriment of “real time”
tasks. This type of design may not feature user configurable front panels or user
configurable communication functions because these changes may negatively affect the
real time performance of the controller as the number of system interrupts may increase
above an inherent limit which even priority based scheduling cannot compensate for.
ACU IMPLEMENTATION
The ACUs are implemented as rack mount units with dedicated front panels which include a trakpad device, a joystick, and a set of handwheels; each ACU consists of a hardware component and a software component.
Hardware Component
The hardware component consists of two sub-assemblies:
1. The real time sub-system with its own computer and associated I/O devices which
perform the real time tasks and part of the communication tasks.
2. The front panel assembly with its own computer which performs the user interface
tasks and the balance of the communication tasks.
Both computers are based on the IBM AT architecture and are of the 486 DX and
PENTIUM variety. The front panel assembly is centered around a high resolution color
display and incorporates the various switches, the handwheel assemblies, the joystick
assembly and the trakpad device. The display panel itself is a 9.5 inch (diagonal) active
matrix color LCD unit manufactured by Mitsubishi. The panel is 7.6 inch wide by 5.6 inch
high and is capable of displaying 512 colors with a resolution of 640 x 480 pixels (VGA).
It is a device capable of displaying real time video without any smearing, and its viewing angle is wide enough to permit unobstructed viewing.
With the exception of the ON/OFF switch, the complement of switches uses membrane switches embedded in a scratch-resistant polycarbonate panel and includes both dedicated
switches located at each side of the display and soft programmable switches located at the
lower edge of the panel. This layout coupled with the use of graphical screens and multiple
switch functions facilitates the customization of the user interface free of any constraints
imposed by dedicated hardware. Switch functions are software defined and future
expansion of the ACU is not limited by previously selected hardware interfaces.
Software Component
The software component includes the Real Time Component and the User Interface
Component.
The real time component is responsible for controlling all aspects of the pedestal and
antenna system including servo motion control, receiver interface, computation of
autotrack error, slaving to external sources, parallax correction, etc. These activities are
time critical and require a real time executive to guarantee that performance is never
compromised. The state machine which controls the mode changes and automatic
functions of the ACU is also implemented here. It contains a minimal set of user interface
support code since this functionality is delegated to the user interface component. It is
however responsible for interfacing with the handwheels, the joystick and the dedicated
front panel switches.
The user interface component is given the task of presenting a suitable graphical depiction of the present state of the ACU on the front panel display and of allowing its user to effect changes in its operational states. It provides a simple, uncluttered man-machine interface
for most mission scenarios to be supported by the equipment it controls. The user interface
component is based on Microsoft Windows 95. It utilizes Microsoft OLE Custom Controls
and Automation and relies on the WinSock Application Programming Interface (API) for
its network communication needs. The OLE Custom Controls and Automation library is
included within the ACUs in fully documented form for use in user created applications.
The user interface component includes the capability to control associated pieces of
equipment such as the telemetry receivers. This feature which is logically separate from
the ACU front panel software can be executed independently of the ACU front panel and
provides the ability to remotely configure and control the receivers at the different sites.
This feature is included because it is an integral part of the remote control operation of a
telemetry station.
The user interface component also includes the capability to display video signals and
process them to allow any form of customization. This includes the ability to
capture/digitize the video signals and otherwise manipulate them without any performance
penalty to the auto-track functionality provided by the real time computer.
In addition to the capabilities mentioned earlier, each of the five ACUs incorporates a complete range of functions and displays typically found in antenna controllers. They belong to one of the following categories and are too numerous to be listed here:
It should be noted that the availability of excess computing power at the user interface unit allows the addition of features such as an adaptive search mode, which generates a search pattern taking into account the recent behavior of the target, and a visual representation of the actual position of the target with regard to the antenna boresight.
CONCLUSION
It is now feasible to remotely control in real time a number of telemetry stations through
the use of antenna controllers which feature a dual CPU architecture. Implementation of this technology allows the user to better plan each supported mission, to better allocate available manpower, and to provide redundant mission coverage with existing assets.
ACKNOWLEDGEMENT
The authors wish to thank Mr. Huey Barr of the NASA Dryden Flight Test Facility at
Edwards Air Force Base, CA for his invaluable guidance and sponsorship of this program.
NEXT GENERATION
MOBILE TELEMETRY SYSTEM
ABSTRACT
White Sands Missile Range (WSMR) is developing a new transportable telemetry system
that consolidates various telemetry data collection functions currently being performed by
separate instrumentation. The new system will provide higher data rate handling capability,
reduced labor requirements, and more efficient operations support which will result in a
reduction of mission support costs. Seven new systems are planned for procurement
through Requirements Contracts. They will replace, on a one-for-one basis, current mobile systems which are over 25 years old. Regulation allows for a sixty-five percent overage on
the contract and WSMR plans to make this contract available for use by other Major
Range Test Facility Bases (MRTFBs). Separate line items in the contracts make it possible
to vary the design to meet a specific system configuration. This paper describes both the current and replacement mobile telemetry systems.
KEY WORDS
White Sands Missile Range (WSMR), Major Range Test Facility Base, TECOM,
downsizing, Virtual Proving Ground, TTAS, TTARS, TMV, suite, Requirements Contract,
sustainment, instrumentation modernization, next generation system, safari, benefits,
standardization.
INTRODUCTION
During the 60’s through the mid 80’s, national defense called for high budgeting outlays
which kept the weaponry machine well oiled. Test instrumentation utilized to support these
weapons justifiably received its fair share of this "oil". Requirements for upgrading or developing new range test instrumentation were viewed favorably by TECOM, provided the justifications were well spelled out.
Now that the "cold war" is over and the perception is that major threats no longer exist, the defense budget for new weaponry has diminished; in turn, so have the budgets for test support equipment development and sustainment. We have entered the new age: budget cut-backs, downsizing, re-engineering, centralization, BRAC; you know the terms. Competition for TECOM's dwindling coffers for system upgrades or new hardware has become fierce. To toughen the situation, Virtual Proving Ground (VPG) technology has DoD's focus and priority.
As a MRTFB, WSMR is regarded as one of the best in the world. Downsizing and instrumentation budget cuts have set it back a bit logistically and technologically, especially since the emphasis on VPG. However, range instrumentation managers will emphatically
remind all that modeling and simulation must be verified at test ranges where “the rubber
meets the road”. Virtual Proving Ground and field testing should work hand in hand to
maximize the accuracy of the models being developed. The more realistic the models are
to be, the more precise and refined the test data has to be. This means data collection
instrumentation must be capable of collecting data faster, more precisely, and make data
products available at quicker turn-around times. Hence, the test ranges need DoD support
on modernization programs that address range instrumentation sustainment and
modernization issues.
Locally, for several years now, Telemetry has been having its own competition scuffles for
higher priority ranking at WSMR to warrant TECOM instrumentation modernization
money. Finally, in FY95, Telemetry managed to take higher priority at WSMR. The
proposed plan called for the procurement of seven “next generation” mobile systems
through a five-year Requirements Contract to replace its 25 year-old mobile systems. The
following describes the requirements that drive this effort and provides an overview of the
current WSMR telemetry system followed by a description of the new mobile telemetry
system.
REQUIREMENTS
The Requirements that drive the replacement of the current mobile systems include
upgrading to current technology, increase in the data bit rate handling capability, increase
in the telemetry band, consolidation, and remote control capability. The current bit rate capability is 11 Mb/s and needs to be raised to at least 20 Mb/s to be compatible with the Telemetry Data Center at WSMR. There is a current requirement of 13 Mb/s, and higher rates are expected in the near future. The telemetry band extends up to 2400 MHz; the current multicouplers limit this range to 2300 MHz. Downsizing and budget cuts also drive
requirements for efficiency through consolidation and remote controlling.
DESCRIPTION OF CURRENT SYSTEM
The WSMR Telemetry instrumentation complex consists of fixed and mobile systems.
Dispersed throughout the range, these systems rely mainly on Telemetry’s own microwave
systems to transport collected data to the final destination, the Telemetry Data Center
(TDC), for data handling, processing, and display. The fixed stations are fully equipped to
provide data acquisition, receiving and recording, and relaying. There are four fixed sites: Jig-67, Jig-10, Jig-56, and Jig-3. Jig-67 and Jig-10 reside in the north portion of the Range on top of Alamo and Salinas Peaks respectively, 9000 feet above mean sea level. Dry Site (Jig-56) is located in the south portion of the Range. Jig-3 is next door to the TDC in Building 300; it receives the relayed data from the fixed sites and interfaces it with the TDC.
A major component of the WSMR Telemetry System is the microwave system. The
telemetry microwave system is known as the Telemetry Acquisition and Relay System
(TARS) and includes the TTARS as a subsystem. All TTARS subsystems transmit their
data to a corresponding fixed station according to the Operation Support Plan. The fixed
station in turn relays the TTARS data along with its data to Jig-3. Figure 1 depicts the
telemetry support methodology at WSMR.
CURRENT CAPABILITIES
The TTAS is an L- and S-band (1435-2300 MHz), dual axis automatic tracking system with a single channel monopulse antenna feed on an eight-foot dish; it has a threshold of -120 dBm with a 100 kHz bandwidth. The tracking acceleration and velocity are 0-90 deg/sec² and 0-60 deg/sec respectively. Its shelter contains two tracking receivers, an analog control panel, and test equipment. Both pedestal and shelter are mounted on a 30-foot transportable trailer. Calibration, system setup, and checkout are done manually. The TTARS is a self-propelled van which basically contains a set of
telemetry data receivers and combiners to handle three telemetry downlinks and a 1970
vintage Collins 518 microwave system with a 6-foot dish antenna for data relaying.
Figure 2 is a photo of the TTAS and TTARS side by side as normally used.
FIGURE 1. TELEMETRY SUPPORT METHODOLOGY
The TMV is a versatile system (transported by fifth wheel) equipped with a low gain L and
S band (11 dB) antenna for use by itself when tracking is not required. It is used to receive
and record telemetry data on missile ground checks or live fire tests at missile launch sites.
It is equipped to receive, record, and display AGC and VCO data and has outputs for the
TTARS for data relay. One of the TMVs is also equipped with a High Density Data
Formatter for high bit rates recording. Strip chart recording includes direct write and
oscillograph. On-site analog and high data rate recordings are often required during
complex, long range missions and thus a TMV is employed with the TTAS and TTARS.
Figure 3 is that of a TMV.
FIGURE 2. TTARS AND TTAS
The new mobile system is based on current and future data requirements as well as on
plain common sense suggesting that the three distinct telemetry mobile functions be
combined for better efficiency. To this end the new mobile system is designed to be self-sufficient, requiring only two personnel to transport, set up, and operate it. The mobile system's all-weather proofing, ruggedness, and self-sufficiency permit it to safari anywhere
in the world. An Automatic Test System provides automatic setup, system checkout,
remote control capability, and includes slaving to externally derived pointing data such as
from a radar or optical tracking system.
Procurement was structured around the incremental availability of funds and existing
contracts. For example, the contract for the pedestal system is a 5-year Requirements
Contract, the mover and the shelter are obtainable through existing GSA contracts, and the
receiving and relaying contract is set up to acquire set quantities of items (as funds permit).
The construct of the new mobile telemetry system can be divided into three parts. These
are the pedestal and antenna system, the shelter and mover, and the telemetry receiving
and relaying system. These are described below. A drawing of the mobile telemetry system
is shown in Figure 4.
The pedestal is an azimuth-over-elevation, digitally controlled, servo driven system. Even though the tracking rate requirements are 30 deg/sec and 30 deg/sec², the pedestal is capable of exceeding these rates. Seals and surface coatings are specified to protect
the pedestal from sand, dust, rust, and salty air. The antenna feed is an E-SCAN type with three band-pass filters, 1435-1540 MHz, 1710-1850 MHz, and 2200-2400 MHz, for selectivity. The feed is mounted on an eight-foot solid dish. Wind resistance was a concern with this type of antenna, but the stowing, braking, and drive motor systems are specified to withstand the WSMR winds. The pedestal and antenna are
mounted on a trailer for independent handling. That is, after a mission, the trailer can be
temporarily left there for a later mission while the mover can transport the personnel back
to their duty station or relocate to support another mission that requires no tracking.
Additionally, maintenance can be performed on the pedestal and antenna system without
disabling the rest of the mobile system. A video camera on the antenna provides limited
visual data recording capability.
The shelter and mover were acquired by the Government and provided to a Small Business
Contractor who is also handling the Pedestal contract. The mover is a 4x4 truck with a
250 hp diesel engine, rear air suspension, 100 gallon fuel tank, and an automatic five speed
transmission. It carries a 35 KVA generator to power the pedestal and equipment inside
the shelter. The shelter is 20 feet long and is mounted permanently on the mover’s
extended support frame. It is equipped with a spiral/sinuous antenna installed on top; the antenna is dual circularly polarized (simultaneous RHCP and LHCP) and broadband, with frequency response from 2 to 18 GHz. Externally, it also has a telescoping mast with a six-foot
microwave dish antenna whose direction is controllable from the inside of the shelter. The
contract calls for a finished interior, electrical and lighting provisions, installed equipment
racks, and air conditioning. The shelter interior configuration is shown in Figure 5.
The Receiving and Relaying System combines the TMV and TTARS functions and
includes an auxiliary antenna for short range signals. Originally, a crossed dipole array,
single-axis automatic tracker was planned as the auxiliary antenna but, due to budget constraints, the decision was made to go with a spiral/sinuous antenna. Table 1 lists equipment
installed in the racks and includes that comprising the Receiving and Relaying System.
This equipment provides the capability to handle three telemetry downlinks, record analog
data on 14-channel magnetic tape and 16 channels on strip charts, record (cassette) digital data up to 40 Mb/s, and relay data via DS-3 microwave radios. System verification and calibration can be performed manually or automatically with an Automatic Test System (ATS) that interfaces with the pedestal digital servo control unit, the antenna feed systems
and multicouplers, and tracking receivers. System diagnostic functions are connected to a
spectrum analyzer, oscilloscope, digital oscillator, and power meter, via a Signal
Distribution Drawer controlled by a PC. The ATS includes a 9600 baud RS-232C modem
for remote control. Figure 6 is a block diagram of the Receiving and Relaying System.
FIGURE 5. EQUIPMENT RACK CONFIGURATION IN SHELTER
CONCLUSION
Currently, for the most part, it appears that VPG is taking the wind out of the sails of Test and Evaluation instrumentation modernization in TECOM. If the ranges must adhere to downsizing and budget cut mandates and "do more with less", funds must be appropriated for the upgrades required to make this possible.
The WSMR Telemetry mobile systems are over 25 years old and in dire need of
replacement. Additionally, mission support requirements are slowly exceeding current
capabilities. Recently Telemetry was fortunate to receive funding for procurement of a new
Mobile Telemetry System. This mobile telemetry system satisfies current and future
support requirements and also is Telemetry’s response to the Government’s drive for
efficiency, "doing more with less". By combining into one system three separate telemetry sub-systems that perform different functions, and by employing current technology, this system provides various benefits that include reliability, automatic setup and checkout, faster mission support turn-around times, fewer operators, remote control, and high bit rate capability. Seven of these new systems are planned, subject to availability of funds. Contracts have been structured to handle incremental funding, and WSMR makes them available to other test ranges. This is in line with the idea of maximizing standardization of data collection instrumentation throughout the ranges.
FIGURE 6. RECEIVING AND RELAYING SYSTEM
LOW COST, HIGHLY TRANSPORTABLE, TELEMETRY
TRACKING SYSTEM FEATURING THE AUGUSTINE/SULLIVAN
DISTRIBUTION AND POLARIZATION, FREQUENCY AND
SPACE DIVERSITY
ABSTRACT
The tracking system is part of a telemetry ground station being developed for the UK
Ministry of Defence. The design objective is a self-contained transportable system for
field use in a vehicle or workshop environment, so that the system components are
required to be man portable. Comprehensive facilities are required for the reception,
display and analysis of telemetry data from a remote 1430-1450 MHz airborne source at ranges of up to 205 km. Since tracking over water is a prime requirement, the system must accommodate severe multipath fading.
A detailed analysis of the link budget indicates that there is a major conflict between
cost, portability, antenna size and the receiver complexity required to achieve a
satisfactory performance margin. A baseline system is analysed using a four foot
antenna. Methods for improving the performance are then considered including
polarisation, frequency and space diversity coupled with alternative antenna types and
configurations.
The optimum solution utilises two six foot diameter shaped beam single axis antennas
of unique design in conjunction with a receiving system which economically combines
the elements of polarisation, frequency and space diversity.
KEY WORDS
INTRODUCTION
The paper describes a new transportable telemetry ground station for the reception, display and analysis of data from an airborne source.
Sullivan (1) has shown the advantages of diversity for tracking airborne vehicles.
Missile antennas characteristically exhibit varying polarisation with aspect angle.
Polarisation diversity is required to obtain optimum tracking performance. For tracking
aircraft through wide ranges of pitch and roll two antennas (of necessity spaced many
wavelengths apart) are required. Frequency diversity is then required for optimum
performance. Almost all tracking of missiles and aircraft involves low angle tracking for
most missions. Space diversity is required for optimum performance in a strong
multipath environment.
The Augustin/Sullivan Distribution (2) allows single axis tracking to be used for most
tracking applications, and at less than 50% of the cost of two axis tracking. Full
elevation (to zenith) coverage is provided with less than 1 dB loss compared with a
pencil beam antenna. A cosecant-squared distribution would only allow tracking to
elevation angles of 45 to 50 degrees and has a 3 dB loss from that of a pencil beam
antenna because of the beam shaping.
The complete telemetry ground station is a self contained, highly transportable system
which is designed to provide complete facilities for the reception, display, and analysis
of PCM data from a remote source at ranges up to 205 km. By using two 6 foot
diameter Augustin/Sullivan Distribution (ASD) shaped beam single axis antennas of
opposite circular polarisation and the minimum number of receivers and combiners to
achieve polarisation, frequency and space diversity, an optimum system has been
designed giving maximum antenna gain, continuous tracking from horizon to zenith,
and maximum range within the constraints of transportability and cost.
LINK BUDGET
The system must operate with a variety of modulation schemes and sources. The worst
case source provides 2 watts into a -7dBi transmitting antenna and requires 500kHz
bandwidth. The objective is to achieve a carrier to noise ratio of 12 dB at the FM
demodulator input of the receiver plus a fade margin allowance of at least 10dB
including tolerance to deep fades. The 12 dB ratio is the threshold for signal to noise
improvement from the FM demodulation process.
Performance figures for commercially available equipment are used in the following
analysis. The fade margin is discussed, and options for improving on this basic system
configuration are considered.
The transmission path for the signal is calculated from the output of the airborne
transmitter to the input of a low noise amplifier situated close to the receiving antenna
feed. Polarisation is essentially linear and of variable attitude.
Reasonable assumptions are made concerning transmitter feed loss and antenna
efficiency as follows:
Tx. o/p power at 2 W 33.0 dBm
Feed loss -0.5 dB
Tx. antenna efficiency -0.5 dB
Tx. antenna gain -7.0 dBi
Antenna gain for a 4 foot diameter pencil beam antenna is typically 22.2 dBi including
tracking loss. Cable attenuation from the antenna to the input of the low noise amplifier
(LNA) should not exceed 1dB. A circularly polarised receiving antenna is used to
ensure reception of the randomly oriented linearly polarised signal. This results in a
3dB polarisation loss.
Using the above figures the signal strength at the LNA input at maximum range is
calculated as follows:
EIRP 25.0 dBm
Total path losses at 205 km -142.4 dB
Cable loss to LNA -1.0 dB
Polarisation loss -3.0 dB
Net antenna gain 22.2 dBi
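The budget above can be checked with a short script; since every figure is already in dB or dBm, the received level is simply the sum of the terms (a sketch using the paper's numbers):

```python
# Worked check of the link budget: all values are in dB/dBm, so the
# chain reduces to a simple sum. Figures are taken from the text.

def signal_at_lna_dbm(eirp_dbm, path_loss_db, cable_loss_db,
                      pol_loss_db, antenna_gain_dbi):
    """Received signal level at the LNA input (dBm)."""
    return eirp_dbm + path_loss_db + cable_loss_db + pol_loss_db + antenna_gain_dbi

# EIRP: 33.0 dBm Tx power - 0.5 dB feed loss - 0.5 dB efficiency - 7.0 dBi antenna
eirp = 33.0 - 0.5 - 0.5 - 7.0                       # = 25.0 dBm
level = signal_at_lna_dbm(eirp, -142.4, -1.0, -3.0, 22.2)
print(f"Signal at LNA input: {level:.1f} dBm")      # -99.2 dBm
```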
In calculating the System Effective Noise Power the following receiving system
components are considered:
Antenna → loss in cable → LNA → loss in cable → Rx o/p
The effective noise present at the LNA input is considered in two parts; noise
originating from items prior to this point, and noise originating from subsequent items.
For the system as a whole the bandwidth is constant, so the noise powers can be
normalised to absolute temperatures, which can then be summed arithmetically.
Noise TAnt at the LNA i/p from all items prior to the LNA i/p is given by:
TAnt = TSky/α + Tamb(1 - 1/α)
where TSky is the effective sky temperature, Tamb is the ambient temperature and α is
the cable attenuation ratio. A conservative figure for TSky is 160K for a single axis
tracking antenna looking horizontally.
Noise TL at the LNA input from items subsequent to the LNA input arises from the
LNA itself (gain GLNA and noise ratio NLNA), the receiver connecting cable
(attenuation ratio β) and the receiver (noise ratio NRx). Receiver gain is considered to
be sufficiently large for there to be no further noise contribution:
TL = Tamb(NLNA - 1) + Tamb(β - 1)/GLNA + Tamb·β(NRx - 1)/GLNA
TSys = TAnt + TL
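As a rough numerical sketch of this calculation: the sky temperature, bandwidth and antenna cable loss below come from the text, while the 1 dB LNA noise figure, 30 dB LNA gain, 3 dB receiver cable loss and 8 dB receiver noise figure are illustrative assumptions chosen only to show how the quoted noise power arises:

```python
import math

# Sketch of the system noise calculation using standard cascade formulas.
# T_sky, the bandwidth and the 1 dB antenna-to-LNA cable loss are from the
# paper; the LNA and receiver figures are assumed for illustration.
k = 1.380649e-23                 # Boltzmann constant, J/K
T_sky, T_amb = 160.0, 290.0      # sky and ambient temperatures, K
B = 500e3                        # bandwidth, Hz

alpha = 10 ** (1.0 / 10)         # 1 dB cable loss as an attenuation ratio
T_ant = T_sky / alpha + T_amb * (1 - 1 / alpha)

nf_lna_db, g_lna_db = 1.0, 30.0  # assumed LNA noise figure and gain
T_lna = T_amb * (10 ** (nf_lna_db / 10) - 1)

beta = 10 ** (3.0 / 10)          # assumed 3 dB receiver cable loss
nf_rx_db = 8.0                   # assumed receiver noise figure
g_lna = 10 ** (g_lna_db / 10)
# Post-LNA cable and receiver noise, suppressed by the LNA gain:
T_post = (T_amb * (beta - 1) + beta * T_amb * (10 ** (nf_rx_db / 10) - 1)) / g_lna

T_sys = T_ant + T_lna + T_post
noise_dbm = 10 * math.log10(k * T_sys * B) + 30
print(f"T_sys = {T_sys:.0f} K, noise power = {noise_dbm:.1f} dBm")
```

With these assumptions the result lands within a fraction of a dB of the -117.2 dBm quoted in the text.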
With a minimum signal strength of -99.2 dBm and a system effective noise power of
-117.2 dBm as calculated above, the carrier to noise ratio is 18.0 dB. This exceeds the
12 dB carrier to noise performance target by 6dB, but does not provide the required
10 dB fade margin. Also, the effects of deep fades particularly over water cannot be
ignored, and some means for improving the performance under these conditions is
needed.
FADING EFFECTS
The above calculations use worst case figures throughout so that the calculated Carrier
to Noise performance at maximum range should be comfortably achieved. However,
these figures leave little margin for the further signal loss that occurs due to fading,
which results from interference between received signals arriving along different paths.
The two major causes are Atmospheric Multipath Fading and Reflection Multipath
Fading.
Reflection Multipath Fading occurs due to interference between direct and reflected
signals. When the difference in path length is λ/2, or 0.1 metres in this case, the
multipath fading becomes prevalent over water under calm conditions.
Under calm conditions and with horizontal polarisation the reflection coefficient of the
sea surface is close to -1 at all angles of incidence (Fig.1). This results in very deep
fades as can be seen from the calculated results with typical transmitter and receiver
altitudes of 25,000 and 500 feet respectively (Fig.2). As the sea state increases there is
a rapid reduction in the effective reflection coefficient which causes a corresponding
improvement in multipath fading (Fig.3).
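A minimal two-ray sketch illustrates why the calm-sea case is so damaging; the reflection coefficients used below are illustrative values, not taken from the paper's figures:

```python
import cmath
import math

# Two-ray interference sketch: the received field is the direct ray plus
# one sea-surface reflection with coefficient gamma. The fade relative to
# free space depends on the excess path length of the reflected ray.

def fade_db(delta_d_m, gamma, f_hz=1.45e9):
    """Fade (dB, relative to free space) for direct ray + one reflection."""
    lam = 3e8 / f_hz
    phase = 2 * math.pi * delta_d_m / lam
    field = abs(1 + gamma * cmath.exp(-1j * phase))
    return 20 * math.log10(max(field, 1e-9))

lam = 3e8 / 1.45e9               # ~0.207 m at 1.45 GHz
calm = fade_db(lam, -0.99)       # calm sea: near-total cancellation
rough = fade_db(lam, -0.3)       # rough sea: much shallower fade
print(f"calm sea: {calm:.1f} dB, rough sea: {rough:.1f} dB")
```

With a reflection coefficient near -1 the cancellation is tens of dB deep, while reducing the effective coefficient to -0.3 limits the worst fade to a few dB, matching the qualitative behaviour described for increasing sea state.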
METHODS FOR IMPROVING SYSTEM PERFORMANCE
With a nominal 6 dB fade margin the telemetry link is very susceptible to fading, and
would require careful and informed selection of operating conditions. A number of
methods are available for improving performance:
Polarisation Diversity. A loss of 3dB has been included in the above calculations to
allow for transmitter attitude and the need for a circularly polarised antenna. By
utilising parallel LHCP and RHCP feeds this 3 dB can be recovered.
Frequency Diversity. The user requirement for two frequency working effectively
provides frequency diversity.
Antenna gain. This increases with diameter broadly as follows in relation to the 4 foot
antenna:
6 ft 3.5 dB
8 ft 6.0 dB
10 ft 8.0 dB
20 ft 14.0 dB
An increase to 8 feet diameter is a major step with the dish alone weighing over 200 kg
and requiring a substantial pedestal, foundation, and separate radome. Also the narrow
beam width would be incompatible with single axis tracking.
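The tabulated gain steps follow directly from aperture scaling (gain proportional to diameter squared at fixed frequency and efficiency), as a quick check shows:

```python
import math

# Antenna gain relative to the 4 ft reference dish: for fixed frequency
# and efficiency, gain grows as 20*log10(D/D0) with diameter D.
gains = {d: round(20 * math.log10(d / 4), 1) for d in (6, 8, 10, 20)}
for d_ft, delta in gains.items():
    print(f"{d_ft:2d} ft: +{delta} dB over the 4 ft dish")
```

The computed steps (3.5, 6.0, 8.0 and 14.0 dB) reproduce the table above.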
THE OPTIMUM SOLUTION
Antenna diameters of four and six feet are considered to be the maximum for man
portability. Calculated performance at maximum range with increasing system
complexity is given below:
In practical terms a 10 dB fade margin should be the minimum objective, and appears
to be met by configurations 4, 6, 7 and 8, and almost met by 5. However, the table is
inadequate in comparing the merits of polarisation and space diversity as regards
fading. Space diversity is effective in the presence of deep fades and is therefore
considerably better than polarisation diversity in maintaining an acceptable carrier to
noise ratio. Also, by utilising left hand polarisation in one antenna and right hand
polarisation in the other some of the merits of polarisation diversity are obtained at no
increase in cost or complexity.
A block diagram of the selected system (Fig.5) shows direct and multipath reception at
LHCP and RHCP antennas, followed by two receiver channels with full pre-detection
phase combining to recover telemetry signals A and B for processing by the monitoring
sub-system. A further combiner would be needed to provide frequency diversity, giving
a total of four receivers and three combiners. By comparison, full space, frequency and
polarisation diversity requires more complex antennas with a total of eight receivers and
seven diversity combiners. This is not considered to be cost effective for the
performance improvement gained.
SELECTION OF ANTENNA
In terms of both low cost and simplicity single axis tracking is the preferred approach
provided that the resulting limitations in elevation coverage and antenna gain can be
tolerated. Elevation coverage of 45 to 50 degrees may be obtained by the use of a
cosecant-squared distribution, but the consequent 3dB loss cannot be tolerated in this
application. In the late 1970s, Sullivan (3) devised a dual beam approach which
eliminated the gain reduction problem but was more costly since an additional low gain
broad beam antenna was required to provide elevation coverage. This adds cost and
complexity and is more susceptible to damage in the transportable role. In the present
application a durable arrangement is needed which retains antenna sensitivity.
A very clean 'clam shell' construction encloses the feeder and front face of the reflector
in an integral radome, so that a separate radome is not required. Two of the antennas
undergoing final test are shown in Figure 7.
PRACTICAL DETAILS
Each antenna is provided with a trailer both for transportation and as a stable easily
levelled platform for the antennas when deployed in the field.
The remainder of the system is housed in rugged transportation cases with integral
shock mounts for the constituent assemblies. These cases can be rapidly deployed in a
vehicle or building, and have detachable front and rear covers. A standard IBM
compatible PC in ruggedised format provides extensive 'Windows' based display and
monitoring capability. It also provides overall control and logging features for the
receiver system. Other features of the system as deployed will include magnetic tape
recording, spectrum display facilities, an off-air time standard, an oscilloscope and
chart recorder.
CONCLUSIONS
The design of a portable ground telemetry station has been realised using two antennas
with a minimum of four receivers and two diversity combiners. The system combines
space, frequency, and polarisation diversity to provide good performance while tracking
airborne sources at long range over water. When missions require tracking in the
presence of multipath, the system described provides:
• Optimum performance
• Maximum reliability
• Logistics simplification
• Lowest cost
ACKNOWLEDGEMENTS
REFERENCES
(1) Sullivan, Arthur, "Dual Beam Single Axis Tracking Antenna for Tracking
Telemetry Instrumented Airborne Vehicles", Proceedings of the International
Telemetering Conference, San Diego, California, 1979.
(2) Sullivan, Arthur and Augustin, Eugene, "Lowest Cost Alternative to Auto-
Tracking using GPS-Trak, Augustin/Sullivan Distribution, & Single Axis Antenna
Techniques", Proceedings of the European Test & Telemetry Conference, Arcachon,
France, 1993 and the International Telemetering Conference, Las Vegas, Nevada,
1993.
(3) Sullivan, Arthur, "Dual Beam Single Axis Tracking Antenna for Tracking
Telemetry Instrumented Airborne Vehicles", Proceedings of the International
Telemetering Conference, San Diego, California, 1979.
Fig. 1. Reflection Coefficient of Calm Sea at 1.45 GHz.
ABSTRACT
This paper describes a new mobile self contained telemetry station designed for field
testing of air-to-ground weapons. The telemetry station makes creative use of existing
equipment and incorporates a unique dual axis tracking system to provide complete
coverage of most missions.
KEYWORDS
INTRODUCTION
This paper describes a new mobile self contained telemetry station designed to improve
field testing of air-to-ground weapons for the 86th Fighter Weapons Squadron (86 FWS) at
Eglin AFB, FL. The telemetry station has been designed and integrated using existing
telemetry equipment and a new unique dual axis tracking system featuring a low windload
FLAPS antenna. The telemetry station is capable of receiving, recording and displaying
missile telemetry data and video signals during most missions. It comprises a full
complement of telemetry equipment installed and integrated in a refurbished test control
and monitoring van. The antenna system is mounted on a 25 foot erectable pneumatic
mast installed at the front bulkhead of the van.
SYSTEM DESCRIPTION
A block diagram for the complete telemetry station is shown in Figure 1. The major
components are:
The telemetry tracking system design has been tailored for this specific application which
requires a high G/T with a minimum sized antenna. It is also designed for easy
deployment on an erectable tower and for convenient stowing in the van while providing
limited elevation travel and velocity. The system is shown in Figure 2 and comprises
the following sub assemblies:
An antenna reflector assembly consisting of a 4-piece, 6' x 6' FLAPS reflector illuminated
by a high efficiency tracking feed supplemented by a low gain acquisition aid antenna.
The tracking feed is a conical scanner which features a flared aperture for improved G/T.
Figure 2
Telemetry Tracking System
The acquisition aid antenna is a low gain four element array mounted behind the primary
feed and fed as a single channel monopulse array. The FLAPS antenna has been selected
for this application because of its lightweight/low windload properties which allow
operation on a 25 foot tower in wind velocities in excess of 70 mph.
An elevation over azimuth pedestal assembly consisting of a riser base assembly with a
manual reclining/stow mechanism, an azimuth rotator assembly and a limited motion
elevation positioner assembly. The azimuthal coverage is ±100° at a maximum velocity
of 40°/sec. Coverage in elevation is limited to the 0° to +70° sector at a maximum
velocity of 6°/sec.
A digital antenna control unit based on a dual CPU architecture and implemented in two
separate rack mounted units. Both units use standard AT compatible computers. One unit
is for real time computations and it interfaces directly with the servo amplifiers and the
telemetry receiver. The second unit is for the user interface functions and displays and is
centered around a 17" VGA monitor. It also includes a joystick.
TELEMETRY RECEIVER
The telemetry station is currently equipped with only one Microdyne telemetry receiver
Model 1200-MRC but has been designed to accommodate up to four receivers and two
combiners. This additional equipment will be added at a later date when funds become
available. The receiver in use features an RF tuner capable of operation in the 2200 to
2300 MHz frequency range and seven selectable second IF filters. The filters are
selectable at the front panel; the video and other signals are routed from the back
panel of the receiver to a patch panel.
Patch Panels
Four Trompeter patch panels are used to allow easy modification of the RF and video
configuration. One patch panel is located in the RF rack with the Microdyne receiver and
provides access to the many signals available at its rear panel. This patch panel is
designed to support two telemetry receivers and one combiner unit. The other three patch
panels are located in the operator's console and are used to control the routing of the PCM
and video signals. All patch panels and patch cords are 75 ohm impedance.
Demodulator
Distribution Amplifier
This general purpose amplifier is included to buffer the video, PCM, and IRIG-B time
signals and consists of four independent amplifiers each featuring one input and six
outputs. All distribution amplifier inputs and outputs are accessed via one of the patch
panels in the operator's console.
Bit Synchronizer
The telemetry station is also equipped with a high-performance bit synchronizer unit which
converts the analog PCM signal into digital data and clock signals. The bit synchronizer is
a Loral Instrumentation System model DB-530 and is capable of operating at rates up to
10 Megabits/sec. The bit synchronizer inputs and outputs are accessed via one of the
patch panels in the operator's console.
PCM Recorder
The PCM recorder is a Teac model RD-125T four millimeter Digital Audio Tape (DAT)
analog unit which is augmented with a specialized PCM input/output module to allow
recording of the PCM signals. This PCM module provides the Teac recorder with the
capability to record and reproduce up to five PCM data streams and is capable of handling
a combined data rate of 2.2 megabits/sec. The recorder audio/memo track is used to
record and reproduce an IRIG-B time track. All of the recorder inputs and outputs are
accessed via one of the patch panels in the operator's console.
Both IRIG-B and SMPTE time codes are generated by a master time clock based on the
Global Positioning Satellite (GPS) system. The codes are distributed throughout the van
via distribution/buffer amplifier channels. The time code generator uses a small GPS
antenna mounted on the roof of the van and features a front panel LED unit to display
current IRIG time.
Telemetry PC Workstation
Two high-performance Pentium workstations are installed in the telemetry station. Both
workstations are based on 90 MHz Pentium processors, use 16 Megabyte memories, 1
Gigabyte hard disk drives, SVGA graphic adapters, 4 PCI bus slots and are equipped with
17" high resolution rack mounted displays. The two workstations are networked via PCI-
based Ethernet adapters.
The first workstation functions primarily as the user interface portion of the antenna
control unit for real time operation of the dual axis telemetry tracking system.
The second workstation is primarily used as a PCM processing workstation. It is equipped
with a Loral PCM decommutation board and an IRIG-time decoder board. Together with
the Loral VTS-200 software these boards form a powerful PCM processing system and
provide the telemetry station with a PCM data quick look capability. The IRIG time
decoder is used to time tag the PCM data blocks. This workstation also contains a video
board capable of displaying live video in a window.
Video Systems
Two VHS video cassette recorders are used to record and store the video signals
transmitted by equipment on board the Maverick missile. Two 8" color video monitors
and four 8" B&W video monitors are used to display the video signals. These signals are
routed between the video recorders and the monitors via one of the operator's console
patch panels.
Communication Systems
Test control is supported with a Motorola URC-200 transceiver which provides both UHF
and VHF communications capability over a wide frequency range. It is coupled to an
audio amplifier to feed transceiver or VCR audio outputs to the speakers mounted at each
end of the van. The speaker volume levels can be controlled at the audio amplifier
controls and these signals are routed via one of the operator's console patch panels.
Van
The mobile shelter van is a refurbished surplus Ground Launch Cruise Missile Control van
which provides shelter for the telemetry equipment, power for its operation and a platform
for the dual axis telemetry tracking system. This phase of the project included:
1. Removal of old equipment including all original racks located on both sides of the
van. This process yielded greatly increased working space.
2. Rewiring of the primary power distribution network for 60 Hz AC power to
accommodate the new diesel powered 60 Hz generator. This new generator has a
60 kW capacity at 208 VAC, three phase power. It is powered by a turbo-charged
4 cylinder John Deere diesel engine and its fuel supply is stored both in the van
fuel tanks and in a 25 gallon tank located in the generator base. This new unit
replaced the original equipment 400 Hz turbine generator.
3. Conversion of the original 400 Hz heavy duty air conditioner unit for operation at
60 Hz AC. This conversion required replacement of two 7 hp fan motors and of the
7 ton compressor unit, and rewiring of the control unit to allow use of standard
commercial air conditioner control parts.
4. Installation of new walls with unistrut fixtures, of new carpeting and of a new
workstation table surface. Figure 3 shows the layout of the operator's console
rack.
5. Installation of an erectable antenna mast for the dual axis telemetry tracking
system. This mast is a 25 foot extendable unit which is raised or lowered using a
pneumatic system and a small air compressor mounted in the van next to the air
conditioner. Controls and gauges are mounted close to the mast to facilitate its
raising and lowering. The platform at the top of the mast is fitted with a
folding/stowing mechanism to secure the antenna pedestal during transportation;
the FLAPS antenna is removed and stored inside the van.
Figure 3
Operator's Console
CONCLUSION
This program has demonstrated the feasibility of fielding a self contained special purpose
mobile telemetry station at a reasonable cost. This was achieved through a flexible
implementation of a standard design which allowed for the use of existing telemetry
equipment and the incorporation of a unique dual axis telemetry tracking system.
ACKNOWLEDGEMENTS
The authors wish to thank Capt. Joe Barton of the 86th Fighter Weapons Squadron at
Eglin AFB, Florida for his invaluable guidance and sponsorship of this project.
CANISTER MULTIPATH AND THE CLOSE COUPLED ANTENNA
Juan M. Guadiana
Naval Surface Warfare Center
White Sands Missile Range, NM
(505)678-6359
Jesus Rivera
Russel Jedlicka
Physical Science Laboratory
New Mexico State University
(505)678-2682
ABSTRACT
The effects of multipath in telemetry applications are very well known and the approaches
to minimizing these effects are the subject of countless books, papers and articles.
Multipath once again rears its head as the U.S. Navy fields the MK-41 Vertical Launching
System (VLS), a launching system in which each missile is housed in a canister which is
both magazine and launch mechanism. The Canister is designed to protect the missile from
Electro Magnetic Interference (EMI), Radio Frequency Interference (RFI) and the
environment. As can be expected, a canister designed to prevent Radio Frequency (RF)
energy from entering should inherently prevent any RF from escaping, and renders the
canister environment ripe with multipath. Pre-Launch telemetry checks, essential to the
conduct of a missile flight test, become unreliable events which at times result in aborted
missions.
Today the “encanistered” missile system enjoys wide acceptance, in the U.S. as well as
internationally. Since any missile radiating in a closed volume inherently suffers from these
multipath degradations, it is important to disclose the results of Navy testing conducted on
the canister as well as the mission observations of the multipath effects. The mission
observations described are “signature” traits of the degradations which should have
been attributed to multipath. Clearly many missions and tests were affected, but most were
simply ignored by an oblivious test team. A short summary of the canister multipath
investigation follows, including unexpected findings, and finally a discussion is given on the
Close Coupled Antenna and its effectiveness in mitigating the canister multipath.
BACKGROUND
[Figure: Canister layout - treated corrugations, forward and aft closures, dual parasitic antenna]
In 1993 a Standard Missile Production Transition Round (PTR) aborted with nearly
the same signature as the earlier PSR. However, as soon as the missile was safed by the
launcher, the multipath disappeared and satisfactory data processing resumed. The
multipath appeared to correlate with the arming device in the canister, which physically
armed the rocket motor at T-8 seconds and safed the missile afterward. The following day
the phenomenon was repeated with a test missile: upon power application, the telemetry
signal was unusable, with plenty of signal strength but a severe null in the middle of the
carrier spectrum (see figure 2). At rocket motor arm the null would disappear, allowing
normal synchronization and processing, and at motor safe the signal would degrade again.
Working from a fax of the spectra, Gene Law at the Pacific Missile Test Center
reproduced the spectra on a computer in the lab, determining that the canister/missile
system was acting like a very high Q notch filter. The canister multipath was vulnerable to
minute changes in the canister volume, to changes in electrical paths and ground planes,
as well as to moving missile or canister parts.
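A toy model reproduces the notch-filter behaviour: treat the canister as adding a single strong delayed replica of the signal, which yields a comb-like channel response with nulls repeating every 1/τ. The 100 ns delay and reflection magnitude below are illustrative values, not measured canister parameters:

```python
import cmath
import math

# Single delayed-reflection ("comb filter") model of the canister channel:
# H(f) = 1 + gamma * exp(-j*2*pi*f*tau). With gamma near -1 the nulls are
# very deep and repeat every 1/tau in frequency.
tau = 100e-9            # assumed excess delay -> nulls every 10 MHz
gamma = -0.95           # assumed reflection magnitude

def channel_db(f_hz):
    """Channel response in dB relative to the direct path alone."""
    h = abs(1 + gamma * cmath.exp(-2j * math.pi * f_hz * tau))
    return 20 * math.log10(max(h, 1e-9))

null_spacing_mhz = 1 / tau / 1e6
print(f"null spacing: {null_spacing_mhz:.0f} MHz")
print(f"at 2200.0 MHz: {channel_db(2.200e9):.1f} dB")   # on a null
print(f"at 2205.0 MHz: {channel_db(2.205e9):.1f} dB")   # between nulls
```

This sort of periodic null structure in the 2200-2300 MHz telemetry band is consistent with the behaviour described for the measured canister spectra.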
Using a network analyzer the first measurements were made on a MK-13 Canister at Port
Hueneme, Ca. The analyzer was swept from 2200 to 2300 MHz to disclose the canister
characteristics in the telemetry band. The missile omni and directional antenna as well as
the canister antenna were connected to the network analyzer. This configuration is shown
in Figure 3. The analyzer plots (Figure 6) clearly depict the hostile environment inside the
canister: quite a few nulls, some quite deep. The actual environment is likely worse, as
these measurements are affected by the test equipment's dynamic range and scan rate.
[Figure 3: Network analyzer test configuration - analyzer, missile/canister system, plotter]
In hopes of finding a solution, the Canister Multipath Team enthusiastically took a few
stabs at determining what affects the multipath. Radio Frequency Absorbent
Material (RAM) was placed at many locations in the canister with little effect. The
absorbent material appeared to be effective when covering the forward closure, but then
the signal at the antenna disappeared; apparently the acquisition of the signal itself is
dependent on the longitudinal multipath. Measurements were also made with several
sophisticated antennas, most of which performed poorly. An eight element array was
tested in the hopes that the multipath components would simply average out resulting in an
improved signal, but little improvement was observed. Measurements were made with
various combinations of Absorbent Material and Antenna locations. It was disconcerting
that little improvement could be made over an antenna that is simply a piece of brass
brazing rod soldered to an “N” connector center pin. One monopole antenna was removed
from the canister and taped on a broomstick so that it could be placed in close proximity
(approximately 1”) to one missile antenna pad. The perturbations in the spectra leveled out
dramatically, indicating that coupling was taking place.
[Figure 5: Prototype CCA - 2" endfiring stripline on a machined aluminum mount; side and end views]
These measurements provided an interesting insight into the nature of the canister
multipath. It became apparent to the Multipath team that there was a bandwidth ceiling
for the MK-13 canister, and a planned test utilizing a 9.6 Mbit/sec data rate would likely
fail if the dual antenna approach alone was used. Since the nulls appeared at less than
10 MHz intervals, nulling at both antennas was expected. This would have prevented an
important planned flight test, but there was enough time to design a close coupled antenna
which functioned well. The approach selected places a stripline monopole within the close
coupling region of the missile antenna radiating ports; ironically, the paramount issues
were mechanical. The
prototype CCA assembly (shown in Figure 5) could not interfere with the missile egress in
any way and the assembly had to withstand the launch environment intact. This CCA
suffers from its close proximity to the missile guide rail but still delivers flatness within 7
dB as shown in Figure 6. The acquired telemetry signal was readily reconstructed by the
SPV resulting in spectra identical to the nominal.
FIGURE 6 CCA VS MONOPOLE NETWORK ANALYZER COMPARISON.
CONCLUSIONS
The ideal solution to this problem is simply to hardwire the signal out of the canister, but
that is generally a long term solution. The CCA provides short term relief to this
aggravating problem. The CCA shown was employed successfully on wide band
telemetered rounds flown at White Sands as well as the Pacific Missile Range, Hawaii, in
1996. The CCA is transparent in operation and requires no additional equipment or
special consideration. Its acquisition is also stable and not subject to changes in
temperature, etc., as would be expected from the coupling. If positioned improperly it
still functions the same as the monopole. Usage with AM detection acquisition methods
further improves acquisition reliability as well.
ACKNOWLEDGEMENTS
ABSTRACT
Generally, to meet the Telemetry and Tracking functions in space probes, RF packages are
realised using dedicated circuit configurations and different building blocks. While this
approach is warranted for certain Space missions, for some Space programmes, which are
basically Technology demonstrators and where the main emphasis is on higher flexibility
with minimal complexity, usage of multifunction RF modules (MFRM) would be highly
advantageous.
The MFRM, which can be considered as a RF package, has a flexible configuration and is
built around Common basic building blocks like broadband MMIC, wide band amplifiers,
switches, Dielectric Resonant Oscillators (DRO), Numerically Controlled Oscillators
(NCO), etc. It also has a Microcontroller, whose function is to select the required
configuration and make necessary interconnections between the building blocks, so as to
achieve a specific end function, based on the pre set commands from system designer. The
commands can either be preprogrammed or they can be through uplink Telecommand
signals from the ground stations.
A brief outline of the results of the proto unit of a MFRM which can be configured for
different end RF functions, through a microcontroller is presented in the paper.
The advent of Cellular Communication has given a tremendous boost to the components
industry and the main focus now is to develop and deliver highly integrated device designs
to the user. The present trend is to incorporate more RF functions on to the same piece of
semiconductor, silicon or Gallium Arsenide. Already, integrated power amplifier/switch
and transceiver multifunction MMICs are available.
With such backing from the component Industry, it is only natural for the present
system designer to concentrate his efforts on achieving maximum benefits with minimum
complexity. Exploitation of the programmable features of the state-of-the-art devices
available in the market today is one way to achieve this objective.
Instead of having dedicated circuit configurations and different building blocks for the RF
packages as in earlier days, the present technology provides an opportunity to have a
flexible, general configuration, centered around common wideband building blocks and
then select the required configuration to achieve a specific end function of the RF package.
One typical area, where MFRM packages score over conventional packages, is in the
Microsat missions in LEO, which are used for scientific purposes and other Technology
demonstrators. The very nature of the Microsat precludes the luxury of different,
dedicated RF package usage. Instead a package with multifunction capabilities would
allow the end user to achieve his mission objectives at reduced complexities and in a
shorter time.
MFRM:
The merits of the MFRM approach can better be understood by comparing it with the
conventional schemes followed for the realization of Telemetry Transmitter and a CW
beacon used for tracking purpose.
Conventional:
Fig(1) shows the simplified block diagram of a Microwave telemetry transmitter. Here the
final carrier frequency is generated from a VHF crystal oscillator, with successive
multiplications and filtering. The carrier frequency is modulated by the base band in a
modulator, and then amplified by a power amplifier and routed to the antenna through an
isolator/circulator.
For a CW beacon, the circuit configuration is similar to that of the Telemetry transmitter,
except that no base band modulation is employed.
MFRM:
The MFRM scheme adopted in the proto unit is shown in a simplified manner in fig(2). Its
main building blocks are as follows.
.DRO: The Dielectric Resonator Oscillator, which operates in the fundamental mode at
Microwave frequency with the required stability.
.Broad Band Switches: They direct the signal routing between the selected input/output
blocks, depending upon commands from the controller. They also function as
modulators.
.Power Amplifiers: They provide the required power output levels. They can also be
operated in either CW mode or pulsed mode, through an appropriate control logic
interface.
.Controller:
All the flexibility/multifunction capability needed for the package is provided by the
Controller, whose requirements are:
.End Function selection
.Activation of the appropriate blocks needed for the selected end function.
.Routing of the signals in the required fashion.
.Operation in a preset programmed mode or through uplink Telecommand signals from the
ground stations in real time.
.Self check capability
.Abort mode in case of any abnormality in the package behaviour.
.Other requirements as desired by the mission.
In the proto unit, an 8031 microcontroller has been used and the software is written in
assembly language. The following are the multifunctions realized by the proto unit:
1. Transponder mode(pulse)
2. CW beacon.
3. Telemetry Transmitter.
The activated elements for each specific function are shown by the shaded blocks, and the
results are given in Table 1.
The shaping of the detected input pulse and the pulse modulator ON/OFF timings are
controlled by the controller through appropriate logic interfaces. The activated blocks for
the transponder mode are shown in fig(3).
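The controller's role described above amounts to a lookup from the selected end function to the set of building blocks to activate, plus an abort path. The sketch below is illustrative Python, not the actual 8031 assembly firmware; the mode and block names are assumptions for illustration.

```python
# Hypothetical sketch of MFRM end-function selection: each mode
# activates a subset of the common building blocks. Names are
# illustrative, not taken from the actual 8031 firmware.

MODES = {
    "telemetry_tx": {"DRO", "modulator_switch", "power_amp"},
    "cw_beacon":    {"DRO", "power_amp"},          # no baseband modulation
    "transponder":  {"DRO", "pulse_detector", "pulse_modulator", "power_amp"},
}

def select_mode(mode, self_check_ok=True):
    """Return the set of blocks to activate, or abort on abnormality."""
    if not self_check_ok:
        return set()           # abort mode: deactivate everything
    if mode not in MODES:
        raise ValueError(f"unknown end function: {mode}")
    return MODES[mode]
```

The same table-driven structure accommodates preset programmed modes as well as real-time mode changes via uplink telecommand.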
SUMMARY
With the satisfactory performance of the MFRM proto, modifications are planned to
incorporate a Direct Digital Synthesizer (DDS) module in the unit to act as the basic
frequency-generating source, with its attendant advantages.
It should be mentioned that the significant progress made in the miniaturization of
devices (such as chip-form packaging) should also favour MFRM users in keeping the
size and weight of the packages small.
CONCLUSIONS
ACKNOWLEDGEMENTS
The authors wish to thank Dr. Srinivasan, Director, VSSC (ISRO), Trivandrum, for giving
his consent to publish this paper. The authors, also acknowledge the encouragement and
support given by Mr. U.S. Singh, GD, ELG, VSSC, Mr. N.K. Aggarwal, Head, ELMD,
VSSC, and Mr. R.N. Singh, Dy.Head, ELMD, VSSC in carrying out the MFRM
developmental work.
REFERENCES
(1) Ron Schneiderman; Silicon or GaAs? Who is winning the wireless war?;
Microwaves & RF, Vol 33, No 10, October 1994.
(2) David B. Chester; Single Chip Digital Down Converter Simplifies RF DSP Applications;
RF Design, November 1992.
(3) J. Feustel-Buechl; Flight Opportunities for Small Payloads; ESA Bulletin, No 60,
November 1989.
Table 1
For the proto unit, the following are the main results under the various modes:
1. CW Beacon
2. Transponder Mode: Sensitivity -70 dBm; PRF 1171 Hz
3. Telemetry Transmitter: Power +33 dBm; Modulation PCM
ABSTRACT
Modern High Density Digital Recorders (HDDR) are ideal devices for the storage of large
amounts of digital and/or wideband analog data. Ruggedized versions of these recorders
are currently available and are supporting many military and commercial flight test
applications. However, in certain cases, the storage format becomes very critical, e.g.,
when a large number of data types are involved, or when channel-to-channel correlation is
critical, or when the original data source must be accurately recreated during post mission
analysis. A properly designed storage format will not only preserve data quality, but will
yield the maximum storage capacity and record time for any given recorder family or data
type.
This paper describes a multiplex/demultiplex technique that formats multiple high speed
data sources into a single, common format for recording. The method is compatible with
many popular commercial recorder standards such as DCRsi, VLDS, and DLT. Types of
input data typically include PCM, wideband analog data, video, aircraft data buses,
avionics, voice, time code, and many others. The described method preserves tight data
correlation with minimal data overhead.
The described technique supports full reconstruction of the original input signals during
data playback. Output data correlation across channels is preserved for all types of data
inputs. Simultaneous real-time data recording and reconstruction are also supported.
KEY WORDS
For these reasons, a development program was initiated by Aydin Vector Division and
Calculex, Inc. The main goal of the program was to implement the features needed for
improvement while maintaining the baseline features of the existing ARMOR format. An
overview of the specific development goals is shown below:
! Establish a common recording format which works well with different recording
equipment and technologies at data rates up to 240 Mbps.
! Accommodate a wide variety of data types within a single format including the
ability to efficiently record low rate data and high rate data during the same
recording session.
The CTL controller card provides the interface between the MiniARMOR-700 system and
the digital recorder. Each MiniARMOR-700 system is configured with one or two CTL
cards based on the type and number of recorders in use. In the case of multiple recorders
(used for tape duplication and/or extended cascade recording), both recorders are assumed
to be of the same type. The CTL card provides the data transfer to/from the recorder, and
provides a serial port for communication with a host PC or with an external control panel.
In addition, some recorders (such as the DCRsi) also require a serial command port which
is also provided by the CTL card.
The MiniARMOR input cards acquire a wide variety of asynchronous digital and analog
data types. Multiple cards of the same type can be placed within a single MiniARMOR-
700 chassis depending on the number of slots available. Cards are available for the
following types of data:
! High speed serial data with synchronous clock (PCM, burst, etc.)
! High speed serial data without clock (Bit Sync option)
! Avionics data buses
! Wideband analog at up to 1 MSPS per channel (provides presample filtering)
! Video Codec for NTSC, PAL, or S-video formats
! Parallel digital data
! Channel-to-channel time correlation
Generally, the MiniARMOR Data Format takes two forms. The physical format describes
the information as it is related to a particular type of recording equipment and takes into
consideration the media and the recorder's motion control system. This paper does not
address the physical format except to say that the MiniARMOR format is compatible with
most types of high density digital recorders including DCRsi, VLDS, DLT, and others.
The logical format is only relative to the structure and content of the information stored by
the recorder without regard to the physical format in use. This paper addresses the
MiniARMOR logical format.
Format Architecture
Two main divisions exist within the MiniARMOR Data Format; namely, the Recorded
Header and the Data Block. The first time a recording session begins, a Recorded Header
is recorded followed by a succession of Data Blocks. If the recording session is interrupted
(RECORD to STOP to RECORD), a new Recorded Header is recorded. This architecture
minimizes the number of Recorded Headers and improves the overall recording efficiency.
The Recorded Header contains all overhead information related to the Data Blocks
recorded on tape. See figure 1. These elements are itemized below and discussed in detail
in the sections that follow:
! Sync Block information is provided to support high speed data search and retrieval.
! A complete definition of the hardware configuration used to condition and record
the data is placed in the Header to support automatic hardware configuration during
data playback.
! Individual channel setup parameters are defined including gains, offsets, and other
information needed during post-mission analysis.
! The name of the file and the date of the recording are included in the Header for
archiving.
Figure 1. Recorded Header structure: Sync Block, General Setup, Controller Setup,
Channel 1 Setup ... Channel n Setup, File Name / Date, Format Scan List, Checksum.
Sync Block
For proper configuration of the hardware during playback, one needs to know the recorded
configuration which is either stored on a disk or as part of the Recorded Header. To
retrieve the configuration from the Recorded Header, a processor must be involved in the
loop to interpret the information. However, the location of the Recorded Header should be
marked in such a way that it is independent of the physical format. This, in turn, requires
logical record marking to identify the beginning of each recording session. The approach
used here is to mark a Recorded Header with a sync block that a hardware state machine
can detect without the need for a general purpose processor. This method speeds up the
search of the Recorded Header regardless of the recorder type.
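The hardware sync detection described above can be sketched as a shift register compared against a fixed pattern on every bit time, so a Recorded Header is found without any general-purpose processor in the loop. The pattern value and width below are illustrative assumptions, not the actual MiniARMOR sync word.

```python
# Sketch of sync-block detection as a hardware-style state machine:
# a shift register is compared against a fixed pattern each bit time.
# Pattern value and width are illustrative assumptions.

SYNC_PATTERN = 0xFAF320        # assumed 24-bit sync word
SYNC_WIDTH = 24

def find_sync(bits):
    """Return the bit offset just past the first sync pattern, or -1."""
    shift_reg = 0
    mask = (1 << SYNC_WIDTH) - 1
    for i, b in enumerate(bits):
        shift_reg = ((shift_reg << 1) | b) & mask
        if i >= SYNC_WIDTH - 1 and shift_reg == SYNC_PATTERN:
            return i + 1
    return -1
```

In hardware the comparator and shift register run at the recorder's bit rate, which is why the search works at full speed regardless of recorder type.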
Header Setup
To optimize the recording session, the approach used places all information pertaining to
the configuration of the hardware, and its settings, only once per recording session. The
rest of the record session is dedicated to actual data. The information in the header file
must include detailed configuration of each card and each channel used. This information
is used during playback to reconfigure the playback hardware down to the channel level.
This file includes the serial number of every card used, the location of every card in the
chassis, the configuration of each channel, and the format structure. Using the header, the user can
identify the complete configuration of the recording hardware, and the structure of the data
blocks.
General/Controller Setup
The Recorded Header contains the general setup used during the recording session
associated with the system as a whole. The information in this setup includes software
version, clock information, chassis size (-505, -507, ...), time code mode (IRIG A, B, or
G), aggregate bit rate of the system, etc.
The header setup also contains the controller setup used to identify the type of tape
controller in the MiniARMOR. The controller card provides the interface between the
MiniARMOR system and the digital recorder. The MiniARMOR system must be
configured with one of several types of controller cards based on the type of recorder used,
i.e. AMPEX3 DCRsi, Metrum4 VLDS-B, Lockheed Martin5 VLDS-BR, SCSI, etc. In some
cases, in order to extend the recording capacity of the system, two recorders are used. This
requires the system to be configured with two controller cards (except for the SCSI
controller which can interface with up to 7 recorders).
The controller setup holds the information defining Master vs. Slave, type of
recorder/controller used, the compile mode used (enhanced vs. enhanced compatible),
communication baud rate, and SCSI drive identifier (primary vs. secondary).
Channel Setup
The Channel Setup is an extensive description of the signal acquisition hardware used to
condition each input channel prior to recording. This information includes the channel
type, bit rate or sample rate of the channel, samples per frame, filter settings, offset
settings, channel number within a card, card location in the chassis, and the card serial
number. The information provided for each channel may vary slightly based on the type of
channel (analog, digital, communication bus, etc.).
This setup information is used to identify the hardware used during recording, and to
automatically setup the playback output channel in order to correctly reproduce the
original signal.
Format Scan List
The Format Scan List describes the data format of the data block used during the recording
session. This format is sufficiently detailed to identify each channel data block within the
overall data block. A simplified view of the format is provided in figure 2. The format is
loaded to the controller card (CTL) as part of the header, and is loaded to the Multiplexer/
Demultiplexer ( MDX ) card for execution of the format during record and playback
sessions. The format in the Header is used during playback to reconstruct the recorded
data.
The first section of the format contains overhead pertaining to the actual format. This
includes synchronization words (32-bit Barker code), frame time words, a CRC word, and
fill words when needed to meet system clock requirements. Every word in the format has
an associated data source tag (record), data destination tag (playback), and word width
(bits per word). The source tag identifies the input channel, and the destination tag
identifies the data destination channel. By manipulating the data destination tag, one could
map a compatible output channel to an input channel. The synchronization words, frame
time words, CRC word, and fill words are considered as overhead and are therefore
generated by the MDX formatter.
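The scan-list idea above — every word carrying a source tag, a destination tag, and a width, with overhead words generated by the formatter rather than pulled from an input — can be sketched as follows. Field names and the overhead convention are illustrative assumptions, not the actual MiniARMOR data structures.

```python
# Hedged sketch of a Format Scan List entry and a multiplexer pass over
# it: overhead words (sync, frame time, CRC, fill) are generated, data
# words are pulled from the channel named by the source tag.

from collections import namedtuple

ScanWord = namedtuple("ScanWord", "source dest width")

OVERHEAD = "overhead"          # formatter-generated words

def multiplex(scan_list, channels):
    """Build one frame from the scan list and per-channel sample queues."""
    frame = []
    for w in scan_list:
        if w.source == OVERHEAD:
            frame.append(0)                                  # generated word
        else:
            frame.append(channels[w.source].pop(0) & ((1 << w.width) - 1))
    return frame
```

Remapping playback is then just a matter of rewriting the `dest` tags, which mirrors the text's point that manipulating the destination tag maps a compatible output channel to an input channel.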
Figure 2. MiniARMOR Format Structure: Frame Overhead, followed by Ch 1 Overhead
with Channel 1 data, Ch 2 Overhead with Channel 2 data, ..., Ch n Overhead with
Channel n data.
The second section of the format contains the channel data blocks, each holding
consecutive samples of one channel. Due to the wide variety of data types (i.e. PCM,
video, communication busses, and analog), it was decided that a channel may have 8, 12,
16, 20, or 24 bits per sample, based on the channel type, at recording rates of up to 107
Mbps. At recording rates above 107 Mbps, samples with resolutions of 8 and 12 bits per
sample are packed as two samples per word. The advantage of this method is that it limits
the word rate on the system bus to 16 Mwps even if the recording rate is 240 Mbps. A channel data block
includes both channel overhead and data. The overhead includes sufficient information for
the output reconstructor to reliably reconstruct the input data. For example, in the case of a
PCM channel data, the overhead of the channel specifies the number of valid data bits to
follow in that frame.
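The packing rule above can be checked arithmetically: packing two 8- or 12-bit samples per word above 107 Mbps keeps the system-bus word rate at or under 16 Mwps even at 240 Mbps. The thresholds are from the text; the function itself is an illustrative sketch.

```python
# Word-rate arithmetic for the sample-packing rule described above.
# Below ~107 Mbps: one sample per word; above it, 8- and 12-bit
# samples are packed two per word.

def word_rate_mwps(record_rate_mbps, bits_per_sample):
    packed = record_rate_mbps > 107 and bits_per_sample <= 12
    bits_per_word = bits_per_sample * (2 if packed else 1)
    return record_rate_mbps / bits_per_word
```

At 240 Mbps, 8-bit samples packed two per 16-bit word give 240/16 = 15 Mwps, within the 16 Mwps bus limit stated above.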
SOFTWARE
In the Play mode, there are two sets of interactive databases. The first database is
associated with the recording hardware configuration. In many cases, the hardware
configuration used during recording is different from the configuration used in playback.
During the Playback mode, the Recorded Header is read by the software either from a
disk, or directly from the Recorded Header. The Recorded Header configuration is virtual
and can be viewed using the software. Changes to this configuration are not allowed since
it reflects the hardware configuration used during recording, and its format. The software
allows the user to view this information in detail.
The second database is associated with the hardware playback configuration. For the
Playback setup, the output channels are mapped to the input channels used during Record.
In this case too, the software scans the MiniARMOR hardware used for playback to ensure
that any hardware configuration discrepancies are brought to the attention of the user. The
output channel configuration assumes the configuration of the input channel mapped to it.
The user can determine the input-to-output channel mapping through either a default
setting (automatic) or manual setting (override).
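The default-or-override mapping described above can be sketched as below: each recorded input channel is automatically paired with an unused compatible output channel unless the user overrides the assignment. The names and the type-equality compatibility rule are illustrative assumptions, not the actual software's behaviour.

```python
# Sketch of input-to-output channel mapping at playback: automatic
# (default) assignment by channel type, with optional manual overrides.

def map_channels(inputs, outputs, overrides=None):
    """inputs/outputs: {name: type}. Returns {input_name: output_name}."""
    mapping, used = {}, set()
    overrides = overrides or {}
    for in_name, in_type in inputs.items():
        if in_name in overrides:                     # manual setting (override)
            out = overrides[in_name]
        else:                                        # default setting (automatic)
            out = next((o for o, t in outputs.items()
                        if t == in_type and o not in used), None)
        if out is not None:
            mapping[in_name] = out
            used.add(out)
    return mapping
```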
CONCLUSION
This paper described a highly efficient recording format which is independent of the
recorder type. It accommodates a wide variety of data types and supports data
reconstruction both in real time during recording and post mission during playback.
2. Berdugo, Albert, "Miniature Asynchronous Real Time Multiplexer & Output Reconstructor
(MiniARMOR)", European Telemetry Conference 1996, Garmisch-Partenkirchen, Germany, May, 1996.
4. Metrum, Inc.
ABSTRACT
The ability to acquire real-time video from flight test platforms is becoming an
important requirement in many test programs. Video is often required to give the
flight test engineers a view of critical events during a test such as instrumentation
performance or weapons separation. Digital video systems are required because
they allow encryption of the video information during transmission. This paper
describes a Digital Video Telemetry System that uses improved video
compression techniques which typically offer at least a 10:1 improvement in image
quality over currently used techniques. This improvement is the result of inter-
frame coding and motion compensation which other systems do not use. Better
quality video at the same bit rate, or the same quality video at a lower bit rate is
achieved. The Digital Video Telemetry System also provides for multiplexing the
video information with other telemetered data prior to encryption.
INTRODUCTION
Aydin Vector has formed an alliance with Delta Information Systems (Delta) to
develop an enhanced airborne compressed digital video encoder and decoder.
Delta is an internationally recognized leader in both the development of video
compression standards and their implementation in both hardware and software.
After careful consideration of the requirements and possible solutions, a system
implementation has been developed which applies advanced video compression
standards to airborne video data links. Aydin now offers a series of standard
product modules which incorporate compressed digital video into time division
multiplexed data links.
BACKGROUND
The inclusion of video and encrypted video into a telemetry data link poses some
unique challenges to the system designer. An analog video signal of 4 MHz bandwidth
could be included on a data link sub-carrier; but, if the signal requires
encryption, it must be digitized. A monochrome video signal digitized with a
resolution of 640 pixels / line, 480 lines / frame, 8 bits / picture element, at a rate
of 30 frames / second results in a transmitted bit rate in excess of 73 Mbps. A
color video signal at the same resolution and frame rate may require two to three
times that bit rate. Numbers of this magnitude are the direct reason that video data
compression techniques have been developed.
Several video encoding methods have been explored. In 1993 the U.S. Range
Commanders Council introduced IRIG-STD-210-93 which describes a method of
encoding a RS-170 black and white video stream using Differential Pulse Code
Modulation (DPCM) coding. This is strictly an intra-frame coding technique where
each frame of digitized video information is compression coded independently of
other frames. No advantage is taken from the fact that the differences between
successive frames may be small. The IRIG-STD-210 technique results in an
average of 3 bits per pixel and a transmitted bit rate, for similar conditions as
described above, between 10 Mbps and 28 Mbps depending upon the extent of
entropy coding.
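The bit-rate arithmetic above can be checked directly: raw monochrome video at 640 x 480 x 8 bits x 30 frames/s comes to just under 74 Mbps ("in excess of 73 Mbps"), and IRIG-STD-210 DPCM at an average of 3 bits per pixel lands at about 27.6 Mbps, the upper end of the quoted 10-28 Mbps range.

```python
# Spelling out the video bit-rate figures quoted in the text.

def video_bit_rate(pixels_per_line, lines, bits_per_pixel, frames_per_sec):
    return pixels_per_line * lines * bits_per_pixel * frames_per_sec

raw = video_bit_rate(640, 480, 8, 30)    # raw monochrome: 73,728,000 bps
dpcm = video_bit_rate(640, 480, 3, 30)   # DPCM at 3 bits/pixel: 27,648,000 bps
```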
The telemetry data links on the national test ranges are typically limited to
handling data rates up to 5 Mbps. Although there is a move to upgrade these
capabilities to 10 Mbps and even higher, one can see that bit rate (bandwidth) is a
premium commodity. Unless video is the primary information source, it takes a
back seat to the "measurement" telemetry. The goal is to transmit more with less.
Multiplexing compressed video with measurement telemetry in a single link
provides significant advantages in bandwidth utilization, and telemetry system
simplification.
Figure: H.261 encoder and decoder block diagrams (SW = switch; DCT/IDCT =
forward/inverse discrete cosine transform; QUANT/IQUANT = quantizer/inverse
quantizer; HUFF = Huffman coder; MC = motion compensation).
The difference between the
current frame and the predicted frame is then taken. The DCT is applied to this
difference frame. The decoded previous frame is used instead of the actual
previous frame in order to remove any quantization error that may accumulate in
the decoder. Typically a large portion of the frame has no change, so this
difference approaches zero.
In areas of the frame where there is motion, the frame-to-frame difference may be
large. In these areas Motion Compensation is used to improve the predicted frame
prior to calculating the frame differences. Motion Compensation relies on the fact
that from frame to frame much of the scene does not change in content. The
position of objects may change, but the significant content is similar. This process
compares an area of the current frame with an offset area of the previous frame.
The offset area is the frame location from which an object has moved. A motion
vector is used to indicate the changed position of the object and replaces the
transform coefficients thereby significantly reducing the number of bits required for
transmission. This technique improves the inter-frame performance by accounting
for the motion of objects from frame to frame. By accounting for motion, the frame
differences are reduced allowing less information to be sent to represent each
new video frame.
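Block-matching motion compensation as described above can be illustrated with a toy search: for a block of the current frame, scan a small window of offsets in the previous frame and return the offset (motion vector) of the best-predicting block. The exhaustive search and sum-of-absolute-differences metric below are common choices assumed for illustration, not necessarily those of the described system.

```python
# Toy block-matching motion estimation: frames are lists of rows of
# pixel values. Returns the (dx, dy) offset into the previous frame
# that best predicts the current block (minimum SAD).

def find_motion_vector(prev, cur, bx, by, bsize=4, search=2):
    h, w = len(prev), len(prev[0])
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= by + dy and by + dy + bsize <= h and
                    0 <= bx + dx and bx + dx + bsize <= w):
                continue                     # candidate block off-frame
            sad = sum(abs(cur[by + r][bx + c] - prev[by + dy + r][bx + dx + c])
                      for r in range(bsize) for c in range(bsize))
            if best is None or sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv
```

When a good match is found, only the motion vector (and a small residual) needs to be coded instead of the block's transform coefficients, which is the bandwidth saving the text describes.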
Forward Error Correction is also applied to the compressed data stream in order
to reduce the effects of transmission errors. A (511,493) BCH code is used which
provides 18 parity bits for every 493 data bits. This results in a 3.7% overhead but
provides for correction of up to two random errors per block, thus supporting bit
error rates as high as 4 in 1000. Other methods are optionally available, the most
common being the use of a Viterbi encoder. Although the Viterbi encoding
method is highly robust (up to 6 dB gain can be realized) it also requires more
transmission bandwidth. The most commonly used Viterbi encoder uses a
constraint length of 7 and rate 1/2. This requires an output bandwidth double that
of the un-encoded bit stream.
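The FEC figures quoted above can be verified arithmetically: the (511,493) BCH code adds 18 parity bits per 493 data bits (about 3.7% overhead) and corrects up to two errors per 511-bit block (roughly 4 errors per 1000 bits), whereas a rate-1/2 Viterbi code doubles the transmitted bandwidth.

```python
# Checking the forward-error-correction overhead figures from the text.

n, k = 511, 493
parity = n - k                          # 18 parity bits per block
bch_overhead = parity / k               # ~0.0365, i.e. about 3.7%
correctable_ber = 2 / n                 # ~0.0039, roughly 4 in 1000

viterbi_rate = 1 / 2                    # constraint length 7, rate 1/2
bandwidth_expansion = 1 / viterbi_rate  # output bandwidth doubles
```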
The result is that the H.261 algorithm provides significant compression gains over
the DPCM and motion JPEG algorithms without the high data latency associated
with the MPEG and Fractal algorithms. The use of Wavelets for motion video is
still under development.
The encoder applies the H.261 compression algorithm to the video data. This
consists of the inter-frame difference, Motion Compensation, DCT Transform,
Quantization, Huffman encoding, and BCH coding. Programmable parameters
allow selection of picture resolution and maximum quantization level. Two picture
resolutions are currently provided: Normal and Low. The Low resolution mode
provides one quarter the resolution of the Normal mode. This is useful for low bit
rate applications. The quantization level is a value from 1 to 32 which specifies the
quantizer step size. Lower values provide high picture quality while larger values
yield higher frame rates. Selection of these parameters allows the user to trade off
picture quality for frame rate at a given compressed data bit rate. Other
parameters of the encoding process are adjusted for specific applications. These
include: inter/intra-decision threshold, motion vector search range, and intra-
refresh rate.
Compressed data is clocked out of the Compression Encoder through the Serial
Interface using an internal or external clock. This clock may be at any bit rate from
about 9.6 Kbps to 3 Mbps. Output clock and data are available at either TTL or
RS-422 signal levels. The compressed data stream is then presented to a
telemetry transmitter.
The encoder is designed to automatically adapt so as to always transmit the
highest quality picture under the constraints issued by the user. Since the system
is designed to operate in a PCM data link, the bit rate is predetermined and fixed
by the user. The encoder therefore varies picture quality by automatically adjusting
the quantizer step size. Once the maximum permissible value of quantization has
been reached, frame rate reduction is imposed. However, these degradation
situations are rare and occur only at very low transmitted bit rates, usually below
100 Kbps. Little or no picture degradation is noticeable in a well designed data
link.
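The adaptive behaviour described above — hold the bit rate fixed, raise the quantizer step (1 to 32) when too many bits are queued, and fall back to frame-rate reduction only at the maximum step — can be sketched as a simple control loop. The buffer-fullness thresholds below are illustrative assumptions, not the encoder's actual parameters.

```python
# Hedged sketch of fixed-bit-rate control: vary the quantizer step
# first, drop frames only when the maximum step is reached.

def rate_control(q, buffer_fullness, q_max=32):
    """Return (new_q, skip_frame) for one encoded frame."""
    if buffer_fullness > 0.75:           # too many bits queued: coarser step
        if q < q_max:
            return q + 1, False
        return q, True                   # at max step: reduce frame rate
    if buffer_fullness < 0.25 and q > 1: # headroom available: finer step
        return q - 1, False
    return q, False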
The inclusion of digitized video in a dedicated communication link poses only the
challenge of compressing the signal and adding the overhead of forward error
correction (FEC) and in some cases additional coding for the purpose of data
security. However, most data telemetering applications include the measurement
and transmission of other parameters in addition to the video. It is desirable to
merge all of the measured data, including the video, into a single communications
link thus saving hardware and operational costs.
Figure 4 depicts the Aydin Vector PCU-800 (Signal Conditioner and PCM
Multiplexer) including a VCE-800 video sub-system. Multiple video inputs are
capable of being merged with multiple measurement data sources. Space in the
transmitted PCM time division multiplexed (TDM) stream is automatically
allocated by the pre-mission setup and control activity governed by the Aydin
ADASWARE software. Using this software, the user defines the nature and
resolution of the data parameters, the aggregate PCM bit rate and the video
quality parameters (among other information). ADASWARE automatically
allocates the time slot period required for each video stream and provides the
necessary control information to the PCU-800.
Figure 4. Airborne configuration: cameras feed VCE-800 video encoders, transducers
feed the PCU-800 PCM encoder, and the combined PCM stream drives the transmitter.
At the receiving site a single PCM stream is received and separated into its
component parts for processing and display. Figure 5 shows a typical ground
station implementation. The received PCM data stream is first bit synchronized,
Viterbi decoded if applicable, and frame synchronized. The frame synchronizer
identifies the location in the stream of each measurement source.
Figure 5. Typical ground station: a GSR-200 receiver feeds the PCA-800 PCM decom,
which drives VCD-800 video decoders and monitors.
Most, if not all,
of the "conventional" measurements are extracted at this level for display and pre-
processing. The time slots containing video information are extracted and serially
applied to the VCD-800. Total decoding of the video and conversion to a NTSC,
PAL or S-Video output is accomplished by the VCD sub-system components. An
appropriate video monitor is connected and the system user is presented with a
clear reproduction of the original video.
CONCLUSION
Timothy B. Bougan
Science Applications International Corporation
ABSTRACT
This paper discusses the conceptualization, design, and performance of a unit to fill the
gap between the low-bandwidth analog channel module and the high-end signal
multiplexors. We will discuss how high-speed field-programmable gate arrays (FPGAs)
can be configured to provide a low-cost interface between the digital recorder and the
analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) to capture
and playback the analog signals. Our design focuses on achieving the maximum possible
bandwidth for each analog signal while ensuring that IRIG-A or IRIG-B timecode are
recorded simultaneously (so the analog signals can be later synchronized with their digital
counterparts). We have found that such a solution permits multiple analog signals from 400
KHz up to 3 MHz to be easily and inexpensively recorded on the current generation of
digital recorders. Our conclusions show that such a device can permit most telemetry sites
to transition completely to more reliable, cheaper, and easier-to-maintain digital recorders.
KEYWORDS
INTRODUCTION
The DDU-1004, or “Data Digitizing Unit”, was designed at the request of the USAF to
allow newly purchased Racal Storeplex digital recorders to replace aging analog recorders
in the USAF’s fleet of E-9A aircraft at Tyndall AFB, FL. The E-9A is a telemetry relay
aircraft whose mission is to intercept telemetry signals from drones, missiles, and aircraft
and relay the signals to ground stations for recording and/or processing. The E-9A also
carries telemetry operators to monitor and locally record selected signals. Most of the
signals are pulse code modulation (PCM) and are suited for direct recording on the digital
recorders. The operators do encounter several “legacy” analog signals, however, such as
pulse amplitude modulation (PAM), FM-FM signals, and pre-detect PCM signals that need
to be relayed and recorded beside the PCM streams. These signals range in frequency from
20 KHz to 2 MHz.
The Racal Storeplex recorder is a 51.2 megabit per second digital recorder. Racal provides
digital interface modules that allow bit-serial streams to be recorded at full speed (51.2
Mbs) or power-of-two fractions thereof (25.6 Mbs, 12.8 Mbs, 6.4 Mbs, etc). Racal also
sells analog modules for the Storeplex, but these boards have a maximum input
bandwidth of 45 KHz (far below the requirement for the E-9A).
Our challenge was to design a two-stage unit. The first stage needed to capture and
digitize multiple analog signals and feed them to the Storeplex at the maximum possible bit
rate (to maximize bandwidth). The second stage must accept a signal from the Storeplex
(on playback), detect the number of signals originally recorded, and accurately reproduce
the original signals.
THEORY OF OPERATION
Our first task was to determine the maximum signal bandwidth that could be recorded by
the Storeplex. We chose an ADC sample size of 8 bits (which met our resolution
requirements). Running at full speed (51.2 megabits per second), and assuming we
record only a single channel, the maximum theoretical recording rate is 51.2M divided by
8, or 6.4 million samples per second. Applying the Nyquist criterion, we arrive at a
maximum theoretical bandwidth of 3.2 MHz.
Our requirements, however, specified that we must record IRIG timecode (in analog form)
alongside the signals of interest. This signal requires a minimum of 19 KHz and can be
considered “low-speed” in comparison to our signals of interest. Furthermore, we needed
to design in the capability to select the number of high-speed channels to be recorded
(between one and four). We also realized that we would need to insert some
synchronization bits into the stream (so the playback board could determine where each
frame of data begins).
Since these sync bits represent “overhead” and directly reduce available bandwidth, we
decided to limit them to one bit per 17 bits of data (for a calculated overhead rate of
5.89%).
Given the above requirements and assumptions, we decided on a sixteen sample “frame”.
Each frame would contain eight sync bits, eight low-speed channel bits (one sample), and
sixteen high-speed samples from each of the selected channels (from one to four). To
prevent jitter we must ensure the samples are evenly spaced. We did this by inserting only
one bit of the sync byte before each high-speed sample, as such:
| SYNC Bit 7 | High-Speed Sample | SYNC Bit 6 | High-Speed Sample | SYNC Bit | ....
Each “high-speed sample” can be from 8 bits (if only one channel is being recorded) to 32
bits (if all four are being recorded). Obviously, we exhaust the sync pattern after
interleaving its eight bits with eight high-speed samples. The next eight high-speed
samples are interleaved with the eight bits from a single low-speed channel, as such:
| Low-Speed Bit 7 | High-Speed Sample | Low-Speed Bit 6 | High-Speed Sample | ....
A frame, then, can consist of 144 bits, 272 bits, 400 bits, or 528 bits (depending on
whether you want to record one, two, three, or four high-speed channels). Each frame is
contiguous... there is no “gap” between frames. Therefore, we can compute recording
bandwidth as follows:
The low-speed channel includes only one sample per frame. Correspondingly, we can
compute its recording bandwidth as follows:
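Working from the frame structure just described, the bandwidth computations can be reconstructed as follows (a sketch: 8 sync bits + 8 low-speed bits + 16 high-speed samples of 8 bits per selected channel, with bandwidth taken as half the sample rate per the Nyquist criterion; the exact formulas in the original figures did not survive in this text).

```python
# Frame-size and bandwidth arithmetic reconstructed from the frame
# structure described in the text.

RATE = 51.2e6                  # Storeplex bit rate, bits/s

def frame_bits(channels):
    """Bits per frame for 1..4 high-speed channels."""
    return 8 + 8 + 16 * 8 * channels      # sync + low-speed + 16 samples/ch

def high_speed_bandwidth_hz(channels):
    frames_per_sec = RATE / frame_bits(channels)
    return frames_per_sec * 16 / 2        # 16 samples/frame, Nyquist

def low_speed_bandwidth_hz(channels):
    frames_per_sec = RATE / frame_bits(channels)
    return frames_per_sec / 2             # 1 sample/frame, Nyquist
```

These figures agree with the text: one high-speed channel yields roughly 2.84 MHz of signal bandwidth, four channels roughly 776 KHz each, and the low-speed channel comfortably exceeds the 19 KHz needed for IRIG timecode even in the four-channel case.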
Determining the frame structure and recording bandwidths was the easy part. Realizing a
device that could digitize from two to five signals, construct such frames as described
above, and reconstruct the original signals on playback is a singularly non-trivial task when
you consider that the bit stream is 51.2 megabits per second. Using a general purpose
processor was out of the question (most cannot handle the rate, and those that can
are very highly priced). Our options were threefold:
1) Design the system using fast discrete logic
2) Design the system using custom semiconductors (ASICs)
3) Design the system using programmable logic (PALs or FPGAs)
We disregarded the second option as cost-prohibitive for the quantities we desired. Either
of the other two options would work, but to minimize required space and optimize post-
design flexibility, we opted to use field programmable gate arrays (FPGAs) for the bulk of
the digital design. In particular, we chose to use Xilinx 4005 FPGAs.
Our system also needs a user interface (to select the number of channels to record) and
higher-level computational power (to “detect” the number of channels recorded on
playback). We also had a requirement to read a rotary switch and respond to RS-232
commands from a remote controller. To add these capabilities without much overhead, we
decided to place a low-cost Intel 8751 microcontroller on each board. These
microcontrollers have built-in serial ports (for the RS-232 control) and sufficient I/O lines
to read a rotary switch. Additional I/O lines go to the FPGA to command the number of
channels to record or detect.
Figure 2 shows the basic layout of the record board; figure 3 shows the playback board.
[Figure 2: Record board block diagram — analog inputs, A/D converters, FPGA, and Intel
8751 controller with RS-232 control; data and control to/from the Racal.]
[Figure 3: Playback board block diagram — D/A converters, FPGA, and Intel 8751
controller with RS-232 control; data and control to/from the Racal.]
The most difficult task was designing the internal logic for the FPGAs. Using Xilinx-
supplied tools, we were able to specify the logic as schematic diagrams and compile it
into a bit pattern that is loaded into the FPGAs at run time. Although
the designs themselves are straightforward, the reader should be aware that FPGAs, unlike
discrete logic, can have propagation and routing delays that can vary from compile to
compile (since complex designs can be routed in an infinite number of ways). Designing a
set of state machines, registers, comparators, and shift registers that can run at 51.2 MHz
is difficult at best and requires detailed working knowledge of the operation of both the
routing tools and the target device. The fast clock allows only 19 nsec between transitions,
and state machines must process all decision logic, propagate to the next stage, and meet
setup times all within that narrow window.
The heart of the record FPGA is a series of shift registers, a 16-to-1 multiplexor, and an
extremely fast state machine. The sampled data from each high-speed channel is loaded
into a separate shift register (in line with the other channels). The state machine commands
a simultaneous load for each channel. It then shifts the bits out one at a time.
At the end of the high-speed data, the state machine enables the 16-to-1 multiplexor and
allows one of the 16 inputs to be appended to the serial stream. The inputs to the
multiplexor are the 8 sync bits (hard-wired) and the 8 bits from a register holding the low-
speed data (IRIG). See figure 4 for a block diagram of the serialization circuitry.
[Figure 4: Serialization circuitry — per-channel high-speed data shift registers feed a
16-to-1 MUX that selects among the hard-wired SYNC bits and the low-speed data register,
controlled by the “Nth” bit counter and the number-of-channels input.]
The 51.2 MHz clock comes from the Racal Storeplex. The “Nth-bit” counter and “Nth-
bit” signal are generated by the internal state machine. The “Nth-bit” refers to the last bit
in a subframe (i.e. the SYNC or low-speed data bit appended to the high-speed data). For
instance, if you are recording only one high-speed channel, the “Nth-bit” would be the
“9th-bit”, since one channel requires 8 bits. For two channels N=17, for three, N=25, and
for all four N=33. The “Nth-bit” counter is a 4-bit counter that cycles from 0 to 15 (and
determines which SYNC or low-speed data bit is appended to the stream).
The “Number of Channels” control comes from the Intel Microcontroller and is set by user
input or remote control.
Reversing the process for playback is more complex, since the playback unit must “detect”
how many channels were originally recorded. This involves sequentially shifting bits in
and attempting to reconstruct the sync pattern (which occurs in eight sequential “Nth”
bits). The playback board cycles through the possible numbers of channels and attempts to
recover sync for each one. If sync is not detected after an entire frame is processed, the
next channel count is attempted, and the process repeats until successful. Once found, the
sync pattern is predictable and is checked in every frame. If sync was erroneously
detected, or is subsequently lost, the playback board restarts the sync acquisition process
(see figure 5).
"Nth" Bit
Shift
Comparator
EQUAL? SYNC?
F/F
SYNC Bits
Check Sync
The microcontroller knows how many channels have been recorded because it sets “N”
(for the “Nth” bit) based on the number of channels it is testing for. When SYNC is
detected (the output from the shift register matches the hardwired SYNC pattern) the
microcontroller stops changing the number of channels to check (until sync is subsequently
lost).
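The acquisition search can be illustrated with a small software model (a hedged sketch; the real logic is an FPGA state machine, and the SYNC pattern shown is an assumed example):

```python
# Software model of sync acquisition: for each candidate channel count, pull
# out the interleaved "Nth" bits and see whether the first eight reproduce
# the SYNC pattern. Names and the pattern itself are illustrative.

SYNC_PATTERN = [1, 0, 1, 1, 0, 1, 0, 0]  # assumed 8-bit sync pattern

def nth_bits(bitstream, n_channels):
    """The interleaved SYNC/low-speed bits, assuming n_channels were
    recorded: one such bit every (8 * n_channels + 1) bits."""
    step = 8 * n_channels + 1
    return [bitstream[i] for i in range(0, len(bitstream), step)]

def detect_channels(bitstream):
    """Cycle through the possible channel counts; the correct count places
    the SYNC pattern in the first eight "Nth" bits of a frame."""
    for n in (1, 2, 3, 4):
        if nth_bits(bitstream, n)[:8] == SYNC_PATTERN:
            return n
    return None  # no sync found in this frame; retry on the next one
```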
CONCLUSIONS
In February 1996, the US Air Force deployed four DDUs in their E-9A aircraft at Tyndall
AFB, FL. The units have permitted the USAF to permanently retire the old analog
recorders on their E-9A aircraft and still maintain the ability to record legacy analog
signals along with PCM streams. Testing confirmed that the DDUs reach their predicted
maximum capability. Because the clock is provided by the Storeplex, and because the
system records data digitally, that predicted capability cannot be exceeded.
Michel Pureur
DASSAULT-AVIATION
Direction des Essais en Vol
BP 28
13801 ISTRES Cedex
FRANCE
Tel 04 42 56 77 77 Fax 04 42 56 70 03
KEYWORDS
Flight test engineer job, General data analysis, Post test analysis, Graphical User Interface
ABSTRACT
A sophisticated human interface for post-flight analysis can be developed with
UNIX-MOTIF technology.
Tests and measurements demand performance and reliability; SAINT-EX can meet these
requirements.
This paper describes the results of this approach in the development of DASSAULT
AVIATION’s SAINT-EX software.
INTRODUCTION
Flight tests are the final step in the validation of a new prototype aircraft.
Many tests are necessary for its certificate of airworthiness.
Many thousands of parameters, among the tens of thousands available, are selected by a
numerical acquisition system and gathered into P.C.M. messages, which are telemetered
and recorded on board during the test.
The parameters are taken from the P.C.M. messages by a hardware or software Decom
system, calibrated, and stored in the FX+ file system (see the FX+ paper by A. Becue).
SAINT-EX is a user-friendly software package for the consultation and analysis of the
parameters monitored during the test.
SAINT-EX reduces the time and cost of analysis; it contributes to increasing the quality
of the measurements and to optimizing the production of flight test results.
GENERALITIES
FUNCTIONALITIES
2D TRACING & PLOTTING
These interfaces present the available attributes for personalizing the graphic
representations. The modifications mainly concern the following resources:
• scale and position of the parameter on the graph,
• background color, grid color, trace color,
• size of symbols,
• fonts.
CALCULATIONS
An elaborated parameter is created by formulating a relation of equality between the
name of the new parameter and a numerical expression.
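As an illustration only (SAINT-EX’s actual expression syntax is not documented here; the helper and names below are assumptions), such a relation of equality might be evaluated like this:

```python
# Hypothetical sketch of creating an elaborated parameter from a relation of
# equality between a new name and a numerical expression over existing
# parameters. The syntax and helper are invented for illustration.

import math

def create_parameter(store, definition):
    """store: dict of parameter name -> list of samples.
    definition: e.g. 'SPEED_KT = SPEED_MS * 1.9438'."""
    name, expr = (s.strip() for s in definition.split("=", 1))
    n = len(next(iter(store.values())))
    samples = []
    for i in range(n):
        env = {k: v[i] for k, v in store.items()}
        env["math"] = math
        samples.append(eval(expr, {"__builtins__": {}}, env))
    store[name] = samples
    return samples

store = {"SPEED_MS": [10.0, 20.0]}
create_parameter(store, "SPEED_KT = SPEED_MS * 1.9438")
```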
SAINT-EX displays the values of the parameters at the indicated time (by placing the
mouse pointer and pressing the middle button); a vertical line marks the cursor on the
graph. The data can be investigated by moving the mouse while pressing the middle
button, which automatically updates the values of the parameters.
The information being studied can be saved in a note-pad at each step, to make a
synthesis later on.
NOTE PAD
EXPORT
LISTING
A specialized interface allows the three types of listing to be configured:
• listing of all occurrences of the selected parameters,
• listing of occurrences corresponding to changing values of selected parameters,
• listing of occurrences corresponding to changing values of selected parameters, with
substitution and elaboration of explicit information.
CONCLUSION
SAINT-EX has been developed to perform the particular tasks a flight test engineer
faces.
By the end of the year, SAINT-EX will be connected with another product from the
DASSAULT AVIATION research department, “ELFINI”, which specializes in
time-frequency analysis, filtering, shock response spectra, stress analysis and damage
calculation.
The highly visual environment of SAINT-EX will further reduce data processing times
and costs.
ACKNOWLEDGEMENTS
The author wishes to thank Mr Jacques Desmazures, Jean Louis Montel, Daniel Faure and
William Vitte for their support, and Mrs Caroll Gacoin, Mr Olivier Champin, Francis
Devillers, Laurent Clouchoux, Michel Bongiorno, Jean Luc Fontaine, Stephan Fructus and
Mrs Alexandra Canal for their contributions.
FX+
Alain Becue
DASSAULT AVIATION
Direction des Essais en Vol
BP 28
13801 ISTRES Cedex
FRANCE
Tel 04 42 56 77 77 Fax 04 42 56 70 03
KEYWORDS
Flight test engineer job, General data analysis, Post test analysis.
ABSTRACT
With the technological evolution of flying equipment, computing power and storage
capacity, we need to take a new view of the methods of acquiring, storing and archiving data.
HISTORY
For many years the aircraft industry has had problems acquiring, exploiting and storing
the data coming from aircraft flight tests.
The old technology was such that we could only record sequentially on magnetic tapes;
the results were then extracted from these tapes as graphs and tables on paper. All
off-line computing likewise started from recordings on magnetic tape. The drive for
efficiency led users to ask for interactive toolkits that produce exploitation results more
quickly. This is possible today because of the development of computer processing. We
had to review the sequential format of data storage and adopt direct-access data
organisation. In the past we used synchronous, cyclic messages with a fixed organization;
today we must treat messages with a fixed or variable organization that are synchronous,
asynchronous or even random.
The DASSAULT AVIATION methodology is based on :
• first, leading the flight test from the ground by using telemetered information and
processing it with a computer. All applications used for this exploitation are named "real
time".
During a real-time application it is very important to be able to follow the history of all
information, cycle by cycle; a sequential recording method is very well suited to this
criterion. A data block in the sequential organisation contains an exact number of
acquisition cycles (IRIG long cycle, DANIEL cycle or CE83 cycle). This data organization
is also very good for immediate replay or off-line replay.
Off-line exploitation
For the off-line phase we faced several problems. We had to give users data organizations
exhaustive enough to allow fast exploitation, and we had to store ever larger volumes of
data without overflowing the direct-access disc storage. Finally, we had to be able to
process data from many sources (multi-flow).
To save disc space we store data in as compact an organization as possible. This
organization, named BPCM, stores the data in its original form: long cycles of PCM
messages are stored without any processing operation or validation. This organization
retains the data in compact form (without software compression) and without any
modification of the original information. With each BPCM file there is another file which
contains all the rules used to extract and calibrate the information.
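The BPCM idea, raw cycles in one file and extraction/calibration rules in a companion file, can be sketched as follows (the file layout and field names are invented for illustration):

```python
# Sketch of BPCM-style storage: PCM long cycles are written untouched, and a
# companion ".rules" file records how to extract and calibrate parameters.

import json

def write_bpcm(path, cycles, rules):
    """cycles: list of raw PCM long cycles (bytes); rules: dict of
    extraction/calibration rules, one entry per parameter."""
    with open(path, "wb") as f:
        for cycle in cycles:
            f.write(cycle)  # stored without processing or validation
    with open(path + ".rules", "w") as f:
        json.dump(rules, f, indent=2)

write_bpcm("/tmp/flight042.bpcm",
           [bytes(64), bytes(64)],  # two dummy 64-byte cycles
           {"ALT": {"word": 3, "bits": 12, "scale": 0.25, "offset": 0.0}})
```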
Preliminary remarks
For the off-line exploitation, the user must have all the parameters produced by the
aircraft in flight. In most cases there are more of these than the parameters sent by the
aircraft during the flight and used in the real-time phase.
For each parameter we must have all the samples, with exact dating of each sample,
which sometimes has to be precise to the microsecond; this is very important for random
data. In some cases it is important to compare results from several aircraft, several
flights or several flight campaigns. We must also be able to use data from many sources:
fighter aircraft, target aircraft, missiles and so on. For these reasons we had to devise a
new data organization able to integrate the new needs using all the new technological
progress.
A few years ago we created a data organization named FX, used on ENCORE and IBM
computers. It accessed the samples of a parameter through three levels of indirection and
one level of data. Data was held as physical values stored on four bytes. The dating of a
sample was obtained by adding an eight-byte base time per data block to a four-byte time
increment associated with the sample. This organization was limited: data could only be
stored in a four-byte word, and the dating had an increment of one millisecond.
When we abandoned computers with proprietary operating systems and began to use
UNIX computers, we created a new data organization named FX+. This new storage
method is tied to the file structure of the UNIX system; we cannot use this data
organization on computers not equipped with UNIX.
The main advantages of this organization are described below.
A greater compactness.
Information is stored after decommutation but in its original form (generally two bytes). In
this way we save half the disc space and preserve the integrity of the data. Nevertheless,
the data can be stored in a standard form. The data processing (recentring, turn-around,
concatenation of bit fields, ...) and conversion into physical values must then be done in
the exploitation phase; this operation is very easy because of the CPU power of a UNIX
workstation. In graphics processing this calibration is comparable to the scaling process
(see the SAINT-EX presentation by M. Pureur). In all cases the data organization contains
all the information needed to calibrate each parameter. It is therefore very easy for
partners to exchange files using this data organization, which carries all the information
needed to process the data.
To reduce the volume of the data, a new concept is introduced: when it is not necessary to
store each sample of a parameter and the value is constant over a time zone, a flag can
indicate that only the changes of the data are stored. We can also apply this concept after
eliminating the non-significant bits of the data before testing for variation.
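A minimal sketch of this change-only storage, with non-significant bits masked before the comparison (names are illustrative):

```python
# Change-only compression: mask off non-significant low bits, then keep a
# sample only when the masked value differs from the previous kept value.

def compress_changes(samples, times, ignore_bits=0):
    """Return (time, value) pairs only where the significant bits change."""
    mask = ~((1 << ignore_bits) - 1)
    kept, last = [], None
    for t, v in zip(times, samples):
        if last is None or (v & mask) != (last & mask):
            kept.append((t, v))
            last = v
    return kept

# Treating the two low bits as noise, only genuine changes are stored:
kept = compress_changes([100, 101, 102, 110, 110], [0, 1, 2, 3, 4], ignore_bits=2)
```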
The dating of each parameter uses a coded relative time held in a two-byte integer, which
again saves half the disc space. During exploitation the real time is recomputed; thanks to
the speed of the computer this is transparent for the user. In this way we obtain a very
large gain in disc space while keeping direct access to the data.
The FX+ organisation preserves the integrity of the data because the true binary state of
the information is stored. If there is a later change in the processing or in the calibration
of the data, there is no problem.
The data architecture accepts information of various lengths. It is possible to store
information such as DANIEL areas, PCM cycles, RADAR maps, etc. The organization
includes technical blocks which contain the length of the information and the dating unit.
In most cases we use the millisecond for dating, but we can use the nanosecond without
loss of resolution if the time span is less than one day.
When we compute parameters after the initialization of the data organization, we can store
them as integer, short integer, character or IEEE float on four or eight bytes if great
precision is necessary. In the technical blocks of the data organization we can also find, for
each parameter, the physical unit and (optionally) realistic minimum and maximum values.
The coding and the dating of the samples are personalized parameter by parameter.
Direct access.
The FX+ organization is highly optimized and built to be implemented in the UNIX
environment, using the input/output interface of UNIX file access to permit direct access.
We use all the power of the UNIX file administrator and all the possibilities of the
different levels of the tree-like structure of UNIX directories.
UNIX is based on the use of files, and it has many efficient tools to handle these files. In
the UNIX world we chose to pack the samples and dating of each parameter into its own
independent file organization.
We created a directory named FX+ with subdirectories for the different aircraft. In each
subdirectory there is one subdirectory for each flight number, and in the flight-number
directory a subdirectory for each parameter.
When we write the full access path of a parameter using the tree-like structure described
above, we have all the information about the origin of the parameter.
The name of the parameter in fact combines the proper name and the PCM message
which delivered it. Each parameter has an associated text file which contains a keyword
description of the parameter. When the parameter is primary (not a part of another), it is
associated with six types of FX+ files which describe and receive all the information.
There are:
• a file which describes all the time zones over which the parameter can be accessed,
• a file which contains the names of all the secondary parameters attached to this
parameter,
• for each time zone, a subdirectory with an indexation file used for direct access to the
file of values and the dating file.
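The tree-like layout can be sketched with ordinary UNIX paths (the directory and file names below are assumptions for illustration, not the real FX+ naming):

```python
# Sketch of the FX+ directory tree: FX+/<aircraft>/<flight>/<parameter>,
# with companion files and per-time-zone subdirectories. All names invented.

import os

def parameter_dir(root, aircraft, flight, parameter):
    """The full path itself records the origin of the parameter."""
    return os.path.join(root, aircraft, flight, parameter)

def init_parameter(root, aircraft, flight, parameter, time_zone):
    d = parameter_dir(root, aircraft, flight, parameter)
    os.makedirs(os.path.join(d, time_zone), exist_ok=True)
    # Companion files described in the text (contents elided here):
    for name in ("zones", "secondary", "description.txt"):
        open(os.path.join(d, name), "a").close()
    return d

d = init_parameter("/tmp/FXplus", "AIRCRAFT_A", "flight_0042", "ALT.PCM1", "tz_000")
```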
At the beginning the FX+ organization was created for one aircraft, one flight and one
PCM message, with parameters stored for particular time areas.
We can complete this organization in many ways. We can add new parameters which are
in the same PCM message, and we can also add parameters from other PCM messages or
a new time zone not initially introduced in the organization. At any time we can add new
parameters and new time zones.
When we want to compare information coming from various aircraft and flights, we use
all the possibilities of the UNIX file system.
Many users can access this organization at the same time. Data is stored once and can be
accessed by everybody, even during updating. It is possible to protect read access for a
part of the files, and the writing to an external medium (optical disc, magnetic tape, ...)
can be selective.
CONCLUSION
This organization is modern, powerful and economical. Data of various formats can be
handled, different sources of information can be multiplexed, and data transfer and
updating are very easy.
Kim L. Windingland
Lockheed Martin Telemetry & Instrumentation
ABSTRACT
The ever-increasing power of PC hardware combined with the new operating systems
available make the PC an excellent platform for a telemetry system. For applications that
require multiple users or more processing power than a single PC, a network of PCs can be
used to distribute data acquisition and processing tasks. The focus of this paper is on a
distributed network approach to solving telemetry test applications. This approach
maximizes the flexibility and expandability of the system while keeping the initial capital
equipment expenditure low.
KEYWORDS
Windows, PC, Personal Computer, Graphical User Interface (GUI), Spreadsheet, Low
Cost, Network, LAN, Windows 95, Windows NT, Satellite Command.
INTRODUCTION
Since the introduction of the first PC Windows-based telemetry data acquisition system [1],
Windows-based solutions have matured, and it has become widely understood that a
large percentage of applications can be served by these low-cost solutions.
This paper presents an approach that was implemented for the development of the Visual
Test System (VTS). The VTS is a multi-stream telemetry data acquisition system using
standard Windows-based PCs to provide decommutation tasks and act as a LAN server for
distributing selected data to a number of client PCs. The client PCs provide remote
display, archiving, and data analysis capabilities. The system makes exclusive use of
industry-standard hardware, software, and network interfaces to provide a cost-effective
telemetry solution.
APPROACH
As shown in Figures 1 and 2, the VTS is designed for telemetry data acquisition, avionics
data acquisition, complex multi-channel signal analysis, data reduction, and simulation. It
is, in fact, a complete ground station. The approach in the system design is to draw on a
family of services and peripherals that are configured for the specific task at hand and to
provide a Software Development Kit (SDK) that provides the ability to implement
solutions for unique applications. This true open architecture system enables users to add
and configure other hardware and software modules from a variety of vendors to suit their
application requirements. It is based on the concept of a Distributed Network Architecture
(DNA), which takes advantage of the following significant emerging technologies:
• Open Systems — Exclusive use of industry-standard hardware, software, and
network interfaces provides immediate cost-effective solutions with ease of future
expansion as technologies advance.
• Network Switching — Taking advantage of switched Ethernet, 100 Mbps
Ethernet, and/or ATM provides the performance fabric(s) to allow simultaneous
intercommunication between functional elements using well supported network
standards.
• Network Operating Systems — Designed from the ground up to support Windows
3.1 and Windows NT, the system is compatible with a wide assortment of
Windows applications and is accessible to an extensive array of network servers.
[Figure 1: VTS architecture — inputs (PCM/sim, analog/digital, time), compute servers,
disk and tape storage, graphical displays, and strip charts, all interconnected through a
network switch.]
[Figure 2 diagram: two Pentium rackmount PC servers (Server 1 and Server 2), each
carrying analog ports, bit syncs, decoms, and IRIG modules, connected by Ethernet.]
Figure 2. Example of an Existing System Using Two Servers and Ten Clients
COMPONENTS
Connected to the network are all the necessary components for high-speed data acquisition
of multiple streams. When assembled, these components solve all problems of data
acquisition. The system can archive, display, process, plot, and play back data in real time.
Individual components are discussed in the following paragraphs:
• Front Ends acquire multiple streams of data (PCM telemetry, analog, digital, voice,
video, MIL-STD-1553, etc.) with IRIG time-stamping capability.
• File Servers archive data to storage media and provide an interface for playback.
• Compute Servers provide intense data processing.
• Satellite uplink/downlink capabilities provide command control and verification.
• Displays can be updated with real-time, archived, and processed data.
• Hard Copy peripherals produce time-correlated data for strip chart recorders,
plotters, and printers.
• Third-Party Vendor networking products complete the system with general-
purpose peripherals (fax/modem servers, e-mail servers, database servers, Internet
connectivity, etc.).
The use of off-the-shelf components allows a given user to scale and expand the system
depending on current requirements. New requirements can be added in the future simply
by adding components to the existing system. Various components can be mixed and
matched to complete the system and meet specific requirements. The system can start out
as a simple chassis with a single-stream capability and later be expanded to a system that
supports multiple streams with multiple users.
The Front Ends draw on a library of existing modules for high-performance data
acquisition. During data acquisition, the front ends broadcast acquired data over the
network for all the other sections. Decommutators, demodulators, RF receivers, bit
synchronizers, video, A/D converters, and IRIG time readers are used to acquire and lock
onto any telemetry stream. Many third-party analog and digital modules accomplish
complex multi-channel signal analysis. Data acquisition modules exist for MIL-STD-1553,
ARINC 429/629, and STANAG 3838/3910 avionics buses. Other input types can be used
alone or combined in the system. In all cases, multiples of each input type can be used.
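As a sketch of how a front end might broadcast acquired data to the other network components (the actual VTS transport and packet layout are not specified at this level; the addresses and framing here are assumptions):

```python
# Hypothetical UDP broadcast of acquired frames. Each packet carries a
# sequence number so clients can detect lost packets.

import socket
import struct

def broadcast_frames(frames, addr="255.255.255.255", port=5005):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    try:
        for seq, frame in enumerate(frames):
            sock.sendto(struct.pack("!I", seq) + frame, (addr, port))
    finally:
        sock.close()
```

Clients would bind a UDP socket on the same port and strip the four-byte sequence header from each datagram.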
File Servers archive and play back data. Any industry-standard storage device is
acceptable including digital tape, analog tape, hard disk, and removable media. Larger,
faster storage devices that become available as technology advances can easily be added to
the system to improve throughput.
Strip chart recorders, color plotters, and laser printers produce hard copies for further
analysis. In today's market, several vendors provide solutions. The system easily interfaces
to these devices using industry standards for networked communication. As cheaper, faster
peripherals become available, they too can easily be integrated into the system.
Third-party networking products may be used to further enhance the overall data
acquisition system. These would include, but are not limited to, fax/modem servers, e-mail
servers, database servers, Internet connectivity devices, and many others. The system’s
advanced networking capabilities allow multiple sites to simultaneously monitor data (e.g.,
a system could be developed that monitors data in Orlando and Los Angeles at the same
time).
All system components are accessible through the main system page which shows system
status at a glance. This “Heartbeat” page is shown in Figure 3.
A key advantage of the VTS is that the system matures as technology advances. The
system already has a built-in path for upgradability to tomorrow’s faster hard disks, color
plotters, and high-speed processors. To illustrate the evolution of a system, consider the
following scenario:
A user has a single-stream system for displaying data only. This system consists of a PC,
the front end (PC bit sync, decom, etc.), and a monitor. Later, if that user needs to archive
data, he/she can simply add a PC hard disk. If the user needs to add 1553 data, he/she can
simply add a module to the front end or add a new front end. If more display or processing
capabilities are required, additional displays and compute servers can be added to the
network. Eventually, the system evolves into a multi-stream system. The system archives
all data and displays data at separate locations. In time, the system would expand to
include several PCs with the necessary front ends all connected over a switched network.
On the network would be several file servers that handle archiving, several terminals that
display data, compute servers for performing intense real-time processing, and strip chart
recorders for producing strip charts. During operation, data is acquired in parallel and
distributed over the network in real time for all devices.
SUMMARY
The VTS represents the next generation in PC-based data acquisition systems. Its
Distributed Network Architecture takes advantage of emerging technologies. As an open
system, the VTS is an immediate cost-effective telemetry solution with a built-in plan for
future expansion as hardware, software, and network interface technology advances. The
use of switched networks allows simultaneous communication between functional
elements using well supported network standards. In addition, the system is designed to
support Windows 3.1 and Windows NT, giving users compatibility with a wide assortment
of Windows applications.
REFERENCES
1. Windingland, Kim and LaPlante, John, “Telemetry System User Interface for Windows,”
Proceedings of the International Telemetering Conference, Volume XXIX, 1993, pp. 571-580.
2. Cardinal, Robert, “Telemetry Enterprise Switched Networking,” Proceedings of the
International Telemetering Conference, Volume XXX, 1994, pp. 140-147.
Open Systems Architecture in a COTS environment*
Alan R. Stottlemyer
Kevin M. Hassett
ABSTRACT
A distributed architecture framework has been developed for NASA at Goddard Space
Flight Center (GSFC) as the basis for developing an extended series of space mission
support data systems. The architecture is designed to include both mission development
and operations. It specifically addresses the problems of standardizing a framework for
which commercial off-the-shelf (COTS) applications and infrastructure are expected to
provide most of the components of the systems. The resulting distributed architecture is
developed based on a combination of a layered architecture, and carefully selected open
standards. The layering provides the needed flexibility in mission design to support the
wide variability of mission requirements. The standards are selected to address the most
important interfaces, while not over constraining the implementation options.
KEYWORDS
INTRODUCTION
The architecture must enable maximum re-use for system components, and
enhanced ability to integrate such components effectively and efficiently. Because the
number of such space missions is relatively low, component re-use only within NASA
missions would not achieve the cost reductions needed, and a high level of use of
commercial software and hardware is required. Further, the architecture must maximize the
available choices and avoid dependence on any single vendor for critical components.
* This paper is declared a work of the U.S. government and is not subject to copyright protection in the United States.
Existing standards for integration have addressed infrastructure levels, but provided little
definition of application-level interfaces and functionality. Past experience demonstrates
that standardization at the application level frequently had the effect of severely limiting
flexibility and the use of COTS components, because the standards were not widely supported.
The Open Systems standards need to allow greater freedom to choose computer platforms,
information repository and application products, and commercial mechanisms supporting
application interconnection and data transport.
Over the last year and a half, the architecture has evolved rapidly and has been validated
through an associated prototyping program. The resulting architecture is presented in the
following sections, with complete documentation in reference 1.
The system architecture combines three primary features. First, a layered design is used
which enhances flexibility by information hiding. Second, the software client-server
concept is preserved as a characteristic of a specific component interaction, rather than of
the component as a whole. Third, the hardware architecture is based on both three-tier
client-server and peer-to-peer configurations.
The layered design is shown in Figure 1. The design is based on a combination of the
Open System Interconnect (OSI) communications reference model and the National
Institute for Standards and Technology (NIST) Open System Environment (OSE)
reference model. The OSI model is used because of the distributed system architecture.
The OSE model provides a good foundation for mapping the structure within a computer
platform. The combination is necessary to provide the overall structure, with some
elements mapped to specific OSI layers and others to OSE layers.
On the left side of the diagram are shown the OSI reference layers (in parentheses), and
the architecture elements mapped to them. On the right, the OSE model of platform
environment and platform services is followed. The “network transparency” element
serves to integrate the two into a common application program interface (API).
[Figure 1. Layered design: network applications and platform-dependent applications (OSI layer 7); network transparency middleware (OSI layers 5 and 6); data transport, TCP and UDP (layer 4); Internet Protocol (layer 3); O/S services; platform hardware resources. Platform services span the OSE side of the diagram.]
Some applications will be constructed to use platform specific interfaces. In some cases
this is done to enhance application technical performance. In other cases it results from
either market pressures, or simply the lack of an interface to network standards. Such
applications are supported by a standard interface to platform services, such as defined by
the NIST Application Portability Profile (reference 2), X/Open UNIX, or Microsoft
Windows WIN32 API.
The software application relationships are client-server for a given interaction. However, a
software component can support both client and server interactions, and simultaneously
when multithreaded. Some components will be constructed as pure servers or clients, but
others will support both relationships. In general, both simple client-server (C/S) and
three-tier application designs are expected. Additionally, some applications in our domain
are better designed as data driven pipe and filter (P&F) style. Figure 2 illustrates these
types of application relationships.
[Figure 2. Application relationship types: an analysis client and a Web browser with C/S connections to an HCI component, and a pipe-and-filter chain of unit conversion, event filter, and trend management feeding an analysis client.]
The hardware architecture combines client-server models with peer-to-peer. Some
components, e.g., network servers, database servers and print servers, will be server
platforms in larger systems, but may be combined with other allocated functions in
standard workstations for smaller systems. Similarly, there will be application-algorithm
processing platforms in larger systems, built to enhance the capability for massive
computations required, such as image processing from earth sensing instruments. There are
likely to be true client platforms, in the form of either individual PCs or X-terminals, for all
systems. However, for many smaller systems, there will be a common workstation
platform for all except remote clients. In this latter case, the peer-to-peer network may
work better.
Standards selection is both critical to success, and a potential obstacle. Standards are
needed to support changes in system configurations, but use of poorly supported standards
may, in fact, preclude access to COTS components. The approach to establishing
standards was to establish a very small core of widely supported, mandatory standards for
global compatibility. The mandatory standards are supplemented by a more extensive set
from which to select for a given system implementation. This provides compatibility within
a mission system, and also the flexibility to support the differing requirements between
mission systems.
The core standards include the Internet Protocol (IP) suite for network support, and both
UNIX and Windows NT for platform support. The overall intent was to keep the number
of mandatory standards limited to those with truly global support, and to select only those
for which there was a clear benefit to establishing a standard. The flexibility within the set
has been given at least preliminary trial in the prototyping environment. A complete
discussion of our standards approach is provided in references (3) and (4).
IMPLEMENTATION CONCEPT
The resulting designs can range from a large network of high performance platforms
distributed around the world, to a few PC platforms attached to a local radio frequency
subsystem with Web connections for remote access. By using the capability selection
model, the low end systems can be built with low cost COTS components to match their
needs, and, where appropriate, have common components with the largest of systems.
Initial prototypes have provided validation of some areas of the approach. Implementations
using COTS products with message and file servers for network transparency have been
successful. A small set of prototypes using a CORBA integration mechanism have been
performed, linking both COTS and legacy applications. The CORBA integration has
proven successful in both function and cost. Presentation of these results in full is provided
in reference 5.
Avoiding over-specification has also proven valuable. COTS products not identified
specifically with space applications have been productively incorporated into the prototyping
environment. Analysis support and data manipulation packages have proven useful in the
same way that spreadsheets can be used for many applications. Products covering non-
UNIX platforms have been identified in the prototyping effort, validating this approach to
low cost mission system development.
CONCLUSION
Use of this architecture for constructing mission systems provides clear benefits.
Specification of an infrastructure based on IP, and on UNIX and Windows NT platforms,
provides a common foundation for which there is a rich set of available commercial
products. Use of the layered approach, and the network transparency component, has led
to lower cost integration approaches, based on commercial transparency support, and with
direct application integration. Standards for network transparency support, e.g., DCE,
CORBA and Java run-time services, are proving to be usable with real commercial
support, and with available applications using the interfaces.
The initial prototyping experience has been very successful, indicating the viability of the
architecture. However, much work remains before we understand the full implications of
the approach. More prototypes and a pilot project are planned to enhance our knowledge.
Identification of more effective products and those to cover unusual needs will occur in
this process. Tuning of both product selection and the implementation process will be
needed to reliably achieve the cost and schedule goals.
ACKNOWLEDGMENTS
REFERENCES
2. Application Portability Profile, OSE/1 Version 2.0, NIST Special Publication 500-
210, 1993
3. Stottlemyer, Alan, and Hassett, Kevin, “The Role of Standards in COTS Integration
Projects,” Proceedings of 32nd Annual International Telemetering Conference,
San Diego, California, October, 1996.
5. Scheidker, Eric, Rashkin, Robert, Pendley, Rex, Werking, Roger, Bonnin, Magda,
Copella, John, and Bracken, Mike, “IMACCS: A progress report of
NASA/GSFC’s COTS-based ground support systems, and extensions into new
domains,” Proceedings of 32nd Annual International Telemetering Conference,
San Diego, California, October, 1996.
COMMERCIAL-OFF-THE-SHELF TELEMETRY FRONT-END
PROTOTYPING
Keith Hogie
Jim Weekley
Jeremy Jacobsohn
ABSTRACT
The world of data communication and networking has grown rapidly over the last decade,
and this growth has been accompanied by the development of standards that reflect and
facilitate the need for commercial products that work together in a reliable, robust, and
coherent fashion. To a great extent this commercialization, with its increasing performance
and diminishing cost, has not been adapted to the data communication needs of satellites.
As budgets and mission development and deployment timelines shrink, space exploration
and science will require the development of standards and the use of increasing amounts of
off-the-shelf hardware and software for integrated satellite ground systems.
The Renaissance project at NASA/Goddard Space Flight Center has engaged in rapid
prototyping of ground systems using off-the-shelf hardware and software products to
identify ways of implementing satellite ground systems "faster, better, cheaper". This paper
presents various aspects of these activities, including issues related to the configuration
and integration of current off-the-shelf products using telemetry databases for existing
spacecraft, an analysis of issues related to the development of standard products for
satellite communication, tradeoffs between hardware and software approaches to
performing telemetry front-end processing functions, and proposals for future standards
and development.
KEYWORDS
INTRODUCTION
In spring 1995, the Mission Operations and Data Systems Directorate at NASA/GSFC
initiated an effort to construct a satellite ground system in 90 days using commercial-off-
the-shelf (COTS) hardware and software. The goal of this activity was to demonstrate that
a small group of experienced engineers could integrate various pieces of commercial
hardware and software into a viable system that would meet the operational needs of an
existing spacecraft at a fraction of the cost required to develop a more traditional (i.e.,
custom) solution. The system was completed within the 90-day time frame and processed
live data from the SAMPEX spacecraft simultaneously with the operational systems. The
real-time portion of the system supported the receipt of telemetry and tracking data and
provided state modeling capabilities for real-time satellite status monitoring and
commanding. The off-line portion of the system supported orbit and attitude determination
and additional data analysis capabilities.
This prototype provided a good vehicle for investigating the configuration and integration
of current COTS products to construct a ground system for a real spacecraft. For the initial
configuration, the Loral Test and Instrumentation Systems (LTIS, now Lockheed Martin
Telemetry and Instrumentation (LMTI)) model 550 communication front-end was used.
Since then other front-end systems have been inspected, and results of those experiences
are presented below.
FRONT-END PROCESSING
The fundamental capabilities of satellite front-end processing can be stated in the following
three areas:
1) Receive data from the satellite data sources. Typically this data is presented over
standard interfaces in some predefined format (RS-422, TTL, Ethernet, Nascom block,
CCSDS frame).
2) Isolate particular data fields and place the data in an acceptable form (CCSDS
packet extraction, telemetry frame extraction, bit/byte reversal, engineering unit
conversion).
3) Make the data available to other applications (custom application programming
interfaces, UDP/TCP sockets).
A transmission capability is also required in order to transmit commands and data to the
spacecraft.
For our satellite interfaces, the majority of our work was done with Nascom 4800-bit
blocks. The blocks were received over RS-422 serial interfaces at rates of 224 Kbps and 1
Mbps. The Nascom 4800-bit blocks provided a basic mechanism for delivering a satellite
bitstream from the ground antenna to our front-end. The satellite data inside the blocks
was then extracted and passed through another synchronizer to locate the CCSDS data
frames from the spacecraft. CCSDS packets were then extracted from the frames, and the
packets were decomposed into their individual telemetry fields such as battery voltages,
currents, temperatures, etc. With the LMTI 550 front-end, this processing was
accomplished in the front-end and current value tables were passed to workstations for
processing. With the other products, the extraction of telemetry fields was performed in
software along with the other ground system applications on the workstations.
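The packet decomposition step can be illustrated with the CCSDS packet primary header, whose six-byte layout (version, type, secondary header flag, application process ID, sequence flags and count, and packet length) is fixed by the CCSDS packet telemetry recommendation. A minimal C parser:

```c
/* Parse the 6-byte CCSDS packet primary header. The field layout
   follows the CCSDS packet telemetry recommendation; any sample
   bytes are supplied by the caller. */
#include <stdint.h>

struct ccsds_primary_header {
    unsigned version;     /* 3 bits  */
    unsigned type;        /* 1 bit   */
    unsigned sec_hdr;     /* 1 bit: secondary header flag */
    unsigned apid;        /* 11 bits */
    unsigned seq_flags;   /* 2 bits  */
    unsigned seq_count;   /* 14 bits */
    unsigned length;      /* packet data length field (data bytes - 1) */
};

static void ccsds_parse_header(const uint8_t *p, struct ccsds_primary_header *h)
{
    h->version   = (p[0] >> 5) & 0x07;
    h->type      = (p[0] >> 4) & 0x01;
    h->sec_hdr   = (p[0] >> 3) & 0x01;
    h->apid      = ((unsigned)(p[0] & 0x07) << 8) | p[1];
    h->seq_flags = (p[2] >> 6) & 0x03;
    h->seq_count = ((unsigned)(p[2] & 0x3F) << 8) | p[3];
    h->length    = ((unsigned)p[4] << 8) | p[5];
}
```

Once the header is parsed, the packet's APID selects the decommutation map used to locate individual telemetry fields.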
We note that the interfaces between the telemetry processing components and the ground
system applications varied widely. Each product had a different application interface, and
some programming was required in each instance in order to integrate the front-end with
our other applications. We are currently examining the use of the Common Object Request
Broker Architecture (CORBA) as a mechanism for hiding these differences and providing
a more consistent interface between the front-end functions and the application programs.
FRONT-END ARCHITECTURES
Our activities included work with the following four front-end products: the LMTI 550,
OS/COMET, Veda ITAS/Omega, and LabView.
The most complete prototype was constructed with the LMTI 550 front-end, which was
configured to receive live data from the SAMPEX spacecraft and fully process over 3000
telemetry parameters. The other three products were used in later prototypes which were
not as fully configured but were performed to compare and contrast the various products.
LMTI 550
The LMTI 550 hardware front-end utilizes a dual-bus architecture built on a VME chassis
using VXworks. It passed data to other ground system applications executing on HP and
Sun workstations using its data gather application program interface (API). The front-end
chassis contained the following cards:
• Serial interfaces for RS-422 and TTL signals, frame synchronization, CRC checks
• LMTI field programmable processors (FPP), basically SPARC 10 processors, which
were configured to perform packet extraction, field extraction and engineering unit
conversion
• Controller board that provided an Ethernet interface between the front-end and the
workstations
• SCSI interface and hard disk to support data logging and retrieval on the front-end
workstation
[Figure: LMTI 550 configuration — sync and process stages in the front-end chassis, with data gather and control/muxwrite interfaces to the workstation.]
The LMTI processes data by passing tag and data values on its high speed multiplex bus.
Algorithms resident on the LMTI 550 FPPs are triggered as particular tagged data values
appear on the bus. They receive the data value, perform various operations, and place the
results back on the bus under a new tag value. These algorithms can be selected from
standard libraries, or custom algorithms can be written in C. We configured the tagged
data flows and algorithm processing on the LMTI 550 by developing PERL scripts to read
spacecraft telemetry definitions from a satellite project database (PDB) and to generate an
ASCII configuration file that could be imported by the LMTI 550. We also modified the
source code for some LMTI 550 standard algorithms to produce custom algorithms.
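The translation from PDB records to configuration lines can be sketched as follows. Our actual scripts were written in PERL against the LMTI 550's own import format; both the record layout and the "PARAM ..." output syntax below are invented for illustration only.

```c
/* Sketch of database-to-configuration translation. The pdb_param
   layout and the "PARAM ..." syntax are invented for illustration;
   the real work used PERL scripts and the LMTI 550 import format. */
#include <stdio.h>

struct pdb_param {              /* one telemetry definition from the PDB */
    const char *mnemonic;
    int         byte_offset;
    int         bit_length;
    double      scale, offset;  /* eu = raw * scale + offset */
};

/* Format one configuration line into buf; returns length written. */
static int format_param(char *buf, int bufsize, const struct pdb_param *p)
{
    return snprintf(buf, (size_t)bufsize,
                    "PARAM %s OFFSET %d BITS %d EU %g %g",
                    p->mnemonic, p->byte_offset, p->bit_length,
                    p->scale, p->offset);
}
```

Generating configurations from the project database, rather than entering them by hand, was what made reconfiguring the front-end for thousands of parameters tractable.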
A current value table of telemetry values was generated on the LMTI 550, gathered on the
workstations, and fed to a data server to distribute to other applications for displays and
spacecraft state modeling. Some displays were created with the LMTI display capability.
Both LMTI and our state modeling product used Dataviews as their display engine which
supported similar displays with either package.
The LMTI 550 provided a mature, flexible system with good documentation. Other
configurations are available; for example, a frame synchronizer and controller can be
provided in the chassis and all further processing can be done in software on a separate
workstation.
OS/COMET
The Open Systems COMET (OS/COMET) package from Software Technology, Inc. is a
software product that takes data once it has been captured and performs telemetry
decommutation, analysis, distribution and display. It does not have any specific hardware
components. It requires the development of device drivers that interface with hardware
interfaces and pass data to and from its “software bus”. Once data is on the OS/COMET
software bus in the proper format, other processes such as the telemetry decommutation
(TLM) process can be configured to process the data.
In OS/COMET, data to be shared among processes is stored in memory files (MFILEs)
which are accessible across distributed platforms. MFILE definitions identify
characteristics of the data fields such as type (e.g., character, integer, float), limit check
ranges, units and descriptive information. A common mechanism for getting data into
MFILEs is via the TLM process taking raw data off the software bus, performing field
extraction and engineering unit conversion as specified in ASCII configuration files, and
placing the resulting values into the indicated MFILEs. Other standard processes supplied
with OS/COMET include character based display windows, graphical plotting tools, and
the Sherrill-Lubinski Graphical Modeling System (SL-GMS) user interface. SL-GMS can
be used to generate complex graphical displays based on data from the MFILEs. These
applications are all integrated with the OS/COMET MFILE mechanism to provide display
and modification of the values in the MFILEs.
[Figure: OS/COMET configuration — a serial-to-IP gateway and I/O driver feed the TLM process, which populates MFILEs read by COMET applications on the workstation.]
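In the spirit of the MFILE mechanism, a current-value table with limit checking can be sketched in C as below. This is not the STI API; the names, structure, and limit-check behavior are illustrative assumptions.

```c
/* Illustrative current-value table in the spirit of OS/COMET MFILEs:
   named slots holding the latest value of each telemetry parameter,
   with limit ranges checked on update. Not the actual STI API. */
#include <string.h>

#define CVT_MAX 64

struct cvt_entry {
    char   name[16];
    double value;
    double lo, hi;          /* limit-check range */
    int    out_of_limits;
};

static struct cvt_entry cvt[CVT_MAX];
static int cvt_count;

static struct cvt_entry *cvt_find(const char *name)
{
    int i;
    for (i = 0; i < cvt_count; i++)
        if (strcmp(cvt[i].name, name) == 0) return &cvt[i];
    return 0;
}

/* Update (or create) a parameter; returns 1 if the new value is in limits. */
static int cvt_put(const char *name, double v, double lo, double hi)
{
    struct cvt_entry *e = cvt_find(name);
    if (!e && cvt_count < CVT_MAX) {
        e = &cvt[cvt_count++];
        strncpy(e->name, name, sizeof e->name - 1);
        e->name[sizeof e->name - 1] = '\0';
    }
    if (!e) return 0;
    e->value = v; e->lo = lo; e->hi = hi;
    e->out_of_limits = (v < lo || v > hi);
    return !e->out_of_limits;
}
```

Display and modeling applications would then read the table rather than the raw telemetry stream, which is the decoupling the MFILE mechanism provides.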
The system was configured by modifying the PERL scripts used to configure the LMTI
550 from a satellite project database. The main change was to generate output in
OS/COMET formats to define MFILEs and configure the TLM process instead of the
LMTI formats. Telemetry packets in Ethernet/IP were the data source and a device driver
was developed to get our data into the OS/COMET environment. Other data sources could
be I/O cards installed in the workstation with appropriate drivers to integrate them into the
OS/COMET environment.
OS/COMET provided a convenient environment for displaying and changing data values
in the MFILEs. However, some problems were encountered with OS/COMET while
creating the configuration files for the telemetry parameters in the MFILE and the
telemetry decommutation configuration file. There were some inconsistencies in the
documentation and the version of software in use. Also, when the entire set of
OS/COMET processes was activated for processing all 3000 parameters for a satellite,
some processes requested up to 60 MB of operating system swap space. This was more
than the other packages required.
The OS/COMET product requires C programming to integrate telemetry and command
interfaces into the overall OS/COMET software environment. It also does not provide any
standard support for the CCSDS protocols so other systems were used to get data into
CCSDS packet formats for input.
Veda ITAS/OMEGA
The Veda front-end products investigated were their ITAS hardware front-end units and
their Omega software package. The ITAS hardware product was not used but testing was
done with the Omega software package which can execute on both the ITAS front-end
hardware as well on our workstations. An interesting feature of the Omega software is that
it could perform frame synchronization on a stream of data without any front-end hardware
synchronizer. It supported a software frame synchronization capable of executing at up to
2 Mbps.
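Byte-aligned software frame synchronization of this kind can be sketched as a search for the 32-bit CCSDS attached sync marker (0x1ACFFC1D). A production synchronizer would also handle bit slips, inverted data, and check/lock strategies, which this sketch omits.

```c
/* Byte-aligned software frame sync: scan a buffer for the 32-bit
   CCSDS attached sync marker and return the offset of the frame
   data that follows it. Bit-slip handling is omitted. */
#include <stdint.h>
#include <stddef.h>

static const uint8_t SYNC[4] = { 0x1A, 0xCF, 0xFC, 0x1D };

/* Return offset of the first byte after the sync marker, or -1. */
static long frame_sync(const uint8_t *buf, size_t len)
{
    size_t i;
    for (i = 0; i + 4 <= len; i++)
        if (buf[i] == SYNC[0] && buf[i + 1] == SYNC[1] &&
            buf[i + 2] == SYNC[2] && buf[i + 3] == SYNC[3])
            return (long)(i + 4);
    return -1;
}
```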
The Omega configuration options were mostly similar to the LMTI 550 except that the
Omega software did not provide any CCSDS processing capabilities such as packet
assembly. Also, if the software did not provide a required capability, our only option was
to go back to the vendor for changes. Display generation was similar to the LMTI since
Omega also uses the Dataviews package.
[Figure: Omega configuration — the ITAS front-end feeds the Omega synch and decomm stages, which pass data to Omega applications over UDP/TCP on the workstation.]
No real-time data was processed through the Omega software because at that time we did
not have their hardware front-end and Omega did not support any data sources other than
the ITAS front-end and data files on disk. Since then the product has been modified to
support TCP socket interfaces. Also, Omega’s interface to other applications did not
provide as many options as the LMTI 550 and OS/COMET. It basically provides
parameter tag and data information in UDP packets and lets a user develop software to
select desired information.
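Consuming such tag/data output can be sketched as below. The wire layout assumed here (a 16-bit tag followed by a 16-bit value, both big-endian) is an illustration only; the actual Omega packet format must be taken from the vendor's documentation.

```c
/* Sketch of unpacking blocked tag/data pairs from a UDP payload.
   The 16-bit-tag / 16-bit-value big-endian layout is an assumed
   illustration, not the documented Omega format. */
#include <stdint.h>
#include <stddef.h>

struct tag_pair { unsigned tag, value; };

/* Walk a payload of 4-byte records; returns the number of pairs. */
static int parse_tag_data(const uint8_t *pkt, size_t len,
                          struct tag_pair *out, int max)
{
    size_t i;
    int n = 0;
    for (i = 0; i + 4 <= len && n < max; i += 4, n++) {
        out[n].tag   = ((unsigned)pkt[i]     << 8) | pkt[i + 1];
        out[n].value = ((unsigned)pkt[i + 2] << 8) | pkt[i + 3];
    }
    return n;
}
```

User software would then filter the pairs for the parameter tags of interest, which is the selection step the Omega interface leaves to the application.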
LabView
The LabView product from National Instruments is a software product that provides
extensive graphical display capabilities combined with a graphical programming system.
The programming environment is oriented toward the development and operation of
programs called “virtual instruments” (VI) which control hardware interfaces and integrate
into the overall LabView processing and display environment.
A feature of this package is that it is supported on a wide range of platforms and operating
systems such as MacOS, Windows 95, Windows NT, Solaris, and HP-UX. Components can be
developed in one environment and then used on the other environments.
One of our main interests in this product was to see how it worked as a tool for use during
integration and test of satellite components and then during operation. The goal here is to
provide a consistent tool and user interface during the entire spacecraft life cycle.
[Figure: LabView configuration — a serial-to-IP gateway feeds a decommutation virtual instrument (VI), which passes data to LabView applications on a workstation or PC.]
The LabView system is oriented more toward development of graphical interfaces for
laboratory and instrumentation systems. It is well suited for integration and test of devices
during spacecraft construction. We are currently investigating its use as a system for
building a full ground system for spacecraft operation. Like most of the other packages, the
LabView software does not provide any specific support for CCSDS packet extraction and
assembly. We are using other systems to break a data stream down to the CCSDS packet
level and then passing the packets to LabView.
LESSONS LEARNED
The main lesson learned was that configuring telemetry front-ends is still a very time
consuming, error prone, and custom process. All of the products we examined provide
point-and-click options for various parts of the configuration process. However, none of
the products could be quickly and easily configured to support all of our requirements. We
feel that this is not so much the fault of the products as a reflection of the very wide range
of telemetry options in use and the lack of sufficient commonality among them to allow the
vendors to develop standard solutions. The following sections discuss some key problem
areas along with recommendations for future solutions.
Wide variety of Nascom block types
The first problem encountered involved the format of the satellite bitstream data packed
inside Nascom 4800 bit blocks. Data was received from both the NASA Wallops Flight
Facility and Deep Space Network earth stations. Even though both systems use 4800 bit
blocks, different formats are used for packing data into the blocks. Since our system also
performed satellite orbit determination it needed to process tracking data which was also in
different formats from the different networks.
Since Nascom blocks are not self-identifying, the solution was a crude one of identifying
each of the possible combinations of source/destination codes, spacecraft ID, and data
type fields and specifying separate data processing paths for tracking data and bitstream
data for CCSDS frame processing. This was not difficult to accomplish, but it is not an
elegant solution since it is difficult to extend to other missions and is quite prone to error.
It reflects the fact that the legacy systems in use are based on a wide range of format
options that exist across missions.
The primary solution here must come from more consistent and easily identifiable data
formats from all agencies’ earth stations. When that consistency is available, COTS
products can be developed with standard options to handle this task with less
development. Some future NASA missions are moving toward synchronizing CCSDS
frames at the earth station and forwarding the frames via TCP/UDP based technology. This
should be very helpful in eliminating the need for custom serial interfaces and special
frame synchronizers at all of the users' systems.
The LMTI front-end had the most extensive CCSDS processing capabilities but even it did
not support CCSDS packet telemetry formats when we began using it. The CCSDS packet
telemetry capability was in testing at the time and was received in the summer of 1995 and
used in our first prototype. None of the other products provided any native CCSDS
processing options in their system. However, the Veda products can be configured to
synchronize CCSDS frames just like any other frames. Veda also indicated they had some
prototype code for CCSDS packet extraction but that was not tested. Also, even though
the LabView product does not provide any native CCSDS capabilities, vendors have
developed interface cards along with LabView virtual instrument software to support
CCSDS processing. LabView in combination with these other products can provide a full
range of CCSDS capabilities.
However, vendors are not entirely at fault for lack of CCSDS support. One problem is that
the CCSDS recommendations allow a wide range of options for items such as packet
sizes, packet types and data formats and it is very difficult for any vendor to develop
products that can support all of the possible CCSDS options. A standard definition of how
to use a subset of CCSDS options would be very useful.
Each spacecraft investigated defines its own telemetry field types and bit and byte
orderings, such as those defined in Table 1. Also, bit fields were defined which
require first reordering bytes according to the data type and then finding the proper bits.
The basic problem is that there are different understandings of bit/byte reversal options, bit
ordering (MSB or LSB), and byte orderings, so that each product needs further
programming and configuration to produce correct results. Also, fields can be up to 8 bytes
long and that was not supported on all products. None of the front-ends provided standard
field definitions for all field types for all missions examined.
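The two-step extraction described above (first reorder the bytes, then mask out the bits) can be sketched in C. The two byte-order cases here stand in for the Table 1 definitions and are not a complete set; real missions also use longer fields and other reversal options.

```c
/* Two-step telemetry field extraction: assemble the bytes in the
   spacecraft-specified order, then mask out the bit field. Only
   16-bit big/little-endian cases are shown, as an illustration. */
#include <stdint.h>

/* Assemble a 16-bit word from raw telemetry bytes in either order. */
static uint16_t get_u16(const uint8_t *p, int big_endian)
{
    return big_endian ? (uint16_t)((p[0] << 8) | p[1])
                      : (uint16_t)((p[1] << 8) | p[0]);
}

/* Extract nbits starting at msb_offset (0 = most significant bit). */
static unsigned get_bits(uint16_t word, int msb_offset, int nbits)
{
    return (word >> (16 - msb_offset - nbits)) & ((1u << nbits) - 1);
}
```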
Time fields are another special area with many different formats (e.g., 4, 5, 6 and 8 bytes in
various orders) that are not supported by standard configuration options on COTS
products. Further, once the time field bytes are in order, many formats exist along with
different start times.
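As one example of the many layouts, a six-byte CCSDS unsegmented time code (four bytes of seconds and two bytes of subseconds since a mission epoch) can be decoded as below; the layout and the choice of epoch are examples only, as noted above.

```c
/* Decode one example time-field layout: a CCSDS unsegmented time
   code with a 4-byte coarse (seconds) field and a 2-byte fine
   (subseconds) field. Layout and epoch are examples only. */
#include <stdint.h>

/* Returns seconds since the mission epoch. */
static double decode_cuc_time(const uint8_t *p)
{
    uint32_t coarse = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                      ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    uint16_t fine   = (uint16_t)((p[4] << 8) | p[5]);
    return coarse + fine / 65536.0;
}
```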
This can be addressed by identifying each possible field format and associating a standard
name to the field format. Then spacecraft and ground system developers, vendors, and
integrators will have a better chance of developing and using products that are quicker and
easier to configure.
CCSDS commanding protocol and formats not supported in current vendors'
products
For sending data to the spacecraft, our missions all used the CCSDS COP protocol. This
involves breaking data to be transmitted into small groups of bytes, adding checksums,
exclusive-ORing, and other special formatting. During transmission there are also special
protocols to follow for flow control and acknowledgment. If this data is to be delivered to
the earth station using Nascom blocks it must also be packed into the blocks in the proper
format. The basic problem is that none of the front-ends utilized had options for supporting
this entire process. Our solution was to write C code to perform all the data formatting and
build Nascom blocks. From this point, the standard front-end capabilities for transmitting
Nascom blocks could be utilized. Nascom blocks need to be removed from this process
and a protocol more like TCP should be developed.
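The segmentation and checksum steps can be sketched as below. The segment size and the XOR checksum rule here are placeholders for illustration, not the actual COP or Nascom block specification.

```c
/* Illustrative command segmentation: split command data into
   fixed-size segments, zero-pad, and append an XOR-of-bytes
   checksum. Segment size and checksum rule are placeholders. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define SEG_DATA 8   /* illustrative segment payload size */

/* Build one segment into seg (SEG_DATA payload bytes + 1 checksum
   byte); returns the number of bytes consumed from src. */
static size_t build_segment(const uint8_t *src, size_t len,
                            uint8_t seg[SEG_DATA + 1])
{
    size_t n = len < SEG_DATA ? len : SEG_DATA;
    uint8_t sum = 0;
    size_t i;
    memset(seg, 0, SEG_DATA + 1);
    memcpy(seg, src, n);
    for (i = 0; i < SEG_DATA; i++) sum ^= seg[i];
    seg[SEG_DATA] = sum;
    return n;
}
```

A caller would loop over the command data, building and transmitting one segment at a time while observing the flow-control and acknowledgment rules of the protocol.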
Finally, all of the front-ends provided different mechanisms for exchanging data with
application processes. These interfaces can be summarized as follows:
• LMTI 550 data/gather and muxwrite - proprietary LMTI API that provides flexible
capabilities for transferring data between applications and any stage of front-end
processing
• OS/Comet MFILEs and API - shared memory files and proprietary STI API that
provide a full data server environment
• Veda Omega API - TCP/UDP socket API providing blocked tag/data pairs
• LabView VI - support for TCP/UDP socket capabilities
Standard APIs would simplify integration and allow easier upgrade or replacement of
front-ends. We are currently investigating the application of CORBA to this problem.
CONCLUSION
COTS products are available and can be configured to meet satellite front-end
requirements. However, the configuration and integration process still requires extensive
customization due to the wide range of different spacecraft data formats, front-end
configurations and interfaces to applications. If satellites are to be deployed “faster, better,
cheaper” more work is needed to develop widely used standards for:
• capabilities for synchronizing data as it is received from the antenna and passing it
to users with standard network technology, such as TCP/UDP/IP, to greatly reduce
the need for costly and complex custom front-ends
• common definitions for telemetry field types (e.g., UB, UI) and byte orders so
vendors can provide corresponding configuration options
• more precise definitions for the usage of CCSDS protocols so vendors can
implement these as standard options
• common application interfaces to front-ends using technologies such as CORBA to
hide the differences of each front-end and allow replacing front-ends without
impacting applications
The only way to achieve NASA’s goals of “faster, better, cheaper” is to get more well-
defined standards in place to allow system integration to be done with plug-and-play
components configured from standard, vendor provided options. This has been done with
LAN and WAN data links, network equipment and distributed applications, and similar
concepts can be applied to space.
Achieving maximum ease of integration requires much more work on end-to-end designs
to ensure better integration of spacecraft, communication links and ground systems.
Without this, integrating satellite ground systems will remain a time consuming, custom
and expensive process.
ACKNOWLEDGMENTS
REFERENCES
1. Data Format Control Document for the SAMPEX Project Data Base, 511-4DFC-0290,
Mission Operations and Data Systems Directorate, NASA/GSFC, January 1993
2. Data Format Control Document for the Project Data Base Supporting the XTE Mission
Operations Center, 510-4DFC-0193, Mission Operations and Data Systems Directorate,
NASA/GSFC, August 1995
TURNKEY TELEMETRY DATA ACQUISITION AND PROCESSING
SYSTEMS UTILIZING COMMERCIAL OFF THE SHELF
(COTS) PRODUCTS
Amro M. Alawady
Test Engineering Specialist - Telemetry and Testdata
Lockheed Martin Vought Systems
PO Box 650003, M/S EM-25
Dallas, TX 75265-0003
ABSTRACT
This paper discusses turnkey telemetry data acquisition and analysis systems. A
brief history of previous systems used at Lockheed Martin Vought Systems is
presented. Then, the paper describes systems that utilize more COTS hardware
and software and discusses the time and resources saved by integrating these
products into a complete system along with a description of what some newer
systems will offer.
KEY WORDS
Introduction
The Telemetry and Test Data group is responsible for instrumentation, telemetry
data transmission, acquisition, real-time displays and post-test processing of test
data. Multiple data streams can be gathered from missile telemetry, simulations,
launcher, and/or other instrumentation. In general, data is gathered, displayed,
processed to engineering units and delivered to project engineering personnel for
analysis.
In previous years, the hardware and software used to acquire and process data
was a mix of vendor hardware, in house built hardware and custom written
software. Because of coexisting projects and multiple telemetry streams within
projects, the data acquisition systems were required to handle varying types of
telemetry streams. These streams ranged from MIL-STD 1553, PAM and PCM to
non-IRIG standards or modified versions of the IRIG standards. Up until a few
years ago, many vendors worked only with the IRIG or MIL-STD definitions, and
the products they developed for these standards offered many more functions
than were necessary for data acquisition. Because of this, hardware and software
was developed in house to handle varying input streams into a common hardware
interface and software processing routine.
Only recently have vendors started offering systems that allow for varying input
streams, from IRIG to MIL-STD to user defined streams. They are also providing
software tools, by way of off the shelf products, for quick and easy modifications to
their systems. This shifts the time and resources spent on custom developed
systems to developing complete turnkey systems based on industry standard
hardware and software.
The course of this paper will take us from previous versions of data acquisition
and processing systems to current and future versions, emphasizing the reduction
of in house developed hardware and software. Through this evolution, the role of a
telemetry and test engineer changes from one of a hardware designer or software
programmer to a systems engineer, concentrating on the whole telemetry process
from acquisition to analysis.
Previous Systems
The previous versions of data acquisition and processing systems were a small
step towards elimination of custom developed systems. Figure 1 shows this
version of a data acquisition and processing system. Hardware was broken down
into two main blocks: telemetry decommutation, and data acquisition and
processing.
Custom software controlled the interface board and the flow of data in the
computer. Once data was in memory, more custom
software was developed to take the data, acquire it to disk, and/or display the data
to dumb terminals. Real-time displays consisted mostly of scaled data displayed in
alphanumeric format in two columns of twenty parameters, with screen updates to
1 Hz. An alternate format was to display the data in a time history based plot of up
to eight parameters per screen with screen updates to 10 Hz. The alphanumeric
displays were made compatible with the VAX screen manager subroutines and
the plots were made compatible with the Tektronix 4014 format.
Data processing was also run on the MicroVAX. Again, custom software was
developed so that databases and processing routines could handle the mix of
telemetry streams. The output generated from the post test processing routine
was in the form of tabular listings, time history plots and further calculation files
(scaled data written to a record oriented file).
The main advantage this custom built system had was that any type of telemetry
format could be input into the system. The interface board not only time tagged
each frame of data, but a header word was also added to the beginning of each
frame. This had the effect of creating a common data format. Once in the system,
software was used to identify the differences between the streams, and data was
scaled accordingly. Also, since this system could handle any type of stream, it
could be used for any project and only parameter scaling information had to be
changed to acquire and process the data. The disadvantage of this system was
that a large share of resources went into building the system itself, and little time
was spent on the actual scaling and processing of data.
Current Systems
The MicroVAX system utilized almost all custom written software in three
languages. The DMA board software and acquisition software were written in
assembly language. FORTRAN and C were used for real-time and post test
processing. The rack mount hardware was big and bulky, and setup for each unit
was generally a manual procedure. Around this time, telemetry hardware as well
as software had been developed to comply more closely with industry standards.
Rack mount hardware was now being redesigned as board level products
conforming to the industry standard VME bus architecture. A VME based
computer was the natural selection for the acquisition and processing system. The
selected computer also met new requirements of industry standards. A UNIX
based system was selected due to the greater amount of off the shelf software
being written for that operating system. Also, the selection of a VME-based
computer used in conjunction with VME based telemetry hardware combined the
previous separate systems of rack and computer to one rack mount computer with
telemetry products integrated into the computer. See figure 2. Time code boards
and frame synchronizer boards now plugged into the same bus as the computer.
These boards were provided with a set of software drivers to help cut down on
custom software development. With the call of a few subroutines, a PCM
decommutator board could be initialized and loaded with a telemetry format ready
for decommutation.
Figure 2
Although custom written software was still developed, the focus of the code turned
to real-time scaling and processing. No more code was written for memory
management, real-time displays and post test analysis. DMA boards were
provided with a set of application programming interfaces (APIs) for setup and
initialization. Dataviews software was selected to develop real-time displays. It
contained both a draw section to develop the look and feel of the displays and a
set of APIs to integrate real-time data with the displays. Unlike the older
MicroVAX systems, views can be a combination of alphanumeric, plot or strip
chart data and objects such as buttons, dials and images.
On previous systems, post test processing and analysis generally occurred in the
form of tabular listings of scaled data or hardcopy time history plots. A test
could last anywhere from a few seconds to several hours. This meant that vast
amounts of paper were printed or “thinned” data was printed to get an overall view
of the test, and data was rerun to narrow the focus to a particular time period. With
the ever increasing complexity and data rates of telemetry data, the standard
hardcopy outputs became unmanageable. Because of this, it was determined that
scaled data should be written to a file or tape and taken elsewhere for workstation
or PC analysis.
Data was created on a per-user-request basis. Because of this, many individual runs
were created, each run possibly containing similar parameters from other runs. To
eliminate the redundant parameter scaling, the idea of scaling all the data in a
telemetry stream and writing it to a file grew in popularity. A system needed to be
created to allow a test engineer to analyze his or her particular parameters. The
solution was the use of workstations and/or PCs and off the shelf software
developed specifically for data analysis. In the past, workstations were fast but
prohibitively expensive and PCs were slow and unsophisticated. Only within the
last five to six years have workstations become affordable and PCs more
versatile. Several data analysis packages are now available which run on these
systems. DADiSP software was selected for post test analysis. It is a graphical
type program with each parameter of a telemetry stream displayed as a graph in
its own window. From one to one hundred windows can be displayed at a time.
Math functions, data reduction, overplotting and window manipulation can be
applied to each or all windows. Data from separate runs can coexist in one
session. Batch files and macros can also be used to automate sessions.
The combination of workstations and third party packages creates a powerful new
tool for test data analysis. All data is available to all engineering personnel, on-line and easily
accessible. Hardcopy processing is all but eliminated and used only for
presentation purposes. Because third party packages are used, they can be
integrated into a network of computers that already exist within a project.
These systems are built on the same philosophy as the previous systems, yet take
advantage of the latest release of hardware and software. The telemetry interface
module is essentially the same as the previous system except built to VME
specifications. The data format acquired into the computer is the same as in the
previous systems. However, since these systems are based on more open
standards, they are easier to design and implement. Resources are now
concentrated on the delivery of data, whether in the form of real-time displays or
post test analysis by way of third party software.
Future Systems
Currently, two new types of data acquisition and processing systems are being
implemented to support new projects. The goal of both of these systems is to further
reduce custom hardware and software development. The current systems
eliminate most of the custom written software for hardware control and post test
analysis. However, custom hardware still exists in the acquisition of differing
telemetry formats, and custom software is still used to scale the data for either
real-time displays or post test analysis. The new systems all but eliminate the last
bit of custom design. They are more of a complete “turnkey” system. Telemetry
hardware, acquisition software, parameter scaling, real-time displays and post test
processing are all integrated into one package.
The first major step these systems take is creating a common format for multiple
telemetry inputs of varying types. They do so by “id” tagging each word within a
telemetry stream. This has the effect of eliminating differences between formats.
They have also created interface boards which allow any type of non-MIL-STD
or non-IRIG-standard stream to be acquired into the system. These streams are also id
tagged. Parameters, along with their id tags, are placed on a high speed bus.
Once on the bus, the differences between the telemetry formats disappear. They
exist on a common bus in a common format. At this point, anything can be done
with a parameter or mix of parameters. The data can be either output through a
digital to analog (D/A) converter, scaled, acquired or displayed; or, the data can
run through all these functions. Once a common format is created, a standard set
of software routines can be developed to process this data for real-time displays,
acquisitions or post test analysis.
The complete integration of telemetry hardware also allows the systems to grow in
function. These systems can now be used to control telemetry transmitters and
other external equipment. A flight termination system is being integrated into one
of these systems. There is no proprietary hardware or software. The simple fact
that the hardware and software comply with industry standards allows integration
of a flight termination system with little overhead or cost.
These systems also take advantage of off the shelf software. Both systems utilize
third party packages to help develop real-time displays. They also use the UNIX
operating system as the base system. The common windows environment (X11
and Motif) are used to eliminate proprietary software development. This also
permits vendors to retain systems open enough for future modifications and allow
some customization.
By using these systems, time and resources can now be spent on data delivery.
The acquisition and processing systems are no longer constraints to the analysis
process. The risk with this approach is that these systems, no matter how open
they claim to be, are still somewhat specialized. The high speed data buses are
not compatible between the two vendors and are not based on any industry
standard high speed bus. Therefore, the interface cards are not interchangeable
between systems.
Conclusion
Table 1 shows the evolution of past, present and future systems from distinct key
components to an integrated system. The combination of hardware and software
into a “turnkey” system allows the telemetry engineer to concentrate more on the
delivery of data to project personnel. Custom hardware or software development
is reduced to a minimum. Experience has shown that, although a few analysts
know what to do with their data, many do not. Simply acquiring and scaling the
data is not enough these days. Resources can be focused on working directly with
project personnel and helping them to better analyze the data they receive. Third
party packages are just now beginning to cater to the needs of test data analysis.
By improving this process, the overall project benefits from a reduced turnaround
time of data analysis and gains an increased knowledge of test data performance.
Also, as an industry, we can work on getting more standards developed for
turnkey systems. Proprietary high speed buses should be a thing of the past.
Several industry standard high speed buses, such as Raceway or PCI for VME,
currently exist. By conforming to these standards, telemetry board makers can
offer their products on more systems, and the turnkey system vendors can
concentrate more of their effort on a complete system. They have done so by
integrating third party packages into their systems. They could take the next step
and eliminate proprietary hardware interfaces.
Telemetry Hardware
  Previous: Rackmount with manual setup
  Current:  VME card with software drivers
  Future:   VME card integrated into a turnkey system

Telemetry Interface
  Previous: Custom built; supports only one telemetry stream
  Current:  VME, custom built; supports only one telemetry stream
  Future:   VME-based high speed bus with multiple telemetry streams

Acquisition Software
  Previous: Custom written in C, assembler, FORTRAN
  Current:  Custom written in C through third party APIs
  Future:   Vendor integrated into turnkey system

Real-time Displays
  Previous: Custom written, from scaling to displaying
  Current:  Third party package integrated via software APIs
  Future:   Vendor integrated third party package into turnkey systems

Parameter Scaling
  Previous: Custom written
  Current:  Custom written
  Future:   Vendor integrated into turnkey systems, allowing for custom
            software module integration

Post Test Analysis
  Previous: Hardcopy tabular lists and time history plots, or written to
            tape to be analyzed elsewhere
  Current:  Network of workstations with third party data analysis packages
  Future:   Network of workstations with third party data analysis packages

Computer
  Previous: MicroVAX
  Current:  Concurrent 7000
  Future:   Any UNIX based system

Table 1
MIGRATION FROM VAX TO MODERN ALPHA COMPUTERS
Klaus R. Nötzel
ABSTRACT
Deutsche Telekom has been operating different communication satellites for several years.
The Satellite Control Center (SCC) of Deutsche Telekom is located near Usingen, about
50 km northwest of Frankfurt/Main. The system has been in operation since the launch
of the first flight model DFS in June 1989.
The entire computer system was based on Digital Equipment Corporation (DEC) VAX
type computers. The maintenance costs of these old Complex Instruction Set Computers
(CISC) have increased significantly over the last years. Due to the high operational costs,
Deutsche Telekom decided to exchange the operational computer system. The present-day
information technology world increasingly uses powerful Reduced Instruction Set
Computers (RISC). These new designs allow operational costs to be reduced appreciably.
The VAX type computers will be replaced by DEC Alpha AXP Computers.
This paper describes the transition process from CISC to RISC computers in an
operational realtime environment.
KEYWORDS
Satellite Control Center, Digital Equipment Corporation (DEC) Computers, VAX, Alpha
AXP, operational costs, system design, MOTIF
INTRODUCTION
The Satellite Control Center (SCC) of DBP Telekom is located near Usingen, about 50 km
northwest of Frankfurt/Main. The system has been in operation since the launch of the
first flight model DFS in June 1989. The Ku-band acquisition of TV-Sat was performed in
August 1989, the acquisition of DFS 2 in July 1990. In 1992, the system was expanded for
the operation of DFS 3, which was launched in September 1992. The Launch and Early
Orbit Phase (LEOP) was supported by Deutsche Forschungsanstalt für Luft- und
Raumfahrt (DLR).
Besides the SCC, the earth station also has communication facilities for Eutelsat satellites
as well as for the Intelsat satellites over the Atlantic and Indian Oceans. The SCC is
composed of the spacecraft control facilities and the necessary ground stations at different
locations.
The SCC provides all necessary features for simultaneous and continuous operation for
four satellites, with expansion capability for two further spacecraft.
The Satellite Control Center of DBP Telekom is composed of
The system is distributed over several rooms and buildings. A redundant Ethernet based
Local Area Network (LAN) with standard protocols is used for interconnection of the
equipment.
The old computer system is mainly based on Digital Equipment Corporation (DEC)
hardware. The operating system is VMS. The system is equipped with a mid-size cluster
system and workstation computers. The flight dynamic system uses workstations from
Hewlett Packard (HP). The entire system runs 24 hours a day, 7 days a week. Figure 2
shows a diagram of the computer systems.
The first computers of the old system were VAX 11/750 machines, dated 1986. Each new
version of the application and operating system software further reduced the margins in
system performance.
The last upgrade of the station computer (STC) was to a dual CPU VAX 8350 cluster
system. In the following the system is described in detail.
[Figure: Block diagram of the SCC ground segment — RF and M&C equipment for
TV-Sat, DFS 1 and DFS 2; TT&C basebands with TM, TC and RG chains and M&C
equipment; prime and backup station computers with TM, TC, RG and M&C
processors serving the control rooms; external links to DLR/GSOC and
EUMETSAT.]
Station computers: Three VAX 8350 computers in a cluster configuration with one
star coupler (CI cluster). Two VAX units in master/back-up
configuration process data in real time; the third unit is in standby as
cold redundancy.
Hard-disk system: Access to the diskfarm is controlled by two redundant HSC storage
controllers for common use of storage devices.
Workstations: Are used for processing and displaying primary real-time data. The
processed output of the workstations is displayed by thirty X.11-
window terminals in the control rooms.
LAN Network: Two physical Ethernet channels carry communication between all
computers in the SCC. Redundancy switching of the LAN is
performed by the STC and is transparent to the users. Two network
protocols are used: DECnet (between DEC computers) and TCP/IP
between computers of different vendors. For external data
communication, Wide Area Network (WAN) routers are used. These
routers also perform security checks for communication outside the
Telekom SCC.
Communications: Data links are installed between Telekom SCC, DLR GSOC, DLR
Weilheim and EUMETSAT Control Center (Darmstadt). Data
communications is carried by ISDN (Integrated Services Digital
Network) and leased lines.
The entire operational software must be rewritten for DEC Alpha AXP. More and more
open UNIX systems are used all over the world. This fact led to a study of using a
UNIX operating system instead of OpenVMS. Arguments against UNIX were:
• No compatibility with the old data archive, which must be supported for 10 years
• Personnel must be retrained for UNIX
• The cluster environment with its high availability is not available
• Costs for porting the software to UNIX are higher than porting from VAX to Alpha AXP
• Compatibility with the DLR GSOC control center will be lost
Finally, the cost aspects of porting the software and the necessary training for UNIX
systems led to the solution of using DEC Alpha AXP systems with OpenVMS as the
operating system.
[Figure 2: Old computer system — station computers A, B and C clustered via
two HSC 50 controllers to the RA 90 disk system, with satellite-control,
ground-control and emergency workstations attached to the network.]
Current computer technology uses mainly two different architectures: Complex Instruction
Set Computers (CISC) and Reduced Instruction Set Computers (RISC). VAX is a CISC,
Alpha a RISC architecture. The basic differences between both architectures are shown in
Table 1.
The hardware design of the Alpha processors allows operation of different operating
systems on one machine (OpenVMS, DEC UNIX, Windows NT). That means that to use
another operating system, normally only the new operating system has to be installed;
the hardware itself supports different operating systems.
Alpha computers use a 4-byte fixed-length data format. Data that do not fulfill this
requirement can be converted, but this conversion needs up to 20 times more CPU time
than the same instruction executed on a VAX system.
These circumstances may lead to an entirely new design of the software implementation.
                              VAX (CISC)                  Alpha (RISC)
Architecture                  CISC                        RISC
Instruction format            complex, variable length    simple, fixed length
Number of addressing modes    large                       small
Size of registers             small to medium             large
Instruction set model         register-to-memory          load-store
Virtual address range         32 bits                     max. 64 bits
Physical address range        max. 32 bits                max. 48 bits
Page size                     512 bytes                   8 KB - 64 KB
Instruction length            1 - 51 bytes                4 bytes
General registers             16 x 32 bits                64 x 64 bits
Addressing modes              21                          3
Execution of instructions     micro-coded                 direct in hardware
NETWORKING
One of the major requirements for the new system was that it must be compatible with the
existing hardware and software. The telemetry and telecommand baseband is equipped
with PDP11 computers running DECnet Phase IV. A wide variety of protocols is used at
the Usingen SCC:
• DECnet Phase IV
• DECnet Phase V, only for Wide Area Network (WAN) routers to external partners
• TCP/IP between all other systems (HP UNIX, PCs)
The network protocol compatibility will be ensured by different DEC network products:
COMPATIBILITY ASPECTS
Only the real-time VAX computers will be replaced. All other systems, baseband and
computers, must remain and work properly together with the new Alpha AXP computers.
The new system must be compatible with all other software and hardware interfaces.
The range of the remaining computer systems varies from DEC PDP11 and DEC VAX 11/750
over DEC MicroVAX and Hewlett Packard UNIX workstations to DEC VAXstations.
All data generated at the SCC, such as telemetry, telecommand, and monitor and control
information, are stored on 8 mm Exabyte tapes. These old tapes must be readable by the
new system.
The new archiving software supports the old file formats and new more efficient storage
formats.
The new system is also based on a DEC cluster environment. Since the CI cluster
environment was not available for PCI-bus-based machines, the cluster is built via a
DSSI bus. Three servers with the same performance form the main station computer. The
cluster configuration itself is formed by two servers and a quorum disk. The third server
will work as cold redundancy.
[Figure: New computer system configuration with satellite-control workstations.]
                     Old                            New
Operating system     OpenVMS VAX                    OpenVMS AXP
Station computer     3 x VAX 8350                   2 x AlphaServer 2000/233
No. of CPUs          2 of 2 possible                1 of 2 possible
Memory               32 MByte                       128 MByte
Performance          2.3 VUP (SPECint92: ~2.3)      SPECint92: 177, SPECfp92: 215
Disk system          6 x RA 90 with HSC 50          MTI StorageWare RAID
Controller           HSC 50                         Stingray DSSI, redundant
Configuration        Shadow set                     RAID level 5
Capacity             3.6 GByte                      17.2 GByte
Workstations         6 x µVAX 3200                  6 x AlphaStation 200 4/233
Disk capacity        2 x 154 MByte                  2 x 2 GByte
Performance          2.4 VUP (SPECint92: ~2.3)      SPECint92: 157, SPECfp92: 183
Windowing software   DEC VWS                        DEC MOTIF
Emergency system     µVAX 3100                      AlphaServer 2000/233
Memory               32 MByte                       128 MByte
Disk capacity        2 x 500 MByte                  2 x 2 GByte
Performance          3.5 VUP (SPECint92: ~3.5)      SPECint92: 177, SPECfp92: 215
The old user interface was based on colour character oriented terminals (DEC VT 340).
Figure 5 shows an example of a new display format.
[Figure 5: Example of a new display format — the "Tigris" overview page with a
menu bar (File, View, Options, Input, Scroll, LinePlots, Help), error count,
day-of-year and GMT readouts, antenna pointing (AZ/EL), and status panels for
the antenna, telemetry, test equipment, ranging and computer systems, plus
downlink 1/2, uplink 1/2 and TC 1/2 chains.]
The old character based interface will still be available. New display formats will be
implemented only for the new system.
The new system provides a modern window oriented graphical user interface (GUI). The
GUI uses a MOTIF application on standard X-window terminals. The interface application
is built with commercial off-the-shelf products (TeleUse, Sphinx). The new GUI is a very
flexible tool. The user can easily create and add new display pages. Dynamic symbols can
be created to visualize the status of systems or devices by different colours or graphic
elements.
OPERATIONAL COSTS
The old computer system is based on technology of the year 1986. The costs for
maintenance contracts are increasing every year. Many old components are no longer
available, and it is not always possible to use new "compatible" components; for
example, problems with network cards were detected.
The new system comes with a three-year warranty, which means that no additional
maintenance costs will arise in the first three years of operation.
The operational costs for the old system were so high that the break-even point for the new
system will be reached within 30 months. This includes all hardware and software.
OPERATIONAL TRANSITION FROM THE OLD TO THE NEW SYSTEM
Satellite control is a mission critical application. The operational switch-over from the old
to the new system is planned to take place in two steps.
• First, the final acceptance of the new system must be successful. Both systems will be
fully operational in parallel for two months. This time is necessary to identify hidden
errors and undocumented features.
• Second, after a period of thirty days with no or only minor failures, operations will be
switched to the new system. Only the old emergency system will be kept for back-up
purposes.
REFERENCES
Richard L. Sites, "Alpha Architecture Reference Manual", Digital Press,
ISBN 1-55558-098-X, 1992.
INTERACTIVE ANALYSIS AND DISPLAY SYSTEM (IADS) TO
SUPPORT LOADS/FLUTTER TESTING
ABSTRACT
The Interactive Analysis and Display System (IADS) provides the structures flight test
engineer with enhanced test-data processing, management, and display capabilities
necessary to perform safety critical aircraft analysis in near real time during a flight test
mission. Germane to hazardous, fast-paced flight test programs is a need for enhanced
situational awareness in the Mission Control Room (MCR). The IADS provides an
enhanced situational awareness by providing an analysis and display capability designed to
enhance the confidence of the engineer in making clearance decisions within the MCR
environment. The IADS will allow the engineer to achieve this confidence level by
providing a real-time display capability along with a simultaneous near real-time
processing capability consisting of both time domain and frequency domain analyses. The
system provides for displaying real-time data while performing interactive and automated
near real-time analyses. The system also alerts the engineer when displayed and non-
displayed parameters exceed predefined threshold limits. Both real-time data and results
created in near real-time may be compared to predicted data on workstations to enhance
the user’s confidence in making point-to-point clearance decisions. The IADS also
provides a post-flight capability at the engineer's project-area desktop. Having a user
interface that is common with the real-time system, the post-flight IADS provides all of the
capabilities of the real-time IADS plus additional data storage and data organization to
allow the engineer to perform structural analysis with test data from the complete test
program. This paper discusses the system overview and capabilities of the IADS.
KEY WORDS
INTRODUCTION
The Interactive Analysis and Display System (IADS) is being developed for the Air Force
Flight Test Center (AFFTC) at Edwards Air Force Base by a team of Air Force and
SYMVIONICS, Inc. software engineers to increase the efficiency of the flight testing
process. The flight test engineers in the Mission Control Room (MCR) primarily monitor
data for safety-of-test considerations and for data quality because the data is later
evaluated to determine aircraft specification compliance. The IADS provides the engineer
with advanced data organization, processing and display capabilities, both in the MCR and
at the office desktop. Previous real-time structural analysis systems were very time
consuming and limited, so most of the flutter testing analysis was conducted in a post-
flight environment. At critical test conditions, real-time flutter clearance decisions came
very slowly by using interactive analysis techniques designed for a post-flight analysis
environment. Stripcharts were the main tool for displaying time domain data. A flutter
analysis system running on one independent workstation provided spectral analyzer tools
such as real-time power spectral density (PSD) and Nyquist plots. For loads testing,
stripcharts were used as the primary data display source. In many cases, loads analysis
decisions required the engineer to hand plot peak loads from the stripchart time histories
onto paper cross-plots containing design load limit envelopes.
The primary source of post-flight data for the structures engineer was analog tapes
processed after the completion of the test mission. This process introduced delays into the
test-point clearance process, ranging from several hours to days. The engineers at the
AFFTC are now being faced with program objectives which require much higher flight test
efficiency rates than in the past, and these rates cannot be supported with the previous
analysis systems. The engineers need to make quicker clearance decisions based on more
detailed MCR analysis, and analysis results obtained during the flight test mission must be
made available to the engineer at the desktop within a short period of time after the flight.
In response to these needs, the AFFTC structures test community developed a set of
operational requirements for the next generation of structural test analysis systems. The
IADS is being developed to meet these requirements. The purpose of the real-time IADS is
to provide the engineer sufficient resources within the MCR to allow enhanced safety-of-
test monitoring and advanced near real-time analysis capabilities to support test-point
clearance decisions in a timely manner. The post-flight IADS is designed to provide
advanced data organization and processing at the engineer's desktop.
The IADS provides for displaying real-time data while simultaneously performing manual
and automated near real-time analyses. Summary reports in the form of plots and tables are
available during and after the test flight, and these reports can be updated both manually
and in an automated fashion. The IADS provides the capability for automated analytical
processing; these analyses can be triggered and/or driven by user-entered data, telemetered
parameters (e.g., flutter excitation system state parameters), or both. The IADS allows the
engineers to transport selected Engineering Unit test data from the MCR back to their
desktop for use in post-flight analysis. The requirement to obtain data from the aircraft
tape before next-flight clearance is no longer necessary. Rather, the engineer will use data
collected in real-time and transported to the post-flight IADS to make timely analysis
decisions. Aircraft tape data will be used only to supplement the data collected in real-
time, and the amount of data requested from the aircraft tape will be significantly reduced,
thus saving time and money.
SYSTEM OVERVIEW
The IADS consists of two primary configurations: real-time and post-flight. For each
configuration, the IADS has an architecture which combines both data-driven and
client/server elements. The primary real-time data path through the IADS, from acquisition
through display, is data driven. Data is sent from the Telemetry Preprocessor to the
Compute Data Server, where it is stored, buffered, and filtered (if applicable). The data is
then forwarded to the Display Station for presentation to the engineer. The engineer may
also initiate near real-time analysis requests from the Display Station. These requests are
processed in a client-server fashion by the Compute Data Server, which then returns the
results to the Display Station. Each of these configurations provides consistent capabilities
through common software and hardware. As shown in Figure 1, each IADS configuration
consists of four major components: Compute Data Server, Data Distribution, Display
Stations, and Group Data Storage.
The Compute Data Server (CDS) provides both data management and computational
capabilities. The CDS is a multi-processor high-speed UNIX-based computer which
supports both the real-time and post-flight processing needs of the IADS.
Data Distribution provides the distribution of real-time and analysis data to the display
stations from the Compute Data Server. In the real-time configuration, an Asynchronous
Transfer Mode (ATM) network provides the needed throughput for data distribution. In
the post-flight configuration, Fiber Distributed Data Interface (FDDI) is used. The IADS
data distribution software is designed such that it may be modified to use any packet-based technology.
The Display Station provides the graphics and user interface to perform real-time processing.
UNIX graphics workstations are used for the real-time configuration and Windows NT
workstations are used for the post-flight configuration. Each display station is capable of
displaying up to 100 parameters simultaneously at data rates of 800 samples per second.
[Figure 1. IADS configuration: Engineering Unit data from the Telemetry Preprocessor enters the Compute Data Server, which serves up to 10 Display Stations; setup and configuration information is exchanged over Ethernet.]

The maximum data rate is 4000 samples per second; however, only 20 parameters can be monitored at this rate. The display update rate is at least 30 updates per second, depending
on analysis window complexity.
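A quick arithmetic check shows that the two quoted operating points imply the same aggregate per-station rate (the 80,000 samples/s figure is our inference, not a number stated in the paper):

```python
# Per-display-station throughput implied by the two quoted operating points.
params_low, rate_low = 100, 800      # 100 parameters at 800 samples/s
params_high, rate_high = 20, 4000    # 20 parameters at 4000 samples/s

throughput_low = params_low * rate_low      # samples/s
throughput_high = params_high * rate_high   # samples/s

# Both operating points work out to the same aggregate rate,
# suggesting a single per-station throughput budget.
print(throughput_low, throughput_high)  # 80000 80000
```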
Group Data Storage retains the information requested by the engineers for future analysis.
The system has the capability to store up to 90 minutes of data (200 parameters at 800
samples per second) during real-time operations. This data, along with setup information
and analysis results data, is transferred to the post-flight configuration via removable
media. This media is installed into a large jukebox for multiple flight access in the post-
flight environment.
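A rough sizing of that 90-minute store, assuming 4-byte samples (the paper does not state a sample width, so the byte count is an assumption):

```python
# Rough sizing of the 90-minute group data store.
parameters = 200
rate = 800                 # samples per second per parameter
duration = 90 * 60         # 90 minutes, in seconds
bytes_per_sample = 4       # assumed; not stated in the paper

samples = parameters * rate * duration
gigabytes = samples * bytes_per_sample / 1e9
print(samples, round(gigabytes, 2))  # 864000000 3.46
```

At mid-1990s removable-media capacities, a multi-gigabyte store per flight makes the jukebox approach described above a natural fit.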
In order to meet the needs of the user community, the development team chose an object-
oriented, incremental, and iterative development process which involves the user
community in all phases of the development, from requirements analysis through system
testing. The process accommodates successive refinements of the system as the user
community has opportunities to evaluate incremental builds. This process allows the
development team and user community flexibility in meeting requirements which change
and mature as the system is developed.
CAPABILITIES
Currently, the IADS is sized to support ten Structures engineers simultaneously, providing
the following capabilities:
• interactive and/or automated time and frequency domain analysis and display
• processing of an aggregate data input to IADS of 200,000 samples per second
• scratch storage for each user of at least 90 minutes for up to 100 parameters
• workgroup storage on a walk-away media of at least 90 minutes for up to 200
parameters
• post-flight report quality plot environment
As described in more detail in the following sections, the IADS provides the engineer with
capabilities in the areas of analysis, display, data organization, and report quality plotting.
Analysis Capabilities
In the area of test data analysis, the IADS provides capabilities to the Structures engineer
at the workgroup and individual level. The IADS analysis capabilities include algorithms
for both time and frequency domain processing, in real-time and near real-time, for both
interactive (manual) and automated modes. The engineers may choose to use the
workgroup level settings for the system or may choose to override these with their own
settings, including digital filtering, sample rate decimation, parameter threshold limits, and
parameter scaling. The engineer may also change display types, change the size of an
individual display, or drag and drop new parameters into existing displays, all of which can
be performed on the fly in real time. The engineer may perform analysis on real-time data
from the current test-point or on data from previous test points within the current or
previous missions (post-flight). The engineers may combine their analysis results with
results from others within the workgroup. The engineers may monitor real-time
parameters, both time and spectral, or they may choose to freeze a set of parameters and
perform more detailed analysis of the time or frequency domain data.
Display Capabilities
The IADS provides the engineer with a display tool set optimized to aid in the task of data
analysis and situational awareness, and displays the test data in a format which allows the
engineer to efficiently and confidently make test point clearance decisions. This display
tool set includes high fidelity time history displays eliminating the need for strip chart
recorders. The engineer can fully interact with the time history display to freeze and scroll
back in time, zoom, annotate/mark events and select data points on which to perform
requested analysis. These interactive capabilities are available through drag and drop,
toolbars, mouse selections, and menus.
In addition to time history displays, IADS provides numerous display tools for the engineer
to perform these analyses.
The IADS displays analysis results in the form of summary plots and summary tables. The
IADS plots the results of the predefined primary analysis method(s) for each summary plot
and summary table. These results can be used for comparisons of current as well as
previous flight test results to analysis model predictions, and to establish trends in the data.
Results from secondary analysis methods are also available to the engineer to compare with
predicted data.
Efficient data organization within the IADS is essential to support the analysis and display
capabilities described above and to provide the engineer with flexibility in what is
brought into and carried away from the MCR. The IADS provides the engineer with the
mechanism for importing data from previous flights and results of analytical computations
into the MCR for comparison with the current mission test data. Ongoing summary
analysis may also be imported, with results from the current mission added to the summary
information. The IADS provides each user with sufficient scratch (temporary) storage to
store at least 90 minutes of test data for up to 100 separate parameters. This data is
available to the engineer for further analysis and for comparison (overlay) with real-time
data. The IADS provides the workgroup with a “walk-away” storage capability of at least
90 minutes of test data for up to 200 parameters. The workgroup may use this storage to
transport test data of interest, user configurations, analysis results, and other necessary
data from the MCR back to their desktop for further analysis. The post-flight IADS
provides the engineer with additional data organization and archival, allowing the engineer
access to all of the data accumulated during the entire test program.
Jil Barnum
ABSTRACT
Hierarchical Data Format (HDF) is a public domain standard for file formats which is
documented and maintained by the National Center for Supercomputing Applications.
HDF is the standard adopted by the F-22 program to increase efficiency of avionics data
processing and utility of the data. This paper will discuss how the data processing
Integrated Product Team (IPT) on the F-22 program plans to use HDF for file format
standardization. The history of the IPT choosing HDF, the efficiencies gained by choosing
HDF, and the ease of data transfer will be explained.
KEY WORDS
Hierarchical Data Format (HDF), Format Standardization, F-22 Avionics Data Processing.
INTRODUCTION
Utilizing HDF as the standard file format will allow analyzed data to be used and reused
by Commercial Off The Shelf (COTS) analysis programs at any team facility without
costly or time consuming format conversion. Due to standardization, laboratory data,
Flying Test Bed data, and flight test data will be easily transferable between locations,
aiding in model updating and validation. Laboratory data obtained using validated models
will be used along with flight test data to prove specification compliance. Utilizing
laboratory data to prove specification compliance will greatly reduce the cost of flight test.
Key to using laboratory data is being able to readily compare results with F-22 open air
test results. HDF format will allow this comparison without costly reprocessing of data.
USING HDF
HDF was chosen by the Data Processing (DP) IPT on the F-22 program for file format
standardization. The DP IPT plans to make the HDF standard data available to all
laboratories that process F-22 avionics data. Global PI bus, High Speed Data Bus, 1553B,
as well as inter module (Ada) messages of interest will be recorded on an Ampex DCRsi
240 Mbps recorder, through the use of a Harris Data Acquisition Unit multiplexer. The
Data Acquisition Unit selections will be programmed by the Operational Flight Program
via a Measurement Activation Table (reference Figure 1. Using HDF).
[Figure 1. Using HDF: Global PI bus, High Speed Data Bus, inter-module messages, and 1553B data feed the Harris Data Acquisition Unit (mux/demux), which records the byte-parallel output on the Ampex DCRsi 240 Mbps recorder.]
Boeing written Ada code called Evaluation and Control (E&C) software will be used by all
laboratories, and in flight test to select, demultiplex, and convert the data to Engineering
Units. Post E&C processing is the point in the process where the Data Processing IPT will
use the HDF standard. Laboratories are dependent on the data acquisition unit to record
requested data on the DCRsi, and E&C software to retrieve and make sense of requested
recorded data. By taking advantage of laboratory dependencies on existing hardware and
software standardization (Data Acquisition Unit and DCRsi hardware, and E&C
software), the DP IPT plans to achieve standardization to HDF without having to pre-
coordinate with each laboratory. Generation of E&C parameter files triggers creation of
HDF header files. HDF header files allow the requester of the data to point to the data of
interest using COTS software, in particular PV~Wave. By following this plan the DP IPT
plans to achieve HDF compliance. Only one copy of the data will actually reside in
storage, but multiple users will be pointing to that data for use by their COTS analysis
routines.
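The shared-storage scheme can be sketched in plain Python (a schematic of the pointer idea only; it does not use the real HDF or E&C APIs, and all names are illustrative):

```python
# Schematic of the DP IPT scheme: each user request produces one "header"
# that points at shared parameter files; the data itself is stored once.
parameter_store = {}   # file name -> data, stored exactly once

def store_parameter_file(name, samples):
    """Store an E&C-style parameter file only if not already present."""
    parameter_store.setdefault(name, samples)

def build_header(request_name, parameter_names):
    """An 'HDF header' here is just a set of pointers into the store."""
    return {"request": request_name,
            "points_to": [parameter_store[n] for n in parameter_names]}

store_parameter_file("pi_bus_msg_12", [1, 2, 3])
store_parameter_file("hs_bus_msg_7", [4, 5, 6])

h1 = build_header("user_a", ["pi_bus_msg_12"])
h2 = build_header("user_b", ["pi_bus_msg_12", "hs_bus_msg_7"])

# Both headers reference the very same stored object: one copy of the data.
assert h1["points_to"][0] is h2["points_to"][0]
```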
HISTORY OF CHOOSING HDF
Early in the program the F-22 Avionics team realized that to be successful in the planned
build-up approach, flight test data would be required at each laboratory to validate models
and predictions. In May 1994 the DP IPT started researching file format standards. A
technical interchange meeting was held to consider file formats and their effects on data
processing operations. The technical interchange meeting also addressed COTS products
like PV~Wave and how they might be used by avionics engineers for analysis.
Various file formats were discussed: glass file formats, a Lockheed proprietary format;
universal data format, which was the file format used by the Lockheed Martin team in the
Demonstration/Validation portion of the F-22 program; flat files such as SANDS and C-
file used by an Air Force Flight Test Center at Edwards Air Force Base; Z and W files
which use external data representation protocol permanent files; HDF, common data
format, and Net common data format, which are public domain standards. A robust
common data format was deemed necessary by the DP IPT to ensure data consistency from
laboratory to laboratory to flight test and back to laboratories. HDF began to emerge as the
choice with the best chance of success.
The avionics DP IPT tasked the AFFTC Range Software Development Branch, led by Dr.
William Kitto, to analyze the HDF file format and make a recommendation to the IPT on a
standard file format. Dr. Kitto's team was directed by the F-22 DP IPT to test the
candidate format on the DEC Alpha platform, using PV~Wave as the interface tool. Initial
prototypes identified HDF performance concerns, but in early 1995 Visual Numerics Inc.
provided an HDF interface to PV~Wave at no cost to the F-22 program.
Test cases were conducted using both SUN and DEC Alpha computers. Test data included
five different types of data: multi file, native float, external data representation integer,
native integer, and single file formats. Various read/write scenarios were tested using C,
FORTRAN, and PV~Wave interfaces. Analysis of the results of HDF performance on
both the SUN and DEC Alpha computers showed that utilizing HDF would not
significantly impact performance.
There were added benefits of selecting HDF over any other format considered. HDF is
public domain software and is free to anyone who wishes to use it. HDF is the National
Center for Supercomputing Applications' file format standard and is being used by projects which handle larger
quantities of data than the F-22 expects. As platforms and/or interfaces change over the
long term F-22 program the benefits of choosing HDF will continue to grow because it is
not computing platform dependent. Maintenance and hosting of HDF will be supported by the
National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign,
at no cost to the F-22 program. After completing the compatibility and performance
analysis, Dr. Kitto recommended HDF to the team. In March 1995 the Data Processing
IPT, led by Dean Fox of Lockheed Martin, selected HDF as the F-22 standard file format
for both airframe and avionics. Today prototype software is running which builds HDF
header files for E&C parameter file output so PV~Wave can access the data.
PROPOSED IMPLEMENTATION
Prototype software was used to evaluate the efficiencies of the proposed HDF
implementation. Avionics analysis engineers were able to request specific recorded data to
be retrieved from the recorder for post flight analysis. Requested data will be
demultiplexed by the data acquisition unit, and pre-processed by E&C software. Parameter
files and HDF headers can be produced for any parallel interface message data requested.
A single HDF header file will be built for each request. The HDF header will point to all
E&C parameter files required to fulfill that user's request. Since multiple users can request
the same data, HDF headers share the parameter files; therefore, parameter files will be
stored only once on the DP storage system (reference Figure 2. Hierarchical Data Format).
[Figure 2. Hierarchical Data Format: several HDF header files point to a smaller set of shared parameter files, so each parameter file is stored only once regardless of how many requests reference it.]
Efficiencies gained by choosing HDF are numerous. Metadata, data that documents the
data itself, was considered a useful feature of HDF. For example, if the data
is a test result, then the metadata is the circumstances of the test. More specifically,
data includes airspeed, altitude, and heading whereas metadata includes project F-22,
airplane 4004, test 10, altitude band, and pilot notes. Flight test data is typically first
generation processed to engineering units with simple unit conversions made yielding
plots. Second, third, and fourth generation processing may be required to determine if the
data collected achieved the predicted results, validated the models, and proved
specification compliance. Metadata logs the information about how results were processed
over successive generations, what new measurements were created, and what limitations
the test had. The volume of data will actually decrease for each processing stage, but the
volume of metadata increases with each processing stage. HDF allows for metadata
tracking either included as attributes with the data or external and linked to the data. F-22
HDF uses attributes to store metadata which allows data files to be self contained.
Therefore, all information required to regenerate the results is maintained along with the
data itself in HDF attributes.
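The attribute idea can be illustrated with a minimal self-contained record (JSON stands in for HDF here purely for illustration; the field names are invented, not the F-22 schema):

```python
import json

# Sketch of a self-contained record: data plus the metadata ("attributes")
# needed to regenerate it. Field names are illustrative.
record = {
    "data": {"airspeed_kts": [412.0, 413.5], "altitude_ft": [25010, 25040]},
    "attrs": {
        "project": "F-22", "airplane": "4004", "test": 10,
        "processing_generation": 1,
        "processing_notes": "EU conversion only; no filtering applied",
    },
}

# Because the attributes travel with the data, a later reader can tell
# how these values were produced without consulting an external log.
blob = json.dumps(record)
restored = json.loads(blob)
print(restored["attrs"]["processing_generation"])  # 1
```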
PV~Wave, Mathematica, and other COTS analysis tools can enhance efficiencies of using
HDF by automatically merging together any data which exist in an HDF format. Target
time space position information, radio frequency truth (spectrum), and radio frequency
mode data can be merged with F-22 target time space position information, PI bus, 1553B,
and transducer data by the engineer using PV~Wave in an interactive or batch mode.
Data merge for flight test missions is not a trivial task. F-22 aircraft data will be compared
to other types of time tagged or event driven data. All types of data used for F-22 post
flight are required to be provided or converted to HDF. Target time space position
information, range data, radio frequency truth, and non F-22 data products will be
delivered to the F-22 DP IPT in HDF. Therefore, HDF is by default becoming a post flight
standard format for various test ranges like Edwards and China Lake. COTS analysis
routines such as PV~Wave and Mathematica can use any HDF formatted data
automatically. No data merge is required by data processing because the COTS analysis
routines will align timelines for multiple data by interpolation. The COTS can be run either
interactively by the analysis engineer or batch by the DP IPT.
The ease of data transfer is the greatest efficiency gained by format consistency.
Specialized tools will not be required at each facility to pass data from one facility to
another. For example, flight test data will be used to validate the models used in
simulation. Since laboratories and flight test are all using HDF, flight test data can be
easily fed back into any laboratory without requiring data reformatting. An “apples-to-
apples” comparison of data can be made. Once data at any facility is recorded on the
DCRsi by the data acquisition unit, requested data can be pulled off the recorder into a
parameter file by E&C software in an HDF format, and used by COTS analysis tools at
any team location.
The F-22 modeling and build-up approach minimizes the requirement for actual flight test
data. Less expensive laboratory data will be used to prove specification compliance, while
costly flight test data is used primarily to validate models.
Applying the HDF standard requires a specific implementation be adopted by the F-22
program to increase efficiency of avionics data processing and utility of the data. The
Data Processing IPT on the F-22 program is developing an HDF interface control
document. The interface control document will further clarify HDF file format
standardization to the scientific data set to gain efficiencies and ease data transfer.
Additionally, the interface control document is required to describe standard attributes.
Once the interface control document identifies standard attributes any team location could
independently decipher how the data was processed and reproduce the same results.
CONCLUSION
Testing of F-22 avionics will utilize a modeling and build-up approach to minimize use of
costly assets. Unit and integration testing will be conducted at integration laboratories,
simulation laboratories, on Flying Test Bed laboratories, and during Flight Test. Digital PI
buses, High Speed Data Buses, and inter-module messages will be instrumented to record
data digitally.
HDF is a public domain standard for file formats that is documented and maintained by the
National Center for Supercomputing Applications at the University of Illinois at
Urbana-Champaign. For more information, connect to the Internet and use a Web browser to search
for HDF to acquire full documentation.
Abstract
The challenge being faced today in the Department of Defense is to find ways to improve
the systems acquisition process. One area needing improvement is eliminating the surprises of
unexpected test data, which add cost and time to developing the system. This amounts to
eliminating errors in all phases of a system’s lifecycle. In a perfect world, the ideal systems
acquisition process would result in a perfect system. Flawless testing of a perfect system
would result in predicted test results 100% of the time. However, such close fidelity
between predicted behavior and real behavior has never occurred. Until this ideal level of
boredom in testing occurs, testing will remain a critical part of the acquisition process.
Given the indispensability of testing, the goal to reduce the cost of flight tests is well worth
pursuing. Reducing test cost equates to reducing open air test hours, our most costly
budget item. It also means planning, implementing and controlling test cycles more
efficiently. We are working on methods to set up test missions faster, and analyze,
evaluate, and report on the test data more quickly, including unexpected results. This paper
explores the moving focus concept, one method that shows promise in our pursuit of the
goal of reducing test costs. The moving focus concept permits testers to change the data
they collect and view during a test, interactively, in real-time. This allows testers who are
receiving unexpected test results to change measurement subsets and explore the problem
or pursue other test scenarios.
Key Words
Introduction
The goal of flight test is to determine if the air vehicle behaves as expected. If the aircraft
conformed to the pre-existing models and to its specification, the data would be boring,
with no unexpected results. So far, boredom has not overwhelmed any major flight test
project. Despite the increasing accuracy of models and simulations, our ability to predict
flight test results is far less than 100%. The fact is that the fidelity of our test models
cannot yet match reality.
Unexpected test results can also arise when synchronizing the many input sources
that make up the data flow for flight test. The goal is for test data systems to flawlessly
process accurate data, introducing no surprises of their own. Unfortunately, boredom has
not overwhelmed test data systems operations either.
Figure 1 shows a highly simplified view of the data flow for one common configuration
used in flight test. The aircraft being tested is one source of information about the test;
there are as many other sources as there are systems sensing and relaying the data. For a
tester to understand what is happening during the test, information is also needed about the
capabilities and programming for the airborne instrumentation, for the ground support
equipment, for the positioning information systems, and for the data communication
devices that connect them.
The challenge being faced today is how to improve the test data acquisition process end-
to-end. Looking closer at Figure 1, there are opportunities to improve the systems
acquiring the data. In three major functions telemetry is the glue that binds the reality of
the test vehicle to the virtual world of the mission control room. Telemetry from the
aircraft to the ground receiving systems relays on-board data from sensors and data bus
monitors, telemetry from global positioning system (GPS) is received and processed on-
board the aircraft to provide positioning information, and that location information can be
telemetered down to the ground receiving stations too. As a result of the enormous amount
of data being collected and transmitted, telemetry may also be the limiting data flow in the
flight testing process. The analogy is similar to connecting a fire hydrant and a fire engine
with a small diameter garden hose. There is a huge supply, a very large demand, but only a
very small flow between them. The problem is exacerbated by recent frequency sell-offs.
As a result telemetry bandwidth has become an increasingly scarce resource. This paper
explores one method of improving data flow given the increasing telemetry bandwidth
constraints.
There is no question that the air vehicles we are testing at the Air Force Flight Test Center
are more complex than ever before - as well as faster, more agile and more sophisticated
electronically. These capabilities are the direct result of software. The amount of software
code that supports flight control systems, avionics systems, and even propulsion systems
has skyrocketed. Modern aircraft are flying computers - actually, they are more like flying
computer networks. The effect of this is that in order to test the aircraft, the number of
things that must be measured and monitored has grown accordingly.
[Figure 1. Data flow for flight test. The aircraft (the unit under test) carries sensors, receivers, data encoding and multiplexing, voice, and PCM and/or FM transmission capabilities, and also records to an onboard tape unit. A portable unit loads the aircraft programming that implements the telemetry plan, and a Global Positioning System downlink provides positioning data. Ground receiving systems (including radars, video, cinetheodolites, and antennas) separate and switch the incoming data streams; FM demodulated video/audio is switched onto a high-speed network feeding the telemetry preprocessor and data distribution. Instrumentation, calibration, and telemetry databases (at least one per project, in database or other formats) and a ground support setup information database (one for all shared test range mission control room resources, retaining current and historical setup data for current projects and providing traceability) are connected via TCP/IP over the Edwards Area Network and local FDDI nets and Ethernets.]
The number of possible failure modes for any given aircraft is a function of the number of
interfaces between its components. Even with thorough testing of each component,
unexpected problems can arise when the components are assembled into the system
[Brooks, 1995]. We need methods to ensure that at least all the modes encountered during
routine operation of the system hold no surprises, plus we need to test as many non-routine
modes as dollars and time permit.
Flying hours are the major expense for a test program, so as much as possible must be
squeezed into each one. While the aircraft is in the air, the real-time data systems will
display only a small subset of the possible things being measured on the aircraft itself.
Current systems are limited to being preprogrammed and then providing a fixed, static
subset of measurements. As already discussed, the number of measurements in that subset
is limited by bandwidth.
If test planning were perfect and test results held no surprises, this fixed
subset of measurements would suffice. However,
unforeseen conditions and genuine test phenomena frequently occur. Re-flying tests to
check out phenomena that occurred on earlier tests is common. If data systems could be
reprogrammed in real-time, in response to unexpected phenomena or data, the cost savings
could be significant because the cost of re-flying the test would be eliminated.
If test conductors were able to swap measurands into and out of active real-time displays, the
tester's need to understand what is happening would also be satisfied. As well, bandwidth,
a scarce resource, would be conserved by replacing parameters within the subset on the fly
rather than transmitting all parameters. Subsets could change when the focus of
the test mission changes and when something is surprising (including instrumentation
failure or an unexpected interface problem). This moving focus concept would require that
enough engineering support information be available in the mission control room to enable
real-time informed decisions about flight safety.
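The swap discipline behind the moving focus can be modeled as keeping the active measurand subset within a fixed downlink budget (a toy model; all names, rates, and the budget figure are invented for illustration):

```python
# Toy model of the moving focus: keep the active measurand subset within a
# fixed downlink budget by swapping parameters in and out, rather than
# transmitting everything.
BUDGET = 80_000  # samples/s of downlink capacity (illustrative)

active = {"pitch": 4000, "roll": 4000, "flutter_acc_1": 8000}

def swap_in(name, rate, drop_first=()):
    """Swap a measurand into the active subset, dropping others if needed."""
    for victim in drop_first:
        active.pop(victim, None)
    if sum(active.values()) + rate > BUDGET:
        raise RuntimeError("would exceed telemetry bandwidth budget")
    active[name] = rate

# A surprise appears: focus on a suspect accelerometer, drop a nominal channel.
swap_in("flutter_acc_2", 8000, drop_first=("roll",))
print(sorted(active))  # ['flutter_acc_1', 'flutter_acc_2', 'pitch']
```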
To make the moving focus concept a reality, telemetry systems on the ground and in the
vehicle under test will need a new kind of cooperation. The current technology supports
control room displays that provide a limited moving focus concept, highlighting data of
interest out of the transmitted data stream. However, on-board systems are not yet that
flexible. Ideally, the concept would be expanded to enable the on-board system to instantly
reconfigure the content of the transmission when a problem occurs. When everything is
nominal, a certain stream would be sent, but when a surprise is detected either in the air, in
the control room or anywhere in the data acquisition system, a focused set of measurands
could be swapped into the transmitted stream. This amounts to adding the ground system
as another node on the network of subsystems under test.
The Need for Traceability
The current technology of test planning requires a step-by-step process with strict
configuration control. This approach requires a huge amount of logistical support. As a
result, existing systems are designed for setting up once and making only small incremental
changes thereafter. The moving focus amounts to adding another node to an already
complex computer network. The additional complexity will force the development and use
of sophisticated database replication and synchronization techniques. Implementing a
moving focus methodology will also require a higher level of automation, including
algorithms to generate telemetry formats given only information about the measurements
they contain [Jones, 1996; Samaan, 1995]. It will also require better bookkeeping, since
traceability will be even more critical.
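The format-generation idea cited above can be illustrated with a toy allocator that lays out a minor frame from nothing but measurement names and sample rates (a deliberately simplified sketch; real PCM formats interleave supercommutated words evenly across the frame, which this does not do):

```python
# Sketch of automatic format generation: given only measurement names and
# sample rates, lay out a PCM minor frame so each measurement gets slots
# in proportion to its rate.
def build_minor_frame(measurements, frame_rate):
    """measurements: dict name -> samples/s; frame_rate: minor frames/s."""
    frame = []
    for name, rate in sorted(measurements.items()):
        slots, rem = divmod(rate, frame_rate)
        if rem:
            raise ValueError(f"{name}: rate must divide evenly into frames")
        frame.extend([name] * slots)   # one word slot per sample per frame
    return frame

frame = build_minor_frame({"alt": 50, "pitch": 100, "acc": 200}, frame_rate=50)
print(frame)  # ['acc', 'acc', 'acc', 'acc', 'alt', 'pitch', 'pitch']
```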
Traceability is of growing importance because the data gathered during the actual mission
serves two kinds of test analysis. In addition to the real-time envelope expansion discussed
above, any anomaly uncovered will need a deeper analysis. The understanding of the data
must include the traceability to all the manipulations and transformations that have
happened between the origination point and the end result. The pyramid-based method of
iterating to final decisions about the effectiveness of the system under test is based on
increasing levels of analysis, evaluation, and abstraction as illustrated in Figure 2 [Crouch,
94]. Each higher layer summarizes the data below it, with information loss and increasing
uncertainty as the analysis becomes more abstract.
[Figure 2. The analysis pyramid: technical performance parameters in engineering units (e.g., altitude (ft), speed (kph), temperature (C), pitch (deg)) support system performance parameters (e.g., longitudinal stability, Mach number, center of gravity), which in turn support measures of performance (e.g., reliability, maneuverability).]
This concept is similar to the data warehousing concepts now being exploited by the
commercial sector. Just like Chief Executive Officers, flight testers need to see both the
big picture and to have access to the details in order to make informed decisions about
dynamic processes.
The moving focus concept breaks the model of the inflexible, statically instrumented test
system. It presents the test data as a flow in a network where test item and test conductor
are interactive - two communicating, cooperating nodes. This is a very different view from
the static, incrementally changing paradigm designed into most modern telemetry
processing systems. To make the moving focus concept a reality, methods for rapidly
reprogramming these systems will be required.
This change from a pre-planned, static situation, where the test item transmits and the
ground system receives, to a dynamic, cooperative network model means stretching the
capabilities of the systems that automate test system and test support system setup. Our
current setup processes still have many manual, labor-intensive steps: building, validating,
and verifying control room displays and software; test article instrumentation system
checkout; network configuration; etc. For the moving focus to work, these systems must
become capable of reconfiguring the test article and the control room on the fly.
The traceability requirements of the moving focus concept will drive the need for thorough,
real-time record keeping. Instead of having to maintain a history of discrete snapshots of
the configuration of the test article and the ground-based test support systems, one for each
test, a dynamic record of every change to every system in the test network must be kept
current. This resembles the complexity and concerns of on-line transaction processing
systems, including database synchronization issues, more familiar to business managers
than flight testers. In addition, the control and configuration management of this network
has direct implications on safety of flight.
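The dynamic record keeping described above behaves like an append-only transaction log from which any past configuration can be replayed; a minimal sketch (function and field names are ours, not from any fielded system):

```python
import time

# Append-only log of every configuration change, so any past state of any
# system in the test network can be reconstructed after the fact.
change_log = []

def record_change(system, key, value, t=None):
    change_log.append({"t": t if t is not None else time.time(),
                       "system": system, "key": key, "value": value})

def state_at(system, t):
    """Replay the log to recover a system's configuration at time t."""
    state = {}
    for entry in change_log:
        if entry["system"] == system and entry["t"] <= t:
            state[entry["key"]] = entry["value"]
    return state

record_change("display_1", "param", "pitch", t=10.0)
record_change("display_1", "param", "flutter_acc_2", t=25.0)
print(state_at("display_1", 20.0))  # {'param': 'pitch'}
```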
Safety and risk present the greatest barriers to implementing the moving focus. To mitigate
flight test risk, end-to-end data processing validations are conducted pre- and post-mission.
To provide this same confidence, methods and systems that can support real-time, on-the-
fly validations need to be added to the already complex test data network.
There are other proposals for improving data flow in the flight test process. One of the
most promising is on-board processing. Computers on the test vehicle could pre-process
data prior to telemetering it, thereby reducing the required bandwidth. On-board
processing really means distributed processing. Data processing could be distributed
equitably across the test network nodes. On-board data processing presents another level
of complexity including traceability, validation, and configuration management challenges.
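As a sketch of how on-board pre-processing might trade computation for bandwidth (the windowing scheme and names here are hypothetical, not taken from any flight system), a vehicle computer could reduce each window of raw samples to a few summary statistics before telemetering:

```python
from statistics import fmean

def summarize_window(samples):
    """Reduce one window of raw sensor samples to (min, max, mean) before downlink."""
    return (min(samples), max(samples), fmean(samples))

def downlink_stream(raw, window=100):
    """Yield one summary tuple per window; the downlink carries 3 values
    where the raw stream carried `window` values."""
    for i in range(0, len(raw), window):
        yield summarize_window(raw[i:i + window])
```

The bandwidth reduction (here roughly window/3) comes at exactly the cost the text warns about: the ground system can no longer reconstruct individual samples, so traceability and validation must account for what the on-board processor discarded.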
Conclusion
The moving focus concept has been presented as a method for improving data flow in the
flight test process. This concept provides for interactive testing, with the test article and
the test conductor seen as cooperating nodes on a computer network. To realize this
concept, many systems on the network will require changes. Among them, the telemetry
processing systems will need a different intended use paradigm. Telemetry vendors can no
longer provide slow, manual system setup functions. Total telemetry system
reconfiguration may be required in a matter of microseconds. This major shift in usage
may be especially difficult to support industry wide because only a few high-end telemetry
customers who require massive data streams will benefit from the moving focus concept.
The greatest challenges of the moving focus concept are those related to validation and
configuration management. All the systems on the network that are not under test must
deliver high-confidence, highly-reliable results, without surprises. It is not possible to test
the accuracy and reliability of systems under test using inaccurate, unreliable test support
systems. The test systems need to be as error-free as possible. This kind of reliability could
be achieved through the use of automated software test tools during the development
lifecycle of the test support systems.
The pressures to reduce costs and system development time will continue to drive high-end
telemetry customers who test highly complex systems requiring massive data streams to
seek methods to optimize bandwidth. The moving focus is but one such relief mechanism
that has the added benefit of allowing the tester to understand fully what is happening
during a test. As a community, we need to continue to explore alternatives to reach our
goal: The models and simulations we use and the test support systems we build provide
boring data without surprises, leaving us to concentrate on the surprises from the systems
we test.
Bibliography
Crouch, V. and Sydenham, P., “Relationship Between T&E and M&I,” in Preprints of
Papers, Third Australian Instrumentation and Measurement Conference AIM-TEC 94, The
Institute of Engineers, Australia National Conference Publication No. 94/5, (1994).
Jones, Charles H. and Gardner, Lee S., “Automated Generation of Telemetry Formats,” in
Proceedings of the International Telemetering Conference, International Foundation for
Telemetering, San Diego, CA, vol. XXXII (1996). (publication pending)
Samaan, Mouna M. and Cook, Stephen C., “Configuration of Flight Test Telemetry Frame
Formats,” in Proceedings of the International Telemetering Conference, International
Foundation for Telemetering, Las Vegas, NV, vol. XXXI, pp. 643-650 (1995).
Schweikhard, William G., “Flight Test - Past, Present and Future,” in Proceedings 22nd
Annual Symposium, Society of Flight Test Engineers, Lancaster, CA, pp. 5.6-1 to 5.6-12
(August 1991).
THE CHALLENGE OF REENGINEERING IN THE FABRICATION
OF FLIGHT ELECTRONICS
Carl de Silveira
Jet Propulsion Laboratory
California Institute of Technology
4800 Oak Grove Drive
Pasadena, CA 91109
ABSTRACT
KEY WORDS
Projects at JPL in the past have been rather autonomous, consuming resources (people,
facilities, labs, assembly areas, etc.) and sometimes reinventing many things as they went
along. This was done for expediency, so the project could have command over any item
that might affect the success of the mission. In recent years tools and materials have
changed so dramatically that projects can vary greatly from one to another in how they do
business. Each project finds itself in a different place with respect to those changes,
depending on the individual investments made by the project to upgrade or inherit what
had been done previously. This can cause difficulty when different sets of CAD tools, data
bases, methodologies, and materials enter the fabrication cycle.
When the time arrives in the project life cycle for the electronic hardware to be fabricated,
many things are occurring simultaneously. For example, drawings must be completed,
make-or-buy strategies finalized, parts lists completed, printed wire boards validated, and
so on. The effect of doing many different things at once, coupled with the inherent
diversity among the projects, is formidable.
Since multiple projects in various phases of development present a healthy challenge to the
resourcefulness of our industry partners as well as to JPL, we find the time and expense
associated with the lack of unified, or even remotely standard, interfaces is excessive.
Combine this with a set of tight schedules, and the result in the fabrication area is an
inability to perform on those schedules. This can now be changed.
Depending on the total amount and type of fabrication work to be done for a project,
interfaces and interaction can vary between two extremes. We may form an entire project
fabrication team, consisting of manufacturing and packaging engineers, as well as task
managers for off-site fabrication contracts, planners, parts specialists, assemblers, etc. Or
we may have, for example on small prototype circuit boards or units, only the fabrication
area planner, interfacing directly with the cognizant project engineer and scheduling work
into the existing infrastructure. Anything in between is also likely to exist.
One of the major cost drivers in the fabrication operation is the information we and our
industry partners have to work with, i.e., the drawings, CAD files, parts directories and
other supporting information. This often varies widely in quality, completeness and
accuracy. The better this information is, the less it costs to process the job, no matter
where it is built. The fabrication effort receives a diverse product from multiple design
sources using a varied array of CAD tools. Our inputs are rarely complete, uniform, or
homogeneous within a project itself, or completely accurate. In many cases they require
abundant work to complete the end item. This, of course, is a by-product of each project's
tendency to use a different tool, maintain its own standard parts files, use non-standard
processes and methods of developing a complete design, and start from scratch each time.
It kept us on our toes, though.
We are entering a new era of space exploration. Almost gone are the huge spacecraft,
glorious as they are, arrayed with cameras and instruments, taking five to ten years to
build, and sometimes even longer to achieve their objectives once launched. Gone, too, are
the billion-dollar budgets that accompanied them. Enter the age of smaller, quicker,
less costly programs. In contrast, the goal for these new-era craft is eighteen months to
build and launch, on significantly smaller budgets. These smaller spacecraft can be
launched on a variety of commercial, military or NASA vehicles. Where possible, their
missions are designed to return data as quickly as possible.
These changes have altered, probably for our lifetime, the way we build and even think
about spacecraft, instruments, landers and a host of other related items.
The individual projects can no longer command resources from start-up to launch, as
they once did. The tools, people, and most of the other minutiae associated with a project
become precious items in the new budgets. In the past, large projects built up a significant
infrastructure that left much heritage. Many things already existed as a result and did not
have to be invented. New projects, or follow-ons, had the full advantage of this
significant legacy. For example, a large project developed items it needed, but made them
general purpose: ASICs, a general-purpose flight computer and many other common
elements useful to new projects. Without large projects, how will these items be developed
in the future? Cooperation and sharing of resources is the only way. Projects will operate
in this way, and are currently beginning to, with a mind to pool resources for the
development of commonly needed items or solutions to common problems. The
organization, however, was not aligned for any large-scale sharing of resources; it did not
have an institutional plan for common tools, collocation, or any of the infrastructure
necessary to do this.
A reengineering effort is currently under way at JPL. Among its key products are
definitions of the processes it takes to build spacecraft and other products. This is a
carefully thought-out course that defines three major processes and the enabling items
required to make these processes work.
THE PROCESSES
Using a process method to define the activities in a project life cycle, applying the proper
resources to support those processes, and providing for sharing have the power to make
projects maximally efficient. This is the key to making these projects affordable and
buildable in the short time spans necessary. This is not a paper on these processes and how
they work, but we will define them briefly so we can see how they affect and modify the
flight fabrication activities and, most importantly, why these processes are key to the
change we have envisioned and begun to implement.
The activities in a project design effort are divided into three physical processes. There is
also an overview management process that these feed into, but for our purpose we will
consider only these three.
These are:
1. Mission System Development:
This constitutes the pre-project phase, early conceptual and early project portion.
2. Design, Build, Assemble and Test:
As stated this is the “making” part of the job.
3. Validate, Integrate, Verify and Operate:
This is the subsystem integration, system test, launch and operations portion.
These three processes physically occupy separate but adjacent areas. They share common
tools. Each process provides collocation and a process infrastructure. The fabrication of
the flight hardware occurs in the Design, Build, Assemble and Test process. This process
has at its heart a physical area called the Design Hub. In the Design Hub resides a core
cadre of specialists used by all the projects. It also contains their CAD tools, LAN
connectivity, common data bases, video conferencing and virtual collocation capabilities.
From the fabrication point of view, this now provides an ideal situation. Consider that the
packaging and manufacturing engineers are shared by the various projects. They live
primarily in the Design Hub, use common tools and are collocated. All share common
libraries for parts, footprints and lead-bending information. They use a common thermal
analysis tool for the PWBs. Design rules are now standard. They can talk to each other
easily and share common solutions across projects. They get involved with the design at
the very beginning and follow it through to completion. No more “over the wall” to
fabrication. They are able to display and share any application on screen with anyone else
at our facility, or at an industry partner's facility, in a video conference environment. This
means fabrication questions can be answered quickly, with all parties looking at or
manipulating the same CAD application. Design changes can be made in real time, with
feedback from the people actually doing the work. Prototype areas doing one-of-a-kind
builds tend to work closely with the design engineers. Even this breaks down if access to
drawings, data files, and tools is not smooth and homogeneous.
So what we have here are key driving processes that affect the success of the fabrication
effort, whether it is the limited prototypes built at JPL or the production spacecraft built at
our industry partners' facilities.
THE CHALLENGE
The series of changes now occurring at JPL amounts to a culture shift. We have discussed
how some of the information products can now come to the fabrication area in better
shape, and the improved ability to control the changes and mistakes (if any) that are part
of these products. Some areas and processes fit well into the scheme of things and should
not be changed.
An example of an area that should not be changed at JPL is the Surface Mount
Technology Lab. This lab does a low volume of one-of-a-kind assemblies for proof of
concept, or provides support for our qualification test activities. (Currently we are
assembling ball grid arrays and other LCC devices for near-Earth and deep-space
qualification, looking at survival, rework, cleaning and process control.) All of this lab's
major processes are automated, or semi-automated, operating under direct computer
control. We accept electronic data files from other areas on the lab network. The product
flow has been designed around this automated capability to ensure an efficient end-to-end
process.
Other areas warrant a complete re-look and a change from the ground up. In our case, the
areas that will change first will be driven by the ways the other interacting areas do
business. In the fabrication area, the packaging and manufacturing engineers will have
access from their facility to the Design Hub CAD and data libraries. They will have full
connectivity and access to all the data of their counterparts who live in the Design Hub
itself. This, by its nature, will change the way they view and do their present jobs. It
requires a “re-think” on our part, to remake our process to be in harmony with the
institutional processes it must interface with.
The common wisdom requires using the proper automated tools, but doing so in a way that
takes advantage of what these new tools have to offer; this makes the new process
inherently more efficient. You want to avoid merely automating old processes, which
yields only a marginal gain. A famous author called this “paving cow paths”.
Prior to this time, when data came from multiple sources, it was necessary to reduce the
data used to build the PWBs, produce an actual parts-mounting layer, and compare the
actual part information with this layout for every part. Since SMT chip parts often have
large tolerances specified on their actual size, the various libraries associated with the
different tools handled them quite differently. This can produce pad sizes that are not
appropriate for the actual parts, are not easily detectable, and make these parts subject to
tombstoning (standing up on one end). Fixing this causes a change cycle to occur,
complete with a PWB re-make and time lost. Common part libraries with appropriate pad
sizes, implemented by the manufacturing engineers, eliminate this problem. Here the
process is not just redone or automated, but fundamentally changed by common tools and
data.
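A common part library can also carry simple design-rule checks. The sketch below (the dimensions and the 0.2 mm minimum-overlap rule are illustrative assumptions, not JPL numbers) flags a chip-part footprint whose pad gap is too wide for the shortest part allowed by the size tolerance, exactly the geometry that invites tombstoning:

```python
def pad_mismatch(part_len_nom_mm, part_len_tol_mm, pad_inner_gap_mm,
                 min_overlap_mm=0.2):
    """Return True if the shortest in-tolerance part cannot land on both
    pads with at least min_overlap_mm of termination on each pad.
    Such a footprint risks tombstoning (the part standing up on one end).
    All dimensions in millimeters."""
    shortest_part = part_len_nom_mm - part_len_tol_mm
    return shortest_part < pad_inner_gap_mm + 2.0 * min_overlap_mm
```

Run against every footprint in a shared library, a check like this catches the mismatch before the PWB is made, rather than after a re-make cycle.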
The Design Hub has the common data base, its personnel interact directly with the project
designers, and all share common tools and are interconnected. The fabrication area will
operate in connectivity with the Design Hub, and this requires a rethink of several things.
Organizing the manufacturing, packaging and parts-processing people in separate areas
may have worked satisfactorily in a prior era, but the availability of new tools and
processes, and the effect of the new interfaces, requires a re-look at ways to arrange our
resources for maximum benefit. We notice this “separate area” organization is not peculiar
to JPL but occurs at our industry partners as well.
From the Fabrication Interfaces diagram we can observe what we see as the critical paths
of shared information. See Figure 1, Fabrication Interfaces. This information is of multiple
types: some from CAD, some from the data bases, some from system requirements. The
sources are many, but the information must ultimately reside in what we refer to as the
“Fabrication Knowledge Base”: that collection of things that ultimately allows hardware to
be bought and/or built correctly. From the diagram we have developed a scenario that, for
our operation, appears to break down the barriers to interaction with the new institutional
processes. This may have application elsewhere.
The previous arrangement of the fabrication areas tended to produce activity “silos”
dedicated to a single task. These areas performed well with the previous processes, since
not as much information was available to be dealt with as quickly. However, some areas
by their nature will remain dedicated to a single general activity (like clean rooms,
polymerics areas and PWB testing) but the supporting functional areas will develop a new
approach.
Figure 1
Consider the questions as posed:
What constitutes the optimum workspace set-up for performing the varied
fabrication tasks?
We have benchmarked several similar collocation areas currently in use at JPL and at a
major industry partner. We have conducted interviews with senior project people, and
have become involved with the existing interactive project areas, where common shared
space lets multiple disciplines work interactively yet independently. With this information
we have developed a proposed example of one type of layout of the fabrication
workspace. Refer to Figure 2, Fabrication Workspace Layout. Note that small groups can
cluster at the worktables to solve common problems, while people devoted to other tasks
can proceed relatively undisturbed. Many other solutions are of course possible; the idea is
to allow ample table space and gathering areas while still allowing a degree of privacy. We
think this provides the process interface with the proper facilities to make it most efficient.
We will adapt and change as we get smarter, and constantly improve.
Figure 2
CONCLUSION
ABSTRACT
The Midcourse Space Experiment (MSX) program is the premier space technology
experiment of the Ballistic Missile Defense Organization (BMDO) that addresses BMDO
system development requirements. The primary objective of the experiment is to collect
and analyze data on target and background phenomenology using three multi-spectral
(ultraviolet through infrared) imaging sensors. The program also has objectives for space-
based space-object surveillance, assessing space contamination effects, and investigating
atmospheric and space phenomenology. Effective scientific Data Management is one of
the critical functions within the MSX program organization and is key to meeting the
program objectives. The wide spectrum of objectives and requirements of the MSX
program were major drivers in the design of a Data System with a heterogeneous,
distributed processing center concept and a dual data flow path to meet sensor assessment
and experiment analysis requirements. An important technology decision that evolved from
this design was the exclusive use of workstation class computers for data processing. A
flexible, highly robust development and testing methodology was created to implement this
unique system. Companion papers in this session provide detailed descriptions of functions
of key elements in the Data System operations.
Introduction
The Midcourse Space Experiment (MSX) is a multi-year space demonstration and
validation experiment sponsored by the Ballistic Missile Defense Organization (BMDO)
[formerly the Strategic Defense Initiative Organization (SDIO)]. The MSX program was
originally created to support the development of Midcourse Sensors (MCS) for the BMDO
Space Defense System (SDS) architecture with the cooperation of the Air Force Space and
Missile Systems Center (AFSMC) and U.S. Army Space and Strategic Defense Command
(USASSDC). The MSX System-Derived Requirements Document (SRD) and Science
Modeling Requirements Document (SMRD) provide detailed descriptions of the program
objectives and requirements. A summary of the MSX mission objectives includes:
The primary sensor is the SPIRIT III instrument, which is a cryogenically cooled IR
radiometer and interferometer employing the latest infrared sensor technology covering
wavelengths from Midwave IR (MWIR) to Very Long Wavelength IR (VLWIR). MSX
has successfully demonstrated the ability to build, integrate, and operate extended duration
space-based IR and space surveillance sensors. The SPIRIT III sensor specifically
demonstrates IR focal plane technology being developed by BMDO for eventual
operational systems, and the design of re-imaging optics capable of rejecting bright, near-
off-axis sources.
The primary vendors involved in the hardware development of the system included Johns
Hopkins University, Applied Physics Laboratory (JHU/APL) for overall satellite platform
design, development, integration and operation, Utah State University/Space Dynamics
Laboratory for delivering the SPIRIT III sensor, JHU/APL for delivering the UVISI and
Contamination sensors, and MIT/Lincoln Laboratory for delivering the SBV sensor. Each
of these major organizations was supported by many other sub-contractors for the
development of the hardware systems.
MSX Experiment Categories
To address the BMDO requirements and meet the MSX mission objectives, the MSX
mission includes several experimental measurement programs. A single Principal
Investigator (PI) is assigned to lead a team of experts to design these experiments.
Target Measurements The target experiments demonstrate the ability to observe and
track midcourse objects against realistic backgrounds at system-representative ranges,
geometries, resolutions, and frame rates, using existing LWIR technology with legacy to
future operational sensors. Ultraviolet and visible data collection and sensor demonstration
also support target modeling at these wavelengths.
Earthlimb, Auroral, and Terrestrial Background Measurements The systems used for
detection and tracking of targets must be able to discern the targets as seen against
naturally occurring backgrounds. These natural backgrounds are a source of additional
photon noise and clutter that can degrade the ability of the detection systems to find a
target in the background. The design, development and testing of detection and tracking
systems must include an assessment of the limitations of Earthlimb, auroral, and terrestrial
backgrounds. MSX is providing a statistically complete backgrounds data base for direct
use in ground demonstrations and for model development and validation.
Onboard Signal and Data Processing The Onboard Signal and Data Processor (OSDP)
demonstrates real-time signal and data processing of multi-color LWIR sensor data
onboard the MSX spacecraft in system representative scenarios using flight qualified,
radiation hardened processors. The OSDP system performs time-dependent and object-
dependent signal processing of data from the SPIRIT III radiometer to identify targets and
target tracks.
The major components of the SPIRIT III LWIR instrumentation package consist of: an
extremely high off-axis-rejection, cryogenically cooled telescope that has mirror scan
capabilities to provide a 1x6 degree field-of-regard; a cryogenically cooled, 5-color, high-
spatial-resolution, radiometer system covering wavelengths of 6 to 26 microns (µm); a
cryogenically cooled, 6-channel, high-spectral-resolution interferometric spectrometer
covering wavelengths of 4 to 28 µm; and a long-life, solid-hydrogen cryogenic dewar/heat
exchanger.
The Ultraviolet/Visible Imagers and Spectrographic Imagers (UVISI) are a suite of five
spectrographic imagers and four two-dimensional imagers. These sensors have spectral
and imaging capabilities covering wavelengths from the far ultraviolet (~ 0.1 µm) to the
near infrared (~ 0.9 µm). The spectrographic imagers (SPIMs) provide spectral coverage
from the vacuum ultraviolet (~ 0.1 µm) to the near infrared (~ 0.9 µm) with high spectral
resolution (~ 0.001 µm) to allow the spectral decomposition of point sources such as
unresolved plumes and re-entry vehicles at long ranges. The UVISI imagers support
measurement objectives that require a narrow field of view (NFOV) and a wide field of
view (WFOV). One NFOV/WFOV imager pair operates in the ultraviolet and one in the
visible, with selectable filters to provide limited bandwidth images. The NFOV imagers
have a field of view (FOV) and pixel size similar to the field-of-regard and detector size of
the SPIRIT III radiometer. UVISI instruments can operate in a number of modes to
optimally observe diverse phenomena.
Space-Based Visible Sensor
The Space-based Visible (SBV) payload integrates advanced visible band sensor
technologies into the design, fabrication, and testing of a spaceborne imaging system for
space surveillance. The key imaging components of the SBV sensor include a 6-inch
off-axis re-imaging telescope, a 4-element frame-transfer charge-coupled device (CCD)
focal plane, focal plane electronics, and an on-board signal processing system to perform
on-board object selection.
Contamination Sensors
Limiting spacecraft contamination and maintaining cleanliness during spacecraft design
and integration were major objectives of the MSX program. The MSX Contamination
Experiment (CE) is
made up of five instruments that directly measure the neutral, ion and particulate
environment around the MSX satellite. The MSX mass spectrometer contains two sensors
which record ion and neutral gas densities ranging in mass from 2 to 150 amu. The cold-
cathode pressure gauge records ambient pressure from 10⁻¹⁰ to 10⁻⁵ Torr. There are four
Temperature-Controlled Quartz Crystal Microbalances (TQCM) and one Cryogenically-
cooled Quartz Crystal Microbalance (CQCM) that measure the adsorption or desorption of
molecular contamination on the satellite and SPIRIT III mirror surfaces. A 500-watt
short-arc xenon lamp, used with the UVISI WFOV visible imager, detects particles in the
near-field environment of MSX. A krypton lamp experiment uses the UVISI
spectrographic imager to measure the water vapor concentration near the MSX satellite.
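The quartz crystal microbalances described above infer deposited mass from the shift in crystal resonant frequency. A minimal sketch of that conversion uses the textbook Sauerbrey relation with the standard sensitivity factor for a 5 MHz AT-cut crystal (an assumption on our part; the actual MSX TQCM/CQCM constants are not given here):

```python
def qcm_mass_loading(delta_f_hz, sensitivity_hz_cm2_per_ug=56.6):
    """Convert a QCM frequency shift (Hz) into areal mass loading (ug/cm^2)
    via the Sauerbrey relation. Deposition lowers the resonant frequency,
    so a negative shift maps to positive deposited mass. The default
    sensitivity factor is the textbook value for a 5 MHz AT-cut crystal."""
    return -delta_f_hz / sensitivity_hz_cm2_per_ug
```

For example, a shift of -56.6 Hz under these assumptions corresponds to roughly 1 µg/cm² of adsorbed contamination.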
All the MSX program objectives require collection and analysis of data, so effective Data
Management is necessarily one of the critical functional elements in the program. The Data
Manager has been a key member of the MSX program organization from the very
beginning. The MSX Data Management responsibility is to enhance the process of
transforming the data from collection and acquisition, through processing and analysis, into
information and knowledge. The value of the results obtained from the analysis of the data,
and the legacy created for applications to future missions, will determine the true success
of the MSX program. The goal of MSX Data Management is to design and implement a
Data System that will actively support this results-oriented philosophy.
The MSX Data System was built using a requirements-driven methodology. A key
responsibility of the Data Management Team was to identify and understand all the
requirements that are imposed on the Data System. The Data System design is based
principally on the requirements of the BMDO, the MSX Program Manager, the PIs, and
the sensor developers. However, this Data System design considers all potential users of
the MSX data. These users include other DOD organizations ... the largest potential
constituency of the MSX program.
(Figure: requirements sources feeding MSX Data Management, including the
Systems-Derived Requirements Document, the Science Modeling Requirements
Document, the PI Experiment and Data Analysis Plans, the DCATT Data Certification
Requirements, and the Sensor Vendor Instrument Team Requirements.)
The MSX Systems-Derived Requirements Document (SRD) and the Science Modeling
Requirements Document (SMRD) state the detailed requirements that form the foundation
of the design and development of the MSX program. These documents describe the
questions the MSX program must address and include considerable detail on the specific
experiments and types of experimental data needed. The PIs are responsible for the
analysis and validation of the MSX data. The PI Experiment Plans and Data Analysis
Plans specify the approaches used, which, in turn, drive the detailed design of the Data
System.
A unique aspect of the MSX program is the creation of a Principal Investigator position for
the certification of the MSX data and assurance that technology transfer objectives are
met. Certification requires that the techniques and algorithms used to process and calibrate
the data have been reviewed for scientific and technical soundness and are approved by
the Program for use by all data users. The Data Certification and Technology Transfer
(DCATT) Team plays a major role interacting with Data Management, concurring in Data
System requirements, and certifying key parts of the final Data System processing
software.
The procedures used in both pre-launch and post-launch sensor calibration and data
conversions must be certified by the DCATT Team. A DCATT representative is
specifically assigned to work with each sensor vendor's Performance Assessment Team
(PAT), overseeing the calibration process, developing conversion and calibration
algorithms, monitoring and assessing the sensor performance, determining the calibration
factors, and providing the software packages for implementing them. These software
packages are essential products of the Data System.
The baseline program requirements were captured and documented in a Preliminary Data
Management Plan (DMP) approved and published 15 July 1991. The DMP provided the
basic road map for the development of the Data System. After the Preliminary DMP was
released there were many changes in National Security needs, organizational structure, and
system development requirements that influenced the final Data System design and
development. But the development process had to remain flexible and responsive to the
changes and continuing evolution that are inherent in any experimental program. The basic
principles, policies, and design concepts defined in the Preliminary DMP still served in
meeting the demands of these changes.
The MSX program requirements for rapid verification processing of large volumes of data
(~12 gigabytes per day) and continuous sensor characterization by the sensor developers,
together with the functional demonstration objectives for new systems technology, led to
the design of a distributed data processing system. This approach differs from many other
large experimental programs, which create a central facility to process and verify the
experimental data. Also, the unique roles assigned to the sensor developers in the data
verification process and to the DCATT in overseeing the certification of that process were
important factors that needed to be addressed in the final design and development process.
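The stated ~12 gigabytes per day puts a floor on each processing center's sustained throughput; a back-of-envelope check (the daily volume is from the text, the arithmetic is ours):

```python
def sustained_rate_bps(bytes_per_day):
    """Average processing rate (bytes/s) needed to clear one day of
    data within 24 hours, i.e. to keep up with the incoming stream."""
    seconds_per_day = 24 * 60 * 60  # 86,400 s
    return bytes_per_day / seconds_per_day

# ~12 GB/day works out to roughly 139 kilobytes per second, sustained.
rate = sustained_rate_bps(12e9)
```

Any slower, and the backlog grows without bound; this is the constraint behind the requirement, stated below, that each DPC process a day of experimental data within 24 hours.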
The MSX Data System architecture provides a parallel flow of data to meet the program
requirements. In past programs, the sequential data processing and verification functions
often resulted in delayed data access. For the MSX program, the data received at the
telemetry ground site are distributed simultaneously in a concurrent, parallel flow to the
sensor vendor Data Processing Centers (DPCs) for verification and the BMDO Science
and Technology Information (STINFO) Centers for distribution to the PI Data Analysis
Centers (DACs) for analysis.
The rationale behind this distributed and parallel data flow design is driven by several
objectives. First, the critical technology demonstration objectives of the MSX program
require careful verification of the sensor data. The sensor vendors are the most
knowledgeable resource for monitoring and assessing the sensor performance on orbit and
verifying the quality of the data. The MSX program assigned them the responsibility for
the continual monitoring of sensor performance throughout its operational lifetime.
Figure 3. Top-level representation of the MSX Data System, illustrating the parallel flow
and distributed data processing concept developed for the MSX Data System. (The
diagram shows data flowing from the MCC/MPC to the Sensor Data Processing Centers
and the BMDO STINFO Centers, with verification updates and phenomenology models
and codes passing to the PI Data Analysis Centers.)
The MSX distributed data processing
concept is a logical extension of these
responsibilities. This design concept puts the sensor vendors in one data flow path to
verify the sensor data and to produce the data products and data bases that are needed to
assess the sensor performance. The DPC created at each sensor developer's facility was
designed to process, on average, a day of experimental data within 24 hours and distribute
the verification and sensor performance results. These products are returned to the
STINFO Centers for archive and distribution.
The second objective is to expedite data access to the PI teams for analysis and
interpretation. This supports rapid assessment of MSX observations in view of urgent
system design issues and opportunities for re-planning and re-execution of experiments.
The second data flow path of the architecture sends the data to the BMDO STINFO
Centers, which serve as a single distribution point for rapid delivery of the data to the PI
DACs and other MSX community users.
The third objective is to allow the PIs, each with unique requirements and methodologies
for performing data analysis and validation, the opportunity to develop a customized DAC
that can expand on existing capabilities and the expertise already available in each
research team.
Data System Development Methodology
With so many organizations involved in the Data System development, it was important
that the MSX Program Manager and the Data Manager have a clear view of the
development process to ensure that requirements were met and appropriate trade-offs
could be made when constraints on resources bounded what could be achieved and which
requirements could be satisfied. There were many challenges to creating this methodology.
It had to be acceptable to several different organizations that already had their own
software development structures. It had to be disciplined and detailed enough to provide
confidence that requirements were being met.
A major component of the Data System is the DCATT certified sensor calibration and data
verification software used by the DPCs to process the data and to produce various data
products and data bases. To meet the DCATT certification requirement, each DPC was
required to have a formal configuration management (CM) and quality assurance (QA)
program for all launch critical and mission critical verification and sensor calibration
software.
A major issue in the system wide development methodology was merging individual
design decisions of each Center within a common set of technology standards and
capabilities. The standards needed to be flexible enough to allow each Center to work
within the constraints of its sponsoring organization and remain compatible with existing
systems. But they also needed to provide the basis for compatibility across the MSX Data
System for interoperability and extensibility to exploit new technology. Several key
technology decisions made early in the program have served the program well into 1996.
The first standard gave each Center the flexibility to choose the specific hardware vendors
and the capabilities of their systems based on performance requirements and resources.
Although potentially a risky decision at the time of implementation, the growth in Open
Systems technologies and workstation capabilities has proven it to be an important plus for
the MSX program. This concept is a cornerstone of the entire Data System
architecture. Its impact is felt throughout the entire program and is critical for
implementing the DCATT concepts of verification and calibration certification.
The Data System provides data distribution channels and processing software to take the
data from the raw telemetry (Level 0) through verification of the digitally compatible data
(Level 1) and to apply the appropriate calibration algorithms and coefficients to produce
calibrated engineering data (Level 2). With five sensor system DPCs, two BMDO
STINFO Centers supporting MSX, five PI DACs, plus other institutions receiving or
distributing MSX data routinely, there are over 30 different interfaces.
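The Level 0 to Level 2 processing chain described above can be captured in a small sketch (an illustrative enumeration, not flight software):

```python
from enum import Enum

# Sketch of the MSX data product levels described in the text.
class DataLevel(Enum):
    LEVEL_0 = "raw telemetry"
    LEVEL_1 = "verified, digitally compatible data"
    LEVEL_2 = "calibrated engineering data"

# The Level 1 -> Level 2 step applies the appropriate calibration
# algorithms and coefficients to the verified data.
for level in DataLevel:
    print(level.name, "-", level.value)
```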
Because of the highly distributed nature of the MSX Data System, and the asynchronous
development schedules of the different Centers, MSX had to develop a unique approach to
system level testing of the MSX Data System. Due to limited resources, the testing had to
be coordinated with and build on tests being performed by other functions in the overall
program development. Many of the tests were developed around spacecraft integration
tests. The Data System Test (DST) program was not a single, formal test event, but rather
a test process that included selecting other scheduled tests, monitoring their execution, and
using their results to satisfy the DST objectives.
There were also many functions and products that could not be tested until after launch
with operational data flows. These functions and products had to be identified for post
launch testing and confirmation of performance.
System Acceptance Reviews (SARs) were the last set of reviews coordinated by the MSX
Data Manager to assess the readiness of the entire MSX Data System. The DSTs were
usually not performed under operational constraints of timelines and manpower, or in an
environment of continual data flow, so the DSTs were not a totally effective test of the
capability of each of the component Centers.
The challenge of creating the SAR process for the MSX Program’s unique system
structure was to provide enough visibility into the final operational capability of the
component Centers within the constraints of time and the resources available to each
center. The resulting reviews consisted of a blend of formal presentations on requirements
status, formal documentation reviews and audits of hardware and software, and
demonstrations of a “day in the life” of a center.
The companion papers in this session provide specifics on the important aspects of the on-
board data collection, ground acquisition, and verification processing that are critical to
experiment validation. The JHU/APL had primary responsibility for the integration of the
satellite and sensors, the development of the ground station for spacecraft operations and
telemetry processing, and operation of the spacecraft and ground station. The design and
operation of the spacecraft recording and telemetry systems have a major impact on the
overall Data System design and operation. The paper by Deboy et al. describes key
components and functionality of the spacecraft command and data handling system and the
telemetry acquisition systems at JHU/APL. The paper by Good et al. discusses the unique
operations planning and scheduling of spacecraft experiments created for the MSX
program and the operations concepts for recording and downlinking of data. The paper by
Harvey and Baer describes the daily operational control of the spacecraft at the Mission
Operations Center at APL, the tools developed to use spacecraft telemetry to monitor
operations, and the ground processing systems created to process and distribute the sensor
and ancillary spacecraft data. The last three papers present the unique capabilities
developed at the sensor DPCs to process and verify the sensor data and to support
experiment planning and spacecraft operations.
References
Mill, J.D., R.R. O’Neil, S. Price, G.J. Romick, O.M. Uy, E.M. Gaposchkin, G.C. Light,
W.W. Moore, T.L. Murdock, A.T. Stair, “Midcourse Space Experiment: Introduction to
the Spacecraft, Instruments, and Science Objectives,” Journal of Spacecraft and Rockets,
Vol 31, No 5, Pages 900-907, 1994.
Midcourse Space Experiment Spacecraft and Ground Segment
Telemetry Design and Implementation
Abstract
This paper reviews the performance requirements that provided the baseline for
development of the onboard data system, RF transmission system, and ground segment
receiving system of the Midcourse Space Experiment (MSX) spacecraft. The onboard
Command and Data Handling (C&DH) System was designed to support the high data
outputs of the three imaging sensor systems onboard the spacecraft and the requirement for
large volumes of data storage. Because of the high data rates, it was necessary to construct
a dedicated X-band ground receiver system at The Johns Hopkins University Applied
Physics Laboratory (APL) and implement a tape recorder system for recording and
downlinking sensor and spacecraft data. The system uses two onboard tape recorders to
provide redundancy and backup capabilities. The storage capability of each tape recorder
is 54 gigabits. The MSX C&DH System can record data at 25 Mbps or 5 Mbps. To meet
the redundancy requirements of the high-priority experiments, the data can also be
recorded in parallel on both tape recorders. To provide longer onboard recording, the data
can also be recorded serially on the two recorders. The reproduce (playback) mode is at
25 Mbps. A unique requirement of the C&DH System is to multiplex and commutate the
different output rates of the sensors and housekeeping signals into a common data stream
for recording. The system also supports 1-Mbps real-time sensor data and 16-kbps real-
time housekeeping data transmission to the dedicated ground site and through the U.S. Air
Force Satellite Control Network ground stations. The primary ground receiving site for the
telemetry is the MSX Tracking System (MTS) at APL. A dedicated 10-m X-band antenna
is used to track the satellite during overhead passes and acquire the 25-Mbps telemetry
downlinks, along with the 1-Mbps and 16-kbps real-time transmissions. This paper
discusses some of the key technology trade-offs that were made in the design of the system
to meet requirements for reliability, performance, and development schedule. It also
presents some of the lessons learned during development and the impact these lessons will
have on development of future systems.
Keywords
The MSX mission is sponsored by the Ballistic Missile Defense Organization and is an
experimental data gathering satellite. It is meant to provide the data required to design
future space-based missile defense systems. The primary mission is to gather ballistic
missile target signatures during the midcourse phase of flight (Figure 1). Secondarily, the
spacecraft will gather general space background data to enhance the understanding of the
clutter environments that ballistic missiles will have to be tracked through. The
background measurements are statistical and must be characterized over a long period that
will include seasonal changes. Therefore, the lifetime of the MSX is designed for 5 years,
with the cryogenically cooled Spatial Infrared Imaging Telescope (SPIRIT III) instrument
limited to 15 months by its cryogen supply.
The MSX carries 12 optical sensors. SPIRIT III spans the wavelength region of 2.6 to 28
µm using a 5-band radiometer and a 6-band interferometer. The Ultraviolet and Visible
Imagers and Spectrographic Imagers (UVISI) span the wavelength region of 110 to
900 nm. This instrument consists of four ultraviolet and visible, wide and narrow field of
view imagers and five spectrographic imagers. The Space-Based Visible (SBV)
instrument is a single wideband visible-band sensor that covers the wavelength range of
300 to 1000 nm. The data from all of these sensors must be gathered simultaneously.
Therefore all sensors are coaligned and pointed by the spacecraft motion. Figure 2 shows
the spacecraft configuration with all the sensors mounted on the top end and all pointing in
the +X direction.
There is also a mission requirement to gather raw focal plane array (FPA) data in addition
to any processed images. These data will allow future image algorithm development to
continue on the ground with the benefit of real raw space data for algorithm testing.
Finally, the MSX orbit was chosen to obtain target signals in realistic clutter backgrounds,
which is essential to the MSX mission goals. The MSX orbit is near polar and nearly Sun
synchronous, at an altitude of 900 km and an inclination of 99.2°. Launch is on a Delta-II
vehicle out of Vandenberg Air Force Base.
Future operational space defense systems will surely not require the complete spectrum
used in MSX. However, MSX is meant to provide the data to better assess what future
systems will require.
A requirement for a science telemetry rate of 25 Mbps is derived from the large number of
imaging sensors combined with the requirements to simultaneously gather raw FPA data
from all sensors. The instruments produce image-processed data and compressed data as
well as raw data. The Command and Data Handling (C&DH) System collects all these
data for recording. The more routine background experiments do not require the high
temporal rates and therefore can implement a 5-Mbps science telemetry gathering mode.
Figure 2. Orbital configuration of the MSX spacecraft. (MLI, multilayer insulation;
WFOV, wide field of view; NFOV, narrow field of view; SPIRIT III, Spatial
Infrared Imaging Telescope III; UVISI, Ultraviolet and Visible Imagers and
Spectrographic Imagers; SBV, Space-Based Visible; RF, radio frequency; OSDP,
Onboard Signal and Data Processor; TT&C, telemetry, tracking, and control.)
The lower rates will allow longer-duration experiments or a larger number of experiments
before tape recorder playback is required.
The three main instruments have special experiments which utilize only that particular
instrument. To take advantage of that fact, special 5-Mbps formats have been implemented
that focus most of the data bandwidth to that instrument. Figure 3 shows the instrument
bandwidth allocations for the prime science 5- and 25-Mbps modes.
The tape recorders had to be sized to collect prime science telemetry at the maximum data
rate of 25 Mbps for the full duration of a ballistic missile target track experiment. The
duration of a target flight is about 30 min, and adding some overhead for pre- and
postevent star calibrations resulted in a 36-min (or 54-Gb capacity) recording requirement.
The 5-Mbps modes will allow five times that duration, or 3 hr, before a tape is full. The
playback to the ground station is at 25 Mbps, independent of recording rate, to minimize
downlink time.
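The sizing arithmetic above can be checked with a short script (a sketch using only the rates and durations stated in the text):

```python
# Tape recorder sizing check for the MSX prime science modes.
HIGH_RATE_BPS = 25e6   # 25-Mbps prime science recording rate
LOW_RATE_BPS = 5e6     # 5-Mbps background mode
CAPACITY_BITS = 54e9   # 54-Gb capacity per tape recorder

# 30-min target track plus pre-/postevent star calibrations -> 36 min.
record_minutes = CAPACITY_BITS / HIGH_RATE_BPS / 60
print(f"High-rate record time: {record_minutes:.0f} min")   # 36 min

# The 5-Mbps modes allow five times the duration before a tape is full.
low_rate_hours = CAPACITY_BITS / LOW_RATE_BPS / 3600
print(f"Low-rate record time: {low_rate_hours:.0f} hr")     # 3 hr
```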
MSX Spacecraft Description
The MSX spacecraft stands about 17 ft tall and about 5 ft square, not including the solar
panels, as shown in Figure 4. It weighs about 6000 lb. The solar arrays provide 1200 W;
with maximum eclipses they provide about a 900-W orbit average power capability. A 50-
Ahr battery enables experiments to use up to 2.5 kW and allows experiments to be
performed without the benefit of solar input for
eclipses of up to 30 min. As shown in Figure 2,
the spacecraft is divided into three sections: the
instrument section (IS), the truss structure, and
the electronics section (ES). The IS (at the top)
carries all of the instruments on the IS cube
wrapped around the SPIRIT III instrument.
The MSX Data Handling System (DHS) gathers, formats, and outputs real-time science
and housekeeping data to generate three output data streams: (1) a 25-Mbps (or 5-Mbps)
prime science data stream containing imager, processor, and housekeeping data, which is
transmitted in real time or stored on the spacecraft tape recorder and downlinked over the
X-band link; (2) a 1-Mbps wideband downlink data stream containing snapshot imager
data and housekeeping data, which is transmitted in real time over the S-band link; and (3)
a 16-kbps narrowband downlink data stream containing spacecraft housekeeping data
and/or processor memory dump data, which is transmitted in real time over the S-band
link. The DHS provides four selectable prime science data formats (one 25-Mbps format
and three 5-Mbps formats), three selectable wideband data formats, and ten selectable
narrowband data formats. The prime science, wideband, and narrowband data formatters
are independently controlled, and all three can be operated simultaneously.
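The three output streams and their selectable formats can be summarized in a small table (names and structure are illustrative, not flight software identifiers):

```python
# Sketch of the three MSX DHS downlink streams described in the text.
DOWNLINK_STREAMS = {
    "prime_science": {"rates_bps": (25_000_000, 5_000_000),
                      "link": "X-band", "formats": 4},   # one 25-Mbps, three 5-Mbps
    "wideband":      {"rates_bps": (1_000_000,),
                      "link": "S-band", "formats": 3},
    "narrowband":    {"rates_bps": (16_000,),
                      "link": "S-band", "formats": 10},
}

# The three formatters are independently controlled and can run at once.
for name, stream in DOWNLINK_STREAMS.items():
    print(name, stream["link"], stream["formats"], "formats")
```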
The MSX DHS performs several critical spacecraft functions, including maintaining and
distributing mission elapsed time (MET) and universal time (UT), recording and
downlinking selected critical spacecraft housekeeping data, and providing spacecraft fault
protection and autonomous control via a dedicated link to the MSX command system. The
DHS (Figure 5) consists of the data formatters with associated interface electronics, a
housekeeping data gathering subsystem, clock generation and timing chain electronics, and
a microcomputer to process commands and keep mission time.
The prime science formatter processes real-time, high-rate data from the SPIRIT III
instrument, the UVISI instrument, and the SBV instrument. It also processes data
previously transferred to and stored in the DHS from the attitude and tracking processors,
contamination experiments, Onboard Signal and Data Processor (OSDP), and
interferometer as well as data (sync word, frame count, etc.) which originated within the
DHS. The formatter multiplexes the data from the various data sources into a single data
stream at the output bit rate in the selected science output format (see 25-Mbps format,
Figure 6).
Frame markers, read-out gate signals, and clocks used to gather and format the high-speed
data are generated in the data system and distributed to the data sources. Variations in the
cable, driver, and receiver delays, as well as variations in the data source gate propagation
delays in the processing of these signals, preclude formation of a contiguous output data
stream. The prime science formatter senses the total data gathering delay for each high-
Figure 5. The MSX Data Handling System.
Figure 6. MSX’s X-band prime science minor frame data format (high-rate mode).
speed data source and uses the results to control series programmable delay elements to
equalize the delays from all sources and permit formation of a contiguous data output (U.S.
Patent No. 5,379,299).
Prime science format is selected when a command message is received via the MSX
command processor and the DHS microcomputer electronics. In each prime science
format, a minor frame (Figure 6) contains information from each of the imagers,
experiments, and onboard processors in addition to a partial housekeeping data set. Each
major frame of data consists of 360 minor data frames and contains one complete
housekeeping record. Format and rate selection transitions occur only at major frame
boundaries upon generation of a major frame pulse. The prime science formatter
determines the output data rate (25 Mbps or 5 Mbps) from the format selection command.
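Because each major frame carries one complete housekeeping record and a new record is gathered every second, a one-second major frame period is implied; under that assumption (inferred here, not stated explicitly in the text) the frame rates follow directly:

```python
# Frame-rate arithmetic implied by the text: 360 minor frames per major
# frame and one housekeeping set per second suggest a ~1-s major frame.
MINOR_FRAMES_PER_MAJOR = 360
MAJOR_FRAME_PERIOD_S = 1.0            # assumed, inferred from the text

minor_rate = MINOR_FRAMES_PER_MAJOR / MAJOR_FRAME_PERIOD_S
bits_per_minor = 25e6 / minor_rate    # at the 25-Mbps output rate
print(f"{minor_rate:.0f} minor frames/s, ~{bits_per_minor:.0f} bits each")
```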
The prime science formatter contains a dedicated housekeeping first-in first-out (FIFO)
memory, which is loaded with a new, complete housekeeping data set every second by the
DHS housekeeping data-gathering subsystem and is read out by the formatter electronics.
The housekeeping data memory consists of ping-pong FIFOs, with the housekeeping data-
gathering subsystem loading one FIFO while the formatter reads the data previously
loaded into the alternate FIFO. Each (formatter) major frame pulse reverses the FIFO
(load, readout) selection, with the result that all MSX downlink data frames contain the
complete housekeeping image gathered during the second prior to the major frame pulse.
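The ping-pong buffering scheme can be sketched in a few lines (the class and its names are illustrative, not the flight hardware):

```python
from collections import deque

class PingPongHousekeeping:
    """Sketch of the DHS ping-pong housekeeping FIFOs: the data-gathering
    subsystem loads one FIFO while the formatter reads the other, and each
    major-frame pulse swaps the roles."""

    def __init__(self):
        self.fifos = (deque(), deque())
        self.load_idx = 0  # FIFO currently being loaded

    def load(self, housekeeping_words):
        # Gathering subsystem writes a complete set once per second.
        fifo = self.fifos[self.load_idx]
        fifo.clear()
        fifo.extend(housekeeping_words)

    def major_frame_pulse(self):
        # Reverse the (load, readout) selection at the frame boundary.
        self.load_idx ^= 1

    def read_out(self):
        # Formatter reads the set gathered during the prior second.
        return list(self.fifos[self.load_idx ^ 1])
```

After `load(...)` and a `major_frame_pulse()`, `read_out()` returns the set loaded during the previous second, matching the behavior described above.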
Figure 7. The prime science data formatter. (ROG, read-out gate; FIFO,
first in first out.)
A hardware frame counter keeps the minor frame count and inserts it into the downlink
data stream each minor frame.
In addition to real-time imager data, the prime science formatter processes low-rate data
from nine data sources including the onboard tracking and attitude processors, the
contamination experiments, the OSDP electronics, and the interferometer instrument. Since
the bit allocation for each source is relatively small, data are transferred between the low-
rate sources and the prime science formatter using a low-speed RS 422 interface with a
resistor–capacitor termination network. This interface permits the low-rate instruments to
be designed without the high-speed, high-power, negative-voltage interface electronics
required for imager data transfers. The prime science formatter distributes major frame and
half minor frame markers to each low-rate data source, with data transfers consisting of
data-source-generated clock, data, and enable signals occurring during the second half of
each minor frame. The transferred data are loaded into dedicated FIFO memory in the
DHS prime science formatter using the data-source-generated transfer clock (f ≤ 1 Mbps).
They are then read out from the FIFO memory by the prime science formatter at the output
bit rate (25 Mbps or 5 Mbps) during the appropriate data window in the first half of the
next minor frame. Radiation-hardened high-speed “serial in-serial out” FIFO memories and
output level shifters are used for this application.
Unlike the interfaces with the low-rate data sources, output data from the three onboard
imagers is not buffered, but is read out in real time and placed directly into the appropriate
downlink data stream window. Data transfers from the imagers to the DHS prime science
formatter are controlled by the formatter, which generates frame pulses, read-out gates,
and read-out clock signals to each of the imagers. Output data are (internally) loaded by
each imager data source on receipt of a minor frame pulse and are read out serially by the
prime science formatter. The imager output electronics shifts output data bits on the
negative going edge of the clock from the prime science formatter while data bits are
latched in the formatter electronics on the rising clock edge. Read-out gate (ROG)
transitions occur during the negative phase of the data transfer clock.
Prime science data transfer electronics for all signals between the prime science formatter
and the imager electronics consists of an MC10501 emitter-coupled logic (10K ECL)
driver element and an MC10515 (10K ECL) receiver element connected by 77-Ω twinax
cable. Resistors and diodes are included in the interface design to, among other things,
match the cable impedance and provide for a proper voltage at the receiver when the driver
is unpowered due to DHS redundancy and/or imager operational state.
Distances between the MSX DHS formatter electronics and the imager output electronics
range from about 3 to 15 ft. Expected cable propagation delays as well as electronics
delays for data transfers between the imagers and the prime science formatter are
summarized in Table 1. The total path delay represents the time between a selected (ROG
active) imager shift clock edge and the appearance of valid (shifted) data at the input to the
DHS prime science formatter reclock latch. The difference between minimum and
maximum electronics delays results primarily from fabrication process effects (that is, part-
to-part variations) and temperature effects, while the cable propagation uncertainty results
primarily from the package location and cable routing.
The maximum total path delay for data from each of the three imagers (113.4 ns) and the
uncertainty in the path delay (113.4 – 33.3 ns, or 80.1 ns) preclude formation of a
contiguous 25-Mbps output data stream. Shifting serial imager data on one clock edge and
Table 1. Prime Science Formatter Path Delays.

                                                 Minimum Delay (ns)   Maximum Delay (ns)
Central data system clock output drive                  1.0                  3.3
3- to 15-ft twinax cable                                4.9                 24.3
Instrument total delay (clock ↓ to data valid)         12.0                 35.0*
3- to 15-ft twinax cable                                4.9                 24.3
Central data system data receiver                       1.0                  3.7
Programmable delay unit inherent delay                  8.0**               15.0**
Central data system data multiplexer                    1.5                  5.3
Central data system latch setup time                    0.0                  2.5
Total path delay                                       33.3                113.4
*Specified
**25 Mbps only
reading the data into the prime science formatter on the following clock edge would
require a maximum data valid path delay of one-half bit period (20 ns) from each imager,
while the actual delay from any imager data source can be anywhere from 33.3 to
113.4 ns. A delay compensation mechanism to equalize the delays for all prime science
formatter data sources was introduced to overcome this problem and permit formation of
the 25-Mbps output.
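The arithmetic behind the compensation can be sketched with the Table 1 numbers (the per-source measured delays below are hypothetical; the flight hardware senses them itself):

```python
import math

# Delay equalization sketch using the Table 1 values (ns) at 25 Mbps.
BIT_PERIOD_NS = 40.0
HALF_BIT_NS = BIT_PERIOD_NS / 2          # 20-ns data-valid budget

MIN_PATH_NS, MAX_PATH_NS = 33.3, 113.4   # Table 1 totals
assert MAX_PATH_NS > HALF_BIT_NS         # uncompensated transfer fails

# Pad every source out to a common latch point beyond the worst-case
# delay, aligned to a whole bit period.
target_ns = math.ceil(MAX_PATH_NS / BIT_PERIOD_NS) * BIT_PERIOD_NS  # 120 ns
for measured in (40.0, 75.0, 113.4):     # hypothetical measured delays
    pad = target_ns - measured
    print(f"measured {measured:5.1f} ns -> programmable pad {pad:.1f} ns")
```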
The MSX satellite uses two separate communications links to achieve mission
requirements. An S-band up- and downlink is used for commanding the spacecraft and
retrieving the housekeeping telemetry and compressed science data, while the higher-rate
prime science data are downlinked at X-band. The complete flight system is shown in
Figure 8.
MSX uses a standard Air Force Space-to-Ground Link System (SGLS) transponder for
command, control, and telemetry. This allows the operations team to utilize assets in the
Air Force Satellite Control Network (AFSCN) for communications with MSX, greatly
increasing the number of ground contacts per day.
Coaxial switches direct each transponder’s signals to antennas on either side of the spacecraft. With the
coaxial switches and the antennas’ hemispherical radiation patterns, complete up- and
downlink coverage over the entire sphere of the spacecraft is maintained, even should one
of the transponders fail. During normal operation, however, the use of the switches is
minimized, and each transponder transmits to separate sides of the spacecraft. The proper
transponder is selected depending on MSX’s orientation relative to the ground station.
The command uplink is standard SGLS, with the command tones phase modulated onto
the carrier at 1 rad; the ranging signal, when used, is directly phase modulated onto the
carrier at 0.3 rad. The only difference between MSX and most other SGLS satellites is that
MSX does not use S (space) tones. Command data are output from the transponder
receiver to the C&DH System only when the 0 and 1 tones are detected, and a short
header is used to guarantee that no bits are dropped.
The transponders’ receivers will lock to an uplink signal as low as –112 dBm. (The
receivers actually lock to the signal at a much weaker power level, but for bit error rate
considerations do not decode commands until this higher threshold is reached.) With the
typical AFSCN ground stations and with the APL MSX Tracking Station, the uplink
power delivered to the transponder receiver will be in the –75 to –85 dBm range.
The nature of the MSX mission resulted in a requirement for a large prime science data
rate. A downlink at X-band with a data rate of 25 Mbps would easily meet the
requirements for the mission, providing nearly 100 Gb of downlink data daily (60 min of
active downlink per day accumulated over six station contacts).
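The daily volume quoted above follows directly from the rate and contact time (a quick check using only figures from the text):

```python
# Daily X-band downlink volume: 25 Mbps over ~60 min of active downlink
# accumulated across six station contacts per day.
RATE_BPS = 25e6
active_seconds = 60 * 60
daily_gb = RATE_BPS * active_seconds / 1e9
print(f"{daily_gb:.0f} Gb/day")   # 90 Gb/day, i.e. "nearly 100 Gb"
```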
It was clear early in the program that a pointed (gimbaled) antenna would be required to
achieve the link margin requirements and keep the antennas both on the ground and on the
spacecraft of reasonable size. As Figure 9 shows, the X-band antennas on MSX are 8-in.
parabolic dishes, fed with backfire monofilar helix feed elements and yielding a gain of
approximately +22 dBic. The antennas are mounted side by side (each is connected to one
of the X-band transmitters) to an arm driven by redundant gimbal drives. Their size results
in a 1-dB beamwidth of ±3°, well within the pointing capability of MSX’s attitude system.
Figure 9. X-band antennas/gimbal system under test at
Vandenberg Air Force Base just prior to launch.
For the transmitter, APL designed and fabricated a 5-W (+37-dBm) quadrature phase shift
keying (QPSK) solid-state transmitter at 8.475 GHz. The unit accepts differential data and
clock inputs from the C&DH System and formats them for the QPSK downlink.
The MSX Tracking Station (MTS) at APL (Figure 10) is the primary tracking station for
the MSX spacecraft, and it is colocated with the Mission Control Center (MCC). The
station consists of a primary 10-m dish, capable of transmitting at L-band and tracking and
receiving at both S- and X-band, and a backup 5-m dish with S- and X-band receive
capability.
Figure 10. The MSX Tracking System at APL.
L-Band Transmit
With a 2-kW klystron and 42-dB gain from the 10-m dish, the MTS transmits an EIRP of
over 30 MW, yielding –65 dBm into MSX’s command receiver. Command data words are
transmitted to the MTS from the MCC and are used to develop the command tones that
modulate the uplink carrier.
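The quoted EIRP can be verified from the transmitter power and dish gain (a sketch of the link-budget arithmetic using the figures in the text):

```python
import math

# L-band uplink EIRP check: 2-kW klystron plus 42-dB gain from the 10-m dish.
tx_dbm = 10 * math.log10(2000 / 1e-3)   # 2 kW -> ~63 dBm
eirp_dbm = tx_dbm + 42.0                # ~105 dBm
eirp_mw = 10 ** (eirp_dbm / 10)         # back to milliwatts
print(f"EIRP ~ {eirp_mw / 1e9:.1f} MW") # ~31.7 MW, i.e. over 30 MW
```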
S-Band Receive
Although the MTS is configured specifically for the reception of MSX transmissions, it
shares some similarities with AFSCN ground stations. The station receives and
demodulates the 1-Mbps compressed science data and the 16-kbps subcarrier telemetry,
and it ships both of these products to the MCC. However, the MTS has no ranging
capability; the satellite is tracked with the AFSCN ground stations.
X-Band Receive
The AFSCN ground stations lack an X-band receive capability, so the MTS is the sole
ground station for downlinking the prime science data. Sufficient margin exists in the X-
band link to permit reception of data down to 5° antenna elevation, when multipath effects
begin to degrade performance. This low elevation angle has the desirable effect of
providing a high percentage of pass time during which the prime science can be
downlinked. Given the size of the data requirements (100 Gb/day), this eases operations.
5-m Backup Dish
The 5-m backup antenna can receive both the S-band and X-band downlink signals. The
antenna’s smaller size places tighter limits on elevation angle thresholds, so typically at S-
band the subcarrier-only mode (no 1 Mbps) is used until the main 10-m dish is operational.
There is no significant impact on X-band operations. Track capability exists only at S-
band, but the accuracy of the track keeps the satellite well within the antenna’s X-band
beamwidth.
MIDCOURSE SPACE EXPERIMENT:
SPACECRAFT OPERATIONS PLANNING AND EXECUTION
ABSTRACT
The constraints of the Midcourse Space Experiment (MSX) spacecraft which affect
thermal and power management, finite onboard recording capabilities, and limited
downlink opportunities establish significant bounds under which spacecraft operations and
telemetering systems must operate. This paper reviews the MSX mission and data
collection planning processes, commanding and execution procedures, data telemetering
processes, and the overall impact of spacecraft constraints and downlink nodes to data
collection and downlink activities.
INTRODUCTION
The Midcourse Space Experiment (MSX) is a mission sponsored by the Ballistic Missile
Defense Organization (BMDO). A primary purpose of the MSX mission is to collect and
analyze target and background phenomenology data to address BMDO midcourse sensor
requirements. These data will be collected using fully characterized and calibrated onboard
sensors which perform optical measurements from the far ultraviolet to the very longwave
infrared wavelengths. The MSX mission consists of an interleaved set of experiments
using MSX sensors and other supporting sensors. MSX will be launched into a near-polar,
900 km, nearly sun-synchronous orbit from Vandenberg Air Force Base. The overall
mission is planned for a four to five year lifetime following launch. The portion of the
mission which collects longwave infrared data is expected to consist of the first one to two
years since the infrared sensor has a shorter lifetime than the rest of the spacecraft and
sensors.
The MSX spacecraft, Figure 1, consists of the spacecraft structure and various support
subsystems which provide power, thermal control, command and data handling, RF
communication, target tracking, and attitude determination and control. The axes of the
optical sensors [space infrared imaging telescope (SPIRIT III), ultraviolet and visible
imagers and spectrographic imagers (UVISI), and space-based visible (SBV)] are parallel
to one another and point along the spacecraft's +X axis; therefore, pointing any of the
optical sensors is accomplished by maneuvering the spacecraft's +X axis. SPIRIT III is a
passive mid- to very long-wavelength infrared sensor consisting of a telescope, a six-
channel interferometer, a six-band radiometer, and a cryogenic Dewar and heat exchanger.
UVISI consists of four imagers and five spectrographic imagers covering a spectral range
from the far UV to the near infrared. UVISI also includes an image-processing system for
use in tracking targets and aurora. SBV is a visible off-axis, all-reflective reimaging
telescope with a thermoelectrically cooled CCD focal plane. [Figure 1 callouts: Xenon
lamp, Krypton lamp, mass spectrometer, UVISI WFOV and NFOV visible imagers, UVISI
spectrographic imagers (5).]
Four information channels control spacecraft configuration and recover science data. An
uplink channel sends command messages at 2 kbps. Downlink channels return state-of-
health data, wideband science data, and prime science data at 16 kbps, 1 Mbps, and 25
Mbps, respectively. Depending upon experiment requirements, the onboard prime science
recording rate is selectable at either 5 Mbps or 25 Mbps using a variety of data formats.
The MSX flight operations system has been segmented into three functional areas:
planning, control, and assessment. The operations planning system is responsible for orbit
analysis, scheduling, resource management, command sequence generation, and
interfacing with experiment teams. The operations control system is responsible for
uplinking commands, downlinking, processing, and distributing data, monitoring health
and status, and responding to contingency situations. The operations and spacecraft
performance assessment system is responsible for assessing the performance of both the
spacecraft and the ground operations systems. The flight operations network consists of
the Mission Operations Center (MOC) at the Johns Hopkins University Applied Physics
Laboratory (APL), the SBV Processing, Operations and Control Center (SPOCC) at the
Massachusetts Institute of Technology Lincoln Laboratory (LL), and access to the Air
Force Satellite Control Network (AFSCN) via the USAF Space and Missiles Center Test
Support Complex (TSC).
Collection of science data to fulfill mission requirements has been segmented into eight
separate "phenomenology based" experiment classes which are: earthlimb and aurora
backgrounds (EL), celestial backgrounds (CB), shortwave terrestrial backgrounds (ST),
contamination (CE), surveillance (SU), data certification and technology transfer (DC),
theater midcourse targets (TM), and early midcourse targets (EM). The onboard collection
of science data associated with a specific experiment plan is termed a "data collection
event" (DCE). Table 1 summarizes the number of experiments and DCEs tentatively
planned for conduct on MSX during the first 16 months. On average, four to five DCEs are
conducted each day, totaling approximately 6-8 Gbytes of data.
Table 1
Tentative MSX Experiments and Data Collection Events During First 16 Months
EL CB ST CE SU DC TM EM Total
No. of Experiments 17 8 5 13 11 47 4 6 111
No. of DCEs 431 397 27 209 102 861 11 92 2130
Several internal and external constraints affect the collection of science data onboard the
MSX spacecraft. The spacecraft, including all subsystems and sensors, must be properly
maintained to fulfill its mission life expectancy. Also, the surrounding space environmental
conditions, such as the sun, impose constraints on the spacecraft that affect the amount and
quality of science data that can be collected. The flight operations team verifies that these
constraints are not violated unless a violation has been deliberately planned.
Two solar cell arrays provide the primary source of electrical power onboard the
spacecraft. One 50 A-h rechargeable Nickel Hydrogen battery provides secondary power
during eclipses or periods when additional power loads are placed on the spacecraft. Due
to the physical characteristics of the battery, the extent of battery depth-of-discharge
(DoD) - percentage of power consumed - must be monitored and controlled to maintain
optimal performance levels. Large and frequent occurrences of battery DoD will
significantly degrade the performance and life of the battery; therefore, battery DoD is
limited to 40%. For special target DCEs, DoD is permitted to reach as high as 70%.
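As a sketch, the DoD limits above might be enforced during event planning by a simple check; the 40% routine and 70% special-event limits come from the text, while the function itself is hypothetical:

```python
def dod_within_limit(predicted_dod_pct: float, special_target_dce: bool = False) -> bool:
    """True if the predicted battery depth-of-discharge for a DCE is within
    the planning limit: 40% for routine DCEs, 70% for special target DCEs.
    (Limits are from the text; the check itself is an illustrative sketch.)"""
    limit = 70.0 if special_target_dce else 40.0
    return predicted_dod_pct <= limit

assert dod_within_limit(35.0)
assert not dod_within_limit(55.0)
assert dod_within_limit(55.0, special_target_dce=True)
```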
The spacecraft utilizes internal (e.g., heaters) and/or external heat sources (e.g., sun, earth,
etc.), or lack thereof, to maintain thermal equilibrium. Certain actions can drive
temperatures beyond the spacecraft's ability to restore equilibrium; therefore, care must be taken in the
planning of DCEs to avoid temperature extremes. Long recovery times may impact the
execution of subsequent DCEs. Several sensors and subsystems have thermal constraints
that must be enforced to prevent severe data degradation and/or equipment damage. For
example, the SPIRIT III baffle must be maintained below 70 K to properly collect data and
below 40 K, on average, to minimize excessive cryogen use. If the baffle temperature
exceeds 140 K, permanent degradation of the primary mirror is likely.
There are two redundant tape recorders onboard the spacecraft which serve as the primary
devices for recording science data. The data capacity of each tape is 54 Gbits which is
equivalent to 180 minutes at 5 Mbps or 36 minutes at 25 Mbps. The data format is Last In
First Out (LIFO). The tapes are recorded until full and played back until empty to
maintain uniform wear on the tape media. The tape recorder is limited to continuous
recording of 36 minutes in 25 Mbps mode or 60 minutes in 5 Mbps mode per tape
recorder. Continuous playback (25 Mbps) is limited to 13 minutes due to tape head
temperature.
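The capacity figures above can be verified with simple arithmetic; a sketch using the 54 Gbit tape capacity and the two recording rates given in the text:

```python
# Tape-recorder capacity arithmetic from the text: 54 Gbits per tape.
TAPE_CAPACITY_BITS = 54e9

def record_minutes(rate_bps: float) -> float:
    """Minutes of recording a full tape holds at the given data rate."""
    return TAPE_CAPACITY_BITS / rate_bps / 60

print(record_minutes(5e6))   # 5 Mbps mode  -> 180.0 minutes
print(record_minutes(25e6))  # 25 Mbps mode -> 36.0 minutes
```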
The sun is a high intensity light source and can degrade or even damage any of the sensors
if they are pointed directly toward it. The moon and earth are moderate light sources and
would only degrade or damage the sensors if viewed with improper settings. The onboard
Attitude Processor (AP) implements keep-out-zone (KOZ) avoidance logic for the sun and
earth to prevent the field-of-view (FOV) of the sensors from slewing in front of the sun and
earth unexpectedly during maneuver transitions.
The South Atlantic Anomaly (SAA) is a depression in the magnetic field centered off the
coast of Brazil and occupies approximately half of the southern hemisphere. Many DCEs
execute outside the SAA to avoid possible interference and damaging effects on the
spacecraft and sensors. Spacecraft pointing toward the velocity vector causes space
particles and contaminants to impinge on the surface of the sensor apertures and degrade
their performance; therefore, extended pointing within 25° of the velocity vector is
restricted.
OPERATIONS PLANNING PROCESS
Operations planning is accomplished in four stages: long range planning, monthly planning,
weekly planning, and daily planning (Figure 2). Each of these phases occurs
simultaneously and continuously. DCE analysis functions are fundamentally the same
during each planning phase; however, the timelines and emphasis vary as event execution
approaches.
[Figure 2: Overlapping planning timelines. MPT monthly objectives generation, the
monthly planning process (Months 3-6), and the daily planning process (week by week
for Months 3-5) all proceed in parallel.]
Long-range planning is the phase where the feasibility of each experiment is determined.
Feasibility analysis is the evaluation of a proposed experiment using a representative set of
spacecraft commands to determine whether or not the experiment can be supported by the
MSX system, taking into account the spacecraft and ground support network capabilities
and constraints. Monthly planning commences every four weeks when a BMDO-led
Mission Planning Team (MPT) evaluates experiments that have been declared feasible and
generates a set of monthly objectives, which are transmitted to the APL Operations
Planning Center (OPC). Four weeks of planning are required to plan a month (28 days) of
spacecraft activity. The monthly planning process ends with the distribution of a monthly
schedule which is delivered two weeks prior to the start of the month being planned.
Weekly planning begins two weeks before the start of the week being planned. Orbit
propagation and analysis are performed, and the orbit analysis files are used to support
analysis of all scheduled DCEs. The OPC performs this event analysis to ensure that all
DCEs remain within their allocated resource "cost budget" (e.g., battery DoD, SPIRIT III
baffle temperature), and to ensure that changes in the DCE execution time (T-zero), since
monthly planning, are within acceptable limits. The weekly planning process ends with the
distribution of a weekly schedule.
Orbit propagation and analysis are performed again at the start of each daily planning day
to provide the latest orbit geometry data available in support of final DCE analysis and for
planning of the day's ground station contacts and science data downlink events. The OPC
also plans, schedules and analyzes all spacecraft maintenance, ground contact, and science
data downlink events for the day being planned. At the conclusion of this daily planning
activity, the OPC generates, verifies, and distributes event command sequences, daily
schedules, ground contact plans, and daily plans for onboard tape recorder usage.
Several types of analysis, grouped into three general areas, are performed in the OPC to
support planning and scheduling: orbit analysis, opportunity analysis, and event analysis.
Orbit analysis predicts the spacecraft's position, velocity, and attitude as well as orbit
milestones and the visibility of the spacecraft from specified ground stations using a
spacecraft state vector received from the TSC. The state vector is validated and then
propagated for the month/week/day being planned, and the results are stored in files which
are used to support subsequent event analysis. Opportunity analysis is the identification of
opportunities to execute a DCE associated with a feasible experiment in the time frame
currently being scheduled. Event analysis includes the functions of kinematic and
engagement analysis, power/thermal analysis, and cost analysis. Event analysis also
includes an automated verification of proper spacecraft command usage based on a set of
rules, and a summary of spacecraft operational constraint violations. The OPC’s Constraint
Checker software ensures that no constraints will be violated that affect the success of the
event, degrade data quality, and/or damage spacecraft equipment. Constraints are
separated into “hard” and “soft” categories. Hard constraints are those which are damage
related or involve physical design limitations. Soft constraints are those considered costly
in terms of spacecraft resources. After a DCE is created, the user must run its command
sequence through the OPC’s Rule Checker software which uses a set of usage or logic
rules that verify that a user has assembled the commands together into an event that will
execute safely. OPC software also produces “Spacecraft Cost Reports” which include an
overview of the predicted “costs” of the DCE. These costs include: tape recorder storage,
command memory storage, cryogen depletion, and power usage.
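The hard/soft split enforced by the Constraint Checker can be sketched as follows. The constraint names and limit values here are illustrative only; the paper does not publish the OPC's actual rule set:

```python
# Illustrative sketch of a hard/soft constraint check. The two limits shown
# (SPIRIT III baffle damage temperature, battery DoD budget) appear in the
# text; the data structure and rule set are assumptions.
HARD = {"spirit3_baffle_K": 140.0}   # damage-related / physical design limits
SOFT = {"battery_dod_pct": 40.0}     # costly in terms of spacecraft resources

def check_event(predictions: dict) -> dict:
    """Return the hard and soft constraint violations for a predicted DCE."""
    report = {"hard": [], "soft": []}
    for name, limit in HARD.items():
        if predictions.get(name, 0.0) > limit:
            report["hard"].append(name)
    for name, limit in SOFT.items():
        if predictions.get(name, 0.0) > limit:
            report["soft"].append(name)
    return report

r = check_event({"spirit3_baffle_K": 150.0, "battery_dod_pct": 35.0})
```

A hard violation would block the event outright, while a soft violation would be weighed against the event's resource cost budget.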
CONTROL OPERATIONS
Daily planning products are distributed to the APL Mission Control Center (MCC) from
the OPC in two groups. Both groups of products contain contact plans (detailed
instructions for the execution of APL contacts), command sequence files and reports for
DCEs, and the schedule of operations for the upcoming 12-hour and following 24-hour
periods. The command sequence files for each event are processed in the MCC and
uplinked to the spacecraft in the 24-hour period from 0400 GMT on Day D (product group
1) to 0400 GMT on Day D+1 (group 2). Once loaded into the spacecraft command
processor memory, the DCEs will typically execute within the following 12 to 18 hours.
Recorded data from those events will typically be downlinked 12 to 24 hours after
execution. Total data collection turnaround time [from sending of the file to MCC to
receipt of prime science data in the APL Mission Processing Center (MPC)] is estimated
to be between 48 and 60 hours. MCC software takes the contact plans, assembles the
maintenance events called out from files already resident in the MCC, and processes the
command sequence files for each DCE and each APL downlink event (i.e., a set of
spacecraft X-band/S-band/tape recorder on/off control commands) sent with the contact
plans. This processing takes the form of merging the discrete event files, by time of
planned execution, into one file, referred to as a “runstate,” for each contact. It is then the
runstate that is executed on the spacecraft by MCC personnel during the planned contact.
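The merge step that produces a runstate might look like the following sketch; the (time, command) pair representation of an event file is an assumption, since the paper states only that discrete event files are merged by planned execution time:

```python
def build_runstate(event_files: list[list[tuple[float, str]]]) -> list[tuple[float, str]]:
    """Merge discrete event command files, each a list of
    (planned_execution_time, command) pairs, into one time-ordered
    "runstate" for a contact. Representation is illustrative."""
    merged = [cmd for f in event_files for cmd in f]
    return sorted(merged, key=lambda c: c[0])

# Hypothetical DCE and downlink-event files merged for one contact.
runstate = build_runstate([
    [(10.0, "TR1_RECORD_ON"), (40.0, "TR1_RECORD_OFF")],
    [(5.0, "XBAND_TX_ON"), (30.0, "XBAND_TX_OFF")],
])
```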
Every contact is planned to have certain operations conducted. There are routine, every
contact, maintenance events which are required for the determination of spacecraft health
and status. Additional maintenance events will be scheduled for conduct on a less frequent
basis. Typically, every APL contact will have prime science X-band telemetry downlinked
from the spacecraft tape recorders. In addition, two APL contacts every day will also be
devoted to uplinking a block of “delayed commands” to be executed at a later time out of
UT memory. Each contact plan contains instructions for the execution of that specific
contact in terms of what maintenance events are scheduled, whether there is to be prime
science (25 Mbps) or wideband (1 Mbps) data downlinking, or both, and what, if any,
delayed execution commands are to be uplinked. Figure 3 illustrates the flow of operations
during APL contacts.
Science data consists of prime science, that data recorded by the spacecraft tape recorders
during DCEs, and wideband science which is data collected and stored by the SBV sensor
during space surveillance DCEs. As shown in Figure 3, the average contact duration
usable for prime science data downlinking (i.e., tape recorder playback) is approximately
12 minutes. (The downlinking of prime science data is constrained to occur at elevations
greater than 5° to avoid multipath effects which would otherwise be present in the X-band
link.) Due to the properties of the MSX orbit, ground station contact over the APL site is
limited to five to six contacts per day, which occur in two contact clusters. To provide for
a margin of safety for surrounding properties, ground antennas are limited to travel no
lower than 2° above the horizon. MSX flight operations has also instituted a procedural
constraint to not plan any contacts with unimpeded durations of less than six minutes.
These constraints reduce the useable contact frequency for the APL ground site to four to
five contacts per day. Since the APL MOC is also the only site which can receive X-band,
approximately 40-60 minutes of prime science data can be downlinked per day. This
affects the duration and number of DCEs that can be performed each day; a few DCEs
recording at 25 Mbps can easily fill up both tapes. All uplink of commands and downlink
of wideband science data are also performed at the APL MOC site. The uplink data
channel is serial so that real-time commanding and loading of delayed commands must be
performed separately. Because of the quantity of data involved, available contact time is
maximized by having the commands that perform downlinking (KG encryptor settings, X-
band transmitter and gimbal commands, and tape recorder playback commands) execute
out of onboard command memory. Wideband science data is downlinked via real-time
commands issued from the ground.
[Figure 3: Flow of operations during a typical APL contact, minutes 0 through 13 from
horizon rise, showing X-band transmitter turn-on and turn-off.]
Before commencing the uplink, synchronization on the X-band data received from the spacecraft
is required to ensure that there are no timing conflicts occurring within the C&DH
subsystem which might adversely affect the downlink. Time during the contact must also
be allocated to allow for the uplinking of commands for real-time execution by the
spacecraft, such as the commands to downlink wideband science, perform any
maintenance activity, and, if necessary, cancel delayed commands previously uplinked. In
addition, time must be available for uplinking commands for any events that were to have
been uplinked previously but were not.
In the actual uplinking process, time ordered delayed execution commands for both DCEs
and APL downlink events are grouped into 100 command message blocks. This blocking
allows a transmission rate of 200 commands per minute to be realized. Once all the
message blocks are uplinked, the spacecraft is commanded to transmit back to the ground
site, the contents of the memory locations just loaded. These retransmitted contents are
compared and verified in MCC to an expected image of the memory locations. This
dump/compare/verify process must be taken into account in the overall timeline;
experience indicates that this process requires approximately 90 seconds. Miscomparisons,
typically caused by data dropouts due to signal fluctuations in either the initial or return
transmission, require retransmission of the unmatching memory locations during the time
remaining in the contact (or, if necessary, during the follow-on “backup” contact). Based
on previously discussed constraints, the amount of time
available for uplinking of delayed execution commands is reduced to be on the order of six
to eight minutes. The number of delayed execution commands (DCEs plus APL downlink
events) that would be planned for uplinking in one contact, at this time, is conservatively
limited to 1200. With five APL contacts per day and eight minutes of uplink time per
contact, a total of 40 minutes is available for uplinking commands in a day; however, a
programmatic requirement to provide backup contact time in the event of contingencies
reduces the amount of planned uplink time by a factor of two. Thus over the period of one
day, only approximately 20 minutes of contact time can be used for uplinking. This results
in 4000 commands per day that can be planned for uplinking; subtracting out the downlink
events (approximately 100 commands per planned contact) leaves about 3500-3600
commands per day for DCEs.
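The daily command-budget arithmetic above can be reproduced directly; all figures are taken from the text:

```python
# Daily uplink command budget, reproducing the arithmetic in the text.
UPLINK_RATE = 200           # commands per minute (100-command message blocks)
CONTACTS_PER_DAY = 5        # usable APL contacts per day
UPLINK_MIN_PER_CONTACT = 8  # minutes of uplink time per contact
BACKUP_FACTOR = 2           # half the time held in reserve for contingencies
DOWNLINK_CMDS = 100         # approx. commands per planned downlink event

planned_uplink_min = CONTACTS_PER_DAY * UPLINK_MIN_PER_CONTACT / BACKUP_FACTOR  # 20 min
total_cmds_per_day = planned_uplink_min * UPLINK_RATE                           # 4000
dce_cmds_per_day = total_cmds_per_day - CONTACTS_PER_DAY * DOWNLINK_CMDS        # 3500
```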
While all DCE command uplinking and prime science data downlinking are performed
only at APL, spacecraft contact with other ground sites is required to support ephemeris
accuracy requirements. Without spacecraft ranging capability at APL, ephemeris
determination using ranging data from AFSCN sites is done by the TSC. In order to meet
accuracy requirements, the TSC will schedule and plan for six to eight AFSCN sites per
day for operation with the spacecraft. While no delayed command uplinking of DCEs will
be performed by TSC, in the event of an emergency, the capability exists to send
commands to the spacecraft from the MCC at APL through TSC. TSC’s planning of
AFSCN contacts consists of the transmission of commands to set up the spacecraft’s S-
band transmitters for turn-on at later times (over the planned AFSCN sites). In addition to
performing ranging duties, the TSC will transmit commands for real-time execution of
maintenance events that allow for the monitoring of spacecraft health and status.
SUMMARY
The collective suite of MSX sensors and supporting subsystems provide a broad range of
data collection potential; however, operational constraints represent a significant challenge
to fulfilling this potential. The flight operations system has been designed to meet this
challenge while minimizing operational costs. This has been accomplished by balancing
mission goals with operational constraints via a hierarchical planning process for
spacecraft scheduling, resource management, and constraint adherence coupled with
robust operations control timelines and techniques which account for backups,
contingencies, and ground station obscuration. The resultant flight operations system should be
able to achieve the ambitious mission goals while still ensuring that the spacecraft is
operated in a safe manner and will meet its design lifetime.
MSX MISSION OPERATIONS CENTER
ABSTRACT
The Mission Operations Center (MOC) at APL is the first processing link in the MSX data
system. Two key components of the MOC that play a role in the telemetry acquisition and
processing functions are the Mission Control Center (MCC) and the Mission Processing
Center (MPC). This paper will present a summary of the telemetry acquisition and data
processing structure built to handle the high volume of MSX data and the unique hardware
and software systems to perform these functions.
The primary responsibility of the MCC is to maintain the health and safety of the MSX
spacecraft. This is accomplished by communicating with the spacecraft through the APL
stations and the AFSCN. The MCC receives the spacecraft housekeeping 16 kbps telemetry
stream and commands the spacecraft via the 2 kbps command link. Due to the complexity of
the spacecraft, various analysis tools exist to evaluate the spacecraft health and to generate
commands for controlling the spacecraft.
The primary responsibility of the MPC is the initial processing of the 1 Mbps and 25 Mbps
spacecraft science telemetry streams. The science data is recorded in a raw format, both
analog and digital, and in a digital 8 mm tape format, the Level 1A tape, which serves the MSX
program as the transport medium and format for science data dissemination. The MPC also
collects downlink data from the MCC and planning products from the Operations Planning
Center for inclusion on the Level 1A tape to enable the MSX data community to analyze
the data. This data is sent electronically to the MPC via a LAN. One of the key products
provided on the Level 1A tape from the MCC is a measure of the spacecraft clock against
time standards.
The MPC consists of a hardware front end for the capture and formatting of the science
data and a computer system for the processing of the formatted science data to produce
Level 1A tapes. The hardware front end includes wideband analog recorders, decryption
devices, data selectors, bit syncs, and frame syncs. One of the unique features of the 25 Mbps
telemetry stream is that it is transmitted to the ground in the reverse direction. The MPC
must then reverse the data again, which is accomplished via analog recorders, in order to
perform further processing. The computer system consists of three VAX model 4000
computers with 107 Gb of disk space and twelve 8 mm tape drives. One VAX is tasked with
reading the 25 Mbps telemetry onto the disk. The second VAX reads the 1 Mbps telemetry
onto the disk and produces a digital 8 mm tape of the raw data. The third VAX is tasked
with processing the data and writing the Level 1A tapes. The system architecture is such
that while today's data is being downlinked yesterday's data is being processed and written
to Level 1A tapes. Custom software was developed to perform the processing and data
management within the MPC.
INTRODUCTION
The task of operating the MSX spacecraft in fulfillment of its mission to collect various
kinds of observation data was divided into two areas: one being spacecraft
command and control and the other being spacecraft science data processing. The Mission
Control Center is chartered with commanding the spacecraft and monitoring the housekeeping
telemetry to ensure the health and safety of the spacecraft. The Mission Processing Center
is chartered with processing the science data from the spacecraft and disseminating the
data to the MSX data community.
MCC
The MCC consists of the hardware and software necessary to command, control, and
monitor the MSX spacecraft. While providing the capability to format a command, it also
provides the ability to receive, decommutate, distribute, and process the MSX
housekeeping telemetry. This section will focus on the processing, interpretation, and
display of the raw telemetry processed in the MCC.
The 16 kbps telemetry contains the “housekeeping” information of the various subsystems
and instruments on-board. This 16 kbps telemetry may also contain particular “dump”
formats which contain special information from the various on-board processors’ memory.
The MCC software distributes this information to various workstations throughout the
MCC where the processing of the telemetry is performed. This method of distributed
processing allows for more efficient TM processing in that only those parameters
displayed are processed at each individual workstation, as opposed to all parameters being
processed and then distributed for display. Because of the various formats the 16
kbps stream may contain, many specialized software tools have been developed to process and
display this information to the controllers.
TELEMETRY DISPLAY
The displaying of telemetry in the MCC consists of some high level graphical interfaces as
well as that of raw numbers and their conversion to enumerations or engineering unit
values. The graphical interfaces include the use of a COTS product called DataViews.
This interface displays the information in a system level block diagram format where
actual numbers are included along with colors to indicate alarms as well as configurations.
This type of display allows for many parameters to be grouped together to provide a “big
picture” view. Other views are more at a subsystem level, providing more details when
necessary.
A custom-made graphical display tool is called SEEMSX. This tool displays the attitude of
the spacecraft based on an interpretation of the housekeeping data and indicates its offset
from the normal PARKED mode orientation. This housekeeping data would normally
include a series of engineering units. SEEMSX actually displays a physical model of the
spacecraft and its orientation relative to the sun and earth in addition to information
concerning the status of other spacecraft functions, including the orientation of the X-band
antenna for prime science downlink.
Also in the MCC is a real-time plotting capability which allows for strip-chart type of
visualization on any telemetry parameter. This is useful in trending a parameter over
several station contacts. Multiple parameters may be displayed simultaneously.
TELEMETRY CONVERSION
The Telemetry Dictionary provides the method by which the standard “housekeeping” data
is decommutated and converted into engineering units. The Telemetry Dictionary is
maintained in an Oracle Database which contains the byte offsets from the Major Frame
for decommutation, and scale factors, offsets, lookup tables, and polynomials required to
convert each housekeeping parameter for display in engineering units. Additional
parameters added to the Telemetry Dictionary include “special conversions”, which are
based on other parameters tied together through a mathematical algorithm. These types of
“derived” parameters lead to higher level indications of the overall status of a component
or subsystem. Such parameters may include adding all of the spacecraft individual current
monitors to indicate the total load on the power system.
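As a sketch, a Telemetry Dictionary entry drives conversion roughly as follows. The polynomial form is as described; the coefficients, raw count, and the current-monitor names are illustrative:

```python
def convert(raw_counts: int, coeffs: list[float]) -> float:
    """Apply a Telemetry Dictionary polynomial to a decommutated raw count:
    value = c0 + c1*x + c2*x**2 + ...  (coefficients are illustrative)."""
    return sum(c * raw_counts**i for i, c in enumerate(coeffs))

def total_bus_current(currents: dict) -> float:
    """A "derived" parameter: summing the individual current monitors
    indicates the total load on the power system, as described in the text."""
    return sum(currents.values())

volts = convert(512, [0.0, 0.01])                          # simple linear scale
load = total_bus_current({"cdh_amps": 1.25, "rf_amps": 2.25})
```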
ALARM PROCESSING
The Telemetry Dictionary also contains alarm information for the different telemetry
parameters. These alarms consist of both yellow and red conditions, where yellow
indicates a warning and red indicates a problem. The Alarm Status Window reports red,
yellow, and green alarms of all parameters for which alarm information is contained in the
Telemetry Dictionary. These alarms are time-tagged with the first alarm occurrence and
re-occur once a minute until acknowledged by the controller. An audible alarm occurs
upon the change of status in the window. A Green condition only appears when a yellow
or red alarm returns its state back to nominal. This alarm notification ability provides a
method of informing the controller of any parameter’s status without actually viewing
those parameters on a display.
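The alarm behavior described above (time-tag on first occurrence, re-announcement once a minute until acknowledged, green on return to nominal) might be sketched as a small state machine; the class and field names are hypothetical:

```python
class AlarmEntry:
    """One parameter's entry in the Alarm Status Window, sketched from the
    behavior described in the text: time-tagged at first occurrence,
    re-announced each minute until acknowledged, and shown green when a
    yellow/red condition returns to nominal."""
    def __init__(self, name):
        self.name = name
        self.state = "nominal"
        self.first_alarm = None
        self.acked = False

    def update(self, state, now):
        if state in ("yellow", "red") and self.state == "nominal":
            self.first_alarm = now      # time-tag the first alarm occurrence
            self.acked = False
        elif state == "nominal" and self.state in ("yellow", "red"):
            state = "green"             # flag the return to nominal
        self.state = state

    def needs_reannounce(self, now):
        """True when an unacknowledged alarm is at least a minute old."""
        return (self.state in ("yellow", "red") and not self.acked
                and self.first_alarm is not None
                and now - self.first_alarm >= 60)

a = AlarmEntry("battery_temp")
a.update("red", now=0)        # first occurrence, time-tagged at t=0
a.update("nominal", now=120)  # condition clears -> shown green
```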
SPECIAL UTILITIES
Although the conversion and display of normal housekeeping data are the majority of the
processing done on telemetry in the MCC, various other tools are used for analyzing the
information contained in the various formats other than the housekeeping telemetry. Some
are used for real-time performance assessment, while others are used in anomaly
investigation. Several of the more essential tools are discussed below.
The MSX Command Processors each contain a buffer, called Command History, which
holds the last 500 commands executed along with their time of execution, command source
and status of execution. As part of the normal housekeeping data, two commands from the
Command History buffer and the last command executed are downlinked per second. In
addition, there is the capability to “dump” the Command History buffer to obtain the entire
contents at once. Software was developed to interpret the information and display it as it is
received in real-time and then sorted chronologically after the contact. This utility is
essential in anomaly investigations to determine the actual “as-flown” command sequence.
MEMORY VERIFY
The On-board Processors are routinely commanded to “dump” their raw memory such that
an “actual” image of their stored command sequences may be compared to that of an
expected version based on what was uplinked. The Memory Verify Utility performs the
maintenance of these “actual” and “expected” images along with the comparison and
reporting of differences. This utility is necessary to determine if memory was loaded
correctly.
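A minimal sketch of the compare step: the paper describes comparing a dumped "actual" image against an "expected" image and reporting differences; the byte-array representation of a memory image is an assumption:

```python
def miscompares(expected: bytes, actual: bytes) -> list[int]:
    """Return the memory locations (offsets) where the dumped "actual"
    image differs from the "expected" image, i.e. the locations that
    would need retransmission. Representation is illustrative."""
    return [i for i, (e, a) in enumerate(zip(expected, actual)) if e != a]

diffs = miscompares(b"\x01\x02\x03\x04", b"\x01\xFF\x03\x00")
```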
AUTONOMY DISPLAY
The MSX Command Processors possess an autonomy feature which allows for logic to be
specified between telemetry parameters to determine anomalous conditions in the form of
rules. If such anomalous conditions are detected, the Command Processor will issue a
series of pre-defined commands in an attempt to correct the situation. If such an event
should occur, the only method to determine which set of the logic or rules were
implemented is to “dump” a portion of that Command Processor’s memory. The
Autonomy Display Utility was developed to recognize and decommutate this information
based on the format of the downlink, and display it in a tabular format such that an
evaluation may be made by a controller as to the status of the on-board autonomy.
HEX DUMP
There are particular “dump” formats for which there is no conversion to engineering units.
In these cases, there is no other method to decipher the data than to look at it raw. The
Hex Dump Utility provides a method of displaying the raw data in hex format per major
frame, with each major frame divided into minor frames. This is the brute force method
of determining what was downlinked in telemetry when no other method has been
developed. This is essential when analyzing particular “dumps” from processor memory
which are not downlinked routinely and therefore not routinely decommutated.
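A brute-force hex display of a major frame split into minor frames might look like this sketch; the frame sizes and line layout are illustrative:

```python
def hex_dump(major_frame: bytes, minor_frame_len: int) -> list[str]:
    """Format one major frame as hex text, one line per minor frame,
    each line prefixed with its byte offset. Layout is illustrative."""
    lines = []
    for off in range(0, len(major_frame), minor_frame_len):
        minor = major_frame[off:off + minor_frame_len]
        lines.append(f"{off:04X}: " + " ".join(f"{b:02X}" for b in minor))
    return lines

dump = hex_dump(bytes(range(8)), minor_frame_len=4)
# -> ["0000: 00 01 02 03", "0004: 04 05 06 07"]
```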
ARCHIVED TELEMETRY
Following a station contact, the 16 kbps received in the MCC is saved in an archive file.
There are several tools which allow for this archive file to be used in the investigation of
anomalies which may have occurred during the station contact.
The Telemetry Archive Playback allows the controller to “re-play” the archive file or a
portion thereof, back through the system as though it were being received in real-time. In
addition to being an essential tool when investigating anomalous conditions, it is also
useful when several important events occur during a contact, preventing the controller
from verifying their success all at once.
ENGINEERING DUMP UTILITY
The Engineering Dump Utility allows the controller to extract up to seven items from the
Telemetry Dictionary out of an archive file, convert them, and display them on the screen
or save them to a file for printing. The constraint of seven was driven by the maximum
number of items which could be displayed on the screen. The output format is a tabular
listing. This is essential in the investigation of anomalies to determine exactly when a
particular configuration was achieved.
Similar to the Engineering Dump Utility, the Engineering Spreadsheet Utility provides a
method for extracting as many telemetry parameters from the archive file as desired. The
input is an ASCII text file which lists the names of the parameters as specified in the
Telemetry Dictionary Report. The output is a tab-delimited ASCII text file which imports
directly into many spreadsheet packages such as Excel.
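The extraction step might be sketched as follows; the parameter names and record layout are hypothetical, since the actual utility reads the archive file and the Telemetry Dictionary Report:

```python
import io

def write_spreadsheet(records, names, out):
    # Emit a tab-delimited header plus one row per telemetry record,
    # keeping only the parameters named in the input list.
    out.write("\t".join(["TIME"] + names) + "\n")
    for rec in records:
        row = [str(rec["TIME"])] + [str(rec.get(n, "")) for n in names]
        out.write("\t".join(row) + "\n")

# hypothetical decommutated records keyed by Telemetry Dictionary names
records = [{"TIME": "00:00:01", "BUS_V": 28.1, "TEMP1": 12.0},
           {"TIME": "00:00:02", "BUS_V": 28.0, "TEMP1": 12.5}]
buf = io.StringIO()
write_spreadsheet(records, ["BUS_V", "TEMP1"], buf)
```

The resulting text opens directly in a spreadsheet package, one column per requested parameter.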
MPC
Before any data flowed from the spacecraft to the MPC, the data needed to be documented
in detail. The MSX Mission Operations Center Data Products Document was disseminated
to the MSX data community to enable them to design the various systems which would
receive and process the science data. This document was first released in 1992, with a
major re-release in 1994; the final version was released in 1996. It provided the detail
which governed the design of the data products supplied by the MPC, describing the
format of the Level 1A tapes in painstaking detail. Each day the MPC produces one set of
Level 1A tapes, consisting of one or more tapes for each of the Data Processing Centers
(UVISI, SPIRIT III, SBV, OSDP, and Contamination) as well as a copy of these tapes for
the Backgrounds Data Center, the archive facility for the program. Each tape contains data
unique to its instrument as well as ancillary files, i.e., files from other portions of the
MOC.
The challenge of the MPC is to record and process 12 Gb of science data each day from
the MSX spacecraft, in addition to 3 Gb of operations data. This challenge was met by
designing a system which performs simultaneous data collection and processing functions.
The MPC design supports 12 Gb, or 50 minutes of 25 Mbps downlink, per day plus 14
1-Mbps downlinks per day. The data collection function is accomplished with a hardware
front end and a VAX computer system which store the data on disk in both realtime and
nonrealtime operations. While in the collection mode, the MPC receives operations data
from other parts of the MOC for inclusion on the Level 1A tapes. When the data for a day
are "collected", the MPC passes the data to another member of the VAX computer cluster
and enters a processing mode which first sorts the data and then writes the Level 1A tapes.
At the conclusion of the processing mode, any incomplete data are copied back to the data
collection system for further processing on the next day. This ping-ponging of collection
and processing continues for the duration of the mission.
The MPC hardware front end captures the 25 Mbps and 1 Mbps telemetry streams on
analog tape and formats the data on a major frame basis as transmitted from the spacecraft.
The hardware front end embodies one of the basic principles of telemetry capture: record
the data in an unprocessed format so that nothing is lost should any of the processing
equipment fail. This permits the MPC to reprocess the data after the pass should
equipment problems occur during the pass. During a contact, the prime 25 Mbps data is
stored in decrypted form on an analog recorder. After the pass, the 25 Mbps data is played
back at half speed for storage on the MPC disk array. During a spacecraft contact, the
1 Mbps data is stored on the computer disk while also being recorded on analog tape.
The 25 Mbps science data, the prime science, is played back from the spacecraft tape
recorders in the reverse direction from that in which it was recorded. This is done to
extend the life of the spacecraft tape recorders. The MPC turns the prime science around
by also playing the decrypted analog data in the reverse direction into the MPC VAX
computer.
The MPC computer system is a tri-hosted cluster of VAX 4000 computers, all of which
have access to 107 Gb of disk space for the storage of spacecraft science data. The science
data are written to disk once and read once, when the Level 1A tapes are written. As the
science data are written to disk, a subset of the data is extracted for subsequent
processing. This subset, the pointer files, completely describes the data and where they are
stored on disk. The pointer files are used by DBase IV for time ordering the data. In
addition to time ordering, the DBase IV application also handles any overlap due to the
stop and start of two contacts and fills any gaps in the data with fill bytes. The DBase IV
application then organizes the pointer files for generation of the Level 1A tapes. At the
conclusion of the data processing, a tape writer program begins writing the Level 1A
tapes. Eight Level 1A tapes are written simultaneously: the UVISI, SBV, SPIRIT III, and
OSDP data sets are written to tape, while the Contamination data is written to disk files.
When these four data sets are complete, the Contamination Level 1A tapes are written.
The tape writer program can handle up to four of each type of Level 1A tape per day.
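The time-ordering step described above might be sketched as follows; the pointer record layout is illustrative, not the actual DBase IV schema:

```python
def order_pointers(pointers):
    # Time-order pointer records, drop overlaps where two contacts
    # cover the same time, and report the gaps that must later be
    # padded with fill bytes. Each pointer is (time, disk_location),
    # with integer times for simplicity.
    ordered, gaps = [], []
    last_t = None
    for t, loc in sorted(pointers):
        if last_t is not None and t <= last_t:
            continue  # overlap between the stop and start of two contacts
        if last_t is not None and t > last_t + 1:
            gaps.append((last_t + 1, t - 1))  # interval to receive fill bytes
        ordered.append((t, loc))
        last_t = t
    return ordered, gaps
```

Because only the pointer files are sorted, the bulky science data itself never needs to be read again until the tapes are written.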
MOC NETWORK
The MOC network connects all parts of the MOC and includes a mail server for the
automated transmission of electronic products. The design of the MOC recognized that
certain data products, as documented in Reference 1, were needed to perform operations.
The MOC server handles the routine dissemination of these products to the various
portions of the MOC using software specifically developed for this purpose. Since this
network carries both classified and unclassified data, the MOC server also records all
transactions to satisfy the security requirements of electronic file transfer.
The MOC network and server forward to the MPC the operations products which the
MSX data community will need to analyze the data and understand the spacecraft
operation.
CONCLUSION
The MCC and MPC provide tools, facilities, and systems for the handling of both
spacecraft housekeeping telemetry and science data which enable the MSX spacecraft to
fulfill its mission.
REFERENCE
ABSTRACT
This paper will discuss the functions performed by the Spatial Infrared Imaging Telescope
(SPIRIT) III Data Processing Center (DPC) at Utah State University (USU). The SPIRIT
III sensor is the primary instrument on the Midcourse Space Experiment (MSX) satellite;
and as builder of this sensor system, USU is responsible for developing and operating the
associated DPC. The SPIRIT III sensor consists of a six-color long-wave infrared (LWIR)
radiometer system, an LWIR spectrographic interferometer, contamination sensors, and
housekeeping monitoring systems. The MSX spacecraft recorders can capture up to 8+
gigabytes of data a day from this sensor. The DPC is subsequently required to provide a
24-hour turnaround to verify and qualify these data by implementing a complex set of
sensor and data verification and quality checks. This paper addresses the computing
architecture, distributed processing software, and automated data verification processes
implemented to meet these requirements.
KEY WORDS
1. INTRODUCTION
The Midcourse Space Experiment (MSX) program is sponsored by the Ballistic Missile
Defense Organization (BMDO) to support its objectives to detect, acquire, and track
targets, and to discriminate lethal from nonlethal objects. The MSX mission objectives are
to measure the spectral, spatial, and radiometric parameters of various orbital and
suborbital targets, celestial sources, zodiacal emissions, the earth’s airglow, the aurora,
and other upper atmospheric phenomena. The measurement of potential target objects and
their associated phenomenology with terrestrial, earthlimb, and celestial backgrounds are
key to the success of the MSX program. The MSX program Principal Investigator Teams
are dedicated specifically to various aspects of the mission objectives: target functional
demonstration and phenomenology, background phenomenology, and calibration and
sensor characterization.1
The primary instrument aboard the MSX spacecraft is the Spatial Infrared Imaging
Telescope (SPIRIT) III. The SPIRIT III sensor was developed by the Space Dynamics
Laboratory of Utah State University (SDL/USU) and consists of an off-axis reimaging
telescope with a 35-cm-diameter unobscured aperture, a six-channel Fourier-transform
spectrometer, a six-band scanning radiometer, a cryogenic dewar/heat exchanger, and
instruments to monitor contamination levels and their effects on the sensor.1, 2 As developer
of the SPIRIT III sensor, SDL/USU is also responsible for developing and operating a
Data Processing Center (DPC) for this sensor. The SPIRIT III DPC is one element of the
overall MSX Data Management system.
Data Management has been a key part of the MSX program since its inception. The MSX
Data Management system is a distributed approach as opposed to a centralized processing
concept. The data flow of the MSX program is illustrated in Figure 1. The raw (Level 0)
data are downlinked from the spacecraft to the Mission Control Center (MCC). These data
are then sent to the Mission Processing Center (MPC) for sensor data separation,
pre-downlink reconstruction, and other processing of telemetry data that is required. The
output of this processing is called Level 1A data. The Level 1A data sets are sent to the
Data Processing Centers and, in parallel, to the Data Analysis Centers (DACs), where the
Principal Investigator (PI) Teams reside and analyze the data. The Level 1A data are
processed by the DPCs. The results of this processing are the updated calibration
coefficients and the health status of the instrument, which are then sent as data products
through the Backgrounds Data Center (BDC) to the DACs. These resultant files, along
with the Level 1A data, are input into a software package called Convert. The Convert
software applies the calibrations to the Level 1A data, resulting in corrected count
(Level 2) data and engineering unit (Level 2A) data.
The SPIRIT III DPC is tasked with monitoring and verifying all data received from the
SPIRIT III sensor in conjunction with the data’s release to the scientific community for
analysis. The PI Teams cannot perform certified processing on this sensor’s data until they
have received the results of the DPC’s data verification. The center is therefore required to
provide a 24-hour turnaround to verify and qualify the 8+ gigabytes of data a day that can
potentially be captured from the SPIRIT III sensor by the MSX spacecraft recorders. To
meet these performance requirements, the DPC has implemented a fully automated,
distributed computing system utilizing state-of-the-art computer hardware and software
technology.
Figure 1. MSX Data Flow
3.1.1 Hardware
The SPIRIT III DPC’s hardware configuration includes multiple workstations for sensor
data processing implemented as a loosely coupled, Fiber Distributed Data Interface
(FDDI) network of Silicon Graphics computers under the IRIX Operating System. This
processing Local Area Network (LAN) is linked to another LAN that allows sensor
engineers (Performance Assessment Team) access to the processing results in support of
sensor characterization efforts.
Essentially, the SPIRIT III data processing center contains an FDDI subnet composed of
two 8-processor Silicon Graphics Incorporated (SGI) Onyxs and five Indigo model
computers. One of the Onyxs also acts as the network file server. The SPIRIT III data
processing software can be hosted on any combination of Onyx processors and Indigos. In
the current configuration, the two SGI Onyxs contain 250 MHz MIPS R4400 processors.
Each Onyx contains 1 GB of memory. Each Indigo contains a single 150 MHz MIPS
R4400 processor with 96 MB of memory. Figure 2 illustrates the basic hardware
configuration. Although data processing rates have been well within DPC requirements,
the network transfer rates remain a limitation to the performance of the SPIRIT III data
verification software.3
Figure 2. SPIRIT III DPC Hardware Logical Configuration
3.1.2 Software
The suite of software programs designed to process data from the SPIRIT III portion of the
MSX spacecraft is referred to by the MSX community as the SPIRIT III Pipeline. The
Pipeline is a distributed application consisting of a data reformatter (Spooler), a graphical
user interface (GUI), a Dispatcher program, and a Compute_Node process. The Spooler
reformats the data from magnetic tape onto disk media. The GUI provides the user with a
quick, simple method of controlling the processing. The Dispatcher controls the job
assignments to the computers on the subnet and the Compute_Node component is
responsible for processing the sensor data.
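The division of labor among these components might be sketched as follows, using a thread pool as a stand-in for the PVM-based distribution; all names are illustrative rather than the actual Pipeline interfaces:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_node(data_file):
    # Stand-in for the Compute_Node component: "verify" one
    # self-contained sensor data file and report the result.
    return f"verified:{data_file}"

def dispatcher(data_files, n_nodes=4):
    # Stand-in for the Dispatcher: assign the spooled files to the
    # available compute nodes and collect results in submission order.
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        return list(pool.map(compute_node, data_files))
```

The key design point survives the simplification: because each data file is self-contained, the dispatcher needs no shared state beyond the job list.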
The strategy for the Pipeline software development was to minimize the complexity of the
distributed solution. The number of software components and the communication between
them were limited. The software was written in ANSI C, incorporates exception
processing, and exploits both the native capabilities of the network and Parallel Virtual
Machine (PVM) software for interprocessor communications and control. PVM is a
software package that allows a network of computers to appear as a single, large
distributed-memory computer.4 The transfer of data between computers is restricted to
native UNIX remote access commands for the sensor data software component. The
nature of the data is also exploited to create manageable sized data sets that can reside in
memory during processing. Additionally, use of these self-contained data sets reduces the
data coupling among sensor data processing components. With all of these aspects
combined (a loosely coupled solution that dissociates the sensor data processing from the
distributed processing environment, a simplified error processing scheme using
exceptions, and a limited software configuration), the Pipeline suite of software programs
has proven resilient and compatible with the dynamics of sensor data processing.3
Automated data verification is provided by the DPC’s Pipeline software. After the Level
1A tapes are spooled to disk, operators use the GUI to create a processing template. Once
the Pipeline processing has been initiated, the Dispatcher assigns processing tasks to the
compute nodes according to the processing template. The verification output products
(data and instrument products) that result from processing are tracked by the DPC SPIRIT
III Information Management System (SIMS) and also sent to the BDC for distribution to
the MSX community. Figure 3 illustrates the automated Pipeline data verification process.
The details of the data verification processes performed by the DPC Pipeline software
suite are described in the following sections.
The MSX spacecraft downlinks data from orbit at 25 Mb/s, 5 Mb/s, or 1 Mb/s telemetry
rates. The MPC sends the digitized, decommutated, time-ordered data to the DPCs on
8mm magnetic tape. The Spooler component of the SPIRIT III DPC’s software
suite reformats the data onto disk. As part of this process, the Spooler exploits the
regularity of the sensor data and breaks the data into discrete, self-contained data sets
which facilitate the distributed processing performed by the Pipeline. Any
conversion-required ancillary data are supplied in file headers.
Data are divided into sets based on their systematic arrangement. Both the radiometer and
the interferometer produce data based on regular scans. Interspersed among the
instruments’ scans are data resulting from the execution of sensor command macros
created for on-orbit calibration measurements. Both types of data provide a convenient
opportunity to break the data into discrete files. The radiometer can operate in three mirror
scanning modes in addition to a fixed mirror mode. For the radiometer, a period of
approximately 6 seconds allows the data in any of the mirror modes to be broken into
statistically equivalent files. These files are the basic processing unit for the distributed
processing performed by the Pipeline. The interferometer downloads much smaller
amounts of data and there is less of a need to break up the data in order to distribute the
workload. The interferometer data are therefore broken into files based solely on the
detection of sensor command macro data. By isolating the sensor command macro data
into individual files, the Spooler enables the SPIRIT III Pipeline to easily identify and
nonsequentially process the calibration measurements prior to their application to the
intervening data.3
During its reformatting process the Spooler provides important data verification functions.
The Spooler first ensures that the data provided on the Level 1A tape are properly
formatted. The Spooler will issue errors if the Level 1A data are not formatted as defined
by the MSX Program. The Spooler will also detect and remove unusable data, such as data
with MSX spacecraft frame sync errors. This significantly reduces the error processing
required by the sensor data processing software.
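The sync screening might be sketched as follows; the sync pattern shown is illustrative, not the documented MSX sync word:

```python
SYNC = b"\x1a\xcf\xfc\x1d"  # illustrative 32-bit sync pattern, not the MSX value

def usable_frames(frames):
    # Keep only frames whose leading sync word is intact; frames with
    # sync errors are dropped up front, as the Spooler does, so the
    # downstream sensor processing never sees them.
    return [f for f in frames if f[: len(SYNC)] == SYNC]
```

Filtering at the reformatting stage is what lets the sensor data software assume well-formed frames everywhere else.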
The automated Pipeline process consists of three major functions and several minor
functions such as long-term trending statistics, cryogen usage estimation, and quick-look
product generation. The three major Pipeline functions are as follows: first, ensure that
the instrument is operating within its operational envelope; second, monitor the calibration
of the instrument using data from the onboard stimulators in the start and stop sequences;
third, monitor and calculate the dark offset of each detector in the instrument. Figure 4
shows the detailed data flow for automated Pipeline radiometer and interferometer
processing.
The first step in automated processing is to build mission processing “templates” which
are based on the established bounds of the instrument. The templates characterize the
sensor’s operational envelope by defining acceptable performance parameters including
nominal upper and lower limits, spike detection parameters, noise values, and
rate-of-change limits for voltages, temperatures and other housekeeping parameters. These
templates guide the automated processing system and also include parameters regarding
instrument modes, and timing and file structures. Certified algorithms to compare the Level
1A data against the defined operational envelope are built into the Pipeline software. The
Pipeline compares the data against defined anomaly bounds and identifies data which fall
outside the acceptable parameters. Anomaly, spike, and dropout locations are stored in
anomaly files identifying decertified portions of the Level 1A data. These anomaly files are
delivered to the DACs and used as input templates for observation data conversion
routines.
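A minimal sketch of this envelope checking, with illustrative limits rather than the certified template values:

```python
def check_envelope(samples, lo, hi, max_rate):
    # Flag indices where a housekeeping parameter leaves its envelope:
    # outside the [lo, hi] limits, or changing faster than max_rate
    # per sample (a simple stand-in for spike detection).
    anomalies = []
    for i, v in enumerate(samples):
        if not lo <= v <= hi:
            anomalies.append((i, "limit"))
        elif i > 0 and abs(v - samples[i - 1]) > max_rate:
            anomalies.append((i, "spike"))
    return anomalies
```

The flagged indices play the role of the anomaly files: they mark decertified spans of data without altering the data itself.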
The second major automated processing function is designed to monitor the instrument
calibration and to measure long-term trends in the instrument performance. This
monitoring includes the assessment of the detectors’ non-uniformity corrections,
non-linearity corrections, offset correction, gain adjustments and gain normalization
routines. This is done by comparing the expected radiometric and interferometric
responses to standard sequences. These processes use information in the standard
calibration start and stop sequences and information derived during the assessment of
previous performance. This information is contained in configuration files used by the
Pipeline during processing. Quality control routines are applied to the data to ensure that
the instrument and its calibration are within the certified operational performance.
The third major automated function of the Pipeline is the generation of the dark offset
matrix (DOM) file used by the DACs for certified processing. The Pipeline generates the
DOM by comparing the starting dark offset sequences with the ending sequence and
interpolating between the measurements. As with the stimulator macros,
the dark offsets are compared against the template to check the limits and ensure the
proper operation of the instrument. This comparison allows the Pipeline to determine
on-orbit characteristics and generate a file containing coefficients to use for dark offset
correction of the Level 1A radiometer data.
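For a single detector, the interpolation between the starting and ending dark offset measurements might be sketched as a simple linear blend (illustrative; the certified algorithm is defined by the program):

```python
def dark_offset(t, t_start, dark_start, t_stop, dark_stop):
    # Linearly interpolate one detector's dark offset at time t between
    # the starting and ending dark offset sequence measurements.
    frac = (t - t_start) / (t_stop - t_start)
    return dark_start + frac * (dark_stop - dark_start)
```

Applying this at every sample time yields the coefficient file used for dark offset correction of the radiometer data.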
4. PERFORMANCE SUMMARY
The SPIRIT III DPC distributed Pipeline software, utilizing the current hardware
configuration, has proven successful in providing the DACs with the required data
verification in a timely manner. Table 1 gives a summary of DPC data processing
performance and software specifications to date.
Parameter                    Totals
Lines of Code (ANSI C)       81,275 in Pipeline suite of programs
Throughput                   0.36 Gigabytes/hour average rate (based on an
                             8-hr day; includes idle time)
                             2.5 - 3.0 Gigabytes/hour actual rate
Quantity of Data Processed   196.2 Gigabytes processed as of 7 June 96
5. REFERENCES
2. Space Dynamics Laboratory, SPIRIT III SENSOR User’s Guide, SDL/92-041, 1995.
4. Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek, and
Vaidy Sunderam, PVM 3 User’s Guide and Reference Manual, Sept. 1994.
Ultraviolet and Visible Imaging
and Spectrographic Imaging (UVISI)
Data Processing Center (DPC)
Abstract
The nine sensors and one image processor of the Ultraviolet and Visible Imaging and
Spectrographic Imaging (UVISI) instrument aboard the Midcourse Space Experiment
(MSX) satellite can potentially generate up to three gigabytes of data per day. The
UVISI Data Processing Center (DPC) must execute a multitude of complex processing
functions in a 24-hour operational window, verify the UVISI data and also provide a
compact, quantified record of the verification. The Center additionally must support
higher-level data analysis functions. Data processing functions are divided into pipeline
processing and data conversion processing. Pipeline processing, which consists of the
main pipeline process, Pipeline, and several auxiliary processes, is responsible for
generating Data Quality Indices (DQI) that summarize sensor performance and Data
Measurement Indices (DMI) that summarize sensor measurements. Both sets of indices
provide scientists and engineers with a compact, easily-reviewed record of instrument
performance. The conversion process, Convert, supports data analysis by converting raw
telemetry into scientific/engineering units. On a pixel-by-pixel basis, Convert provides
functions for dark-correction, flat-fielding, gain and gate adjustment, non-linearity
correction, and count-to-photon conversion. Operating in conjunction with Convert, a
pointing utility, Point, is used to determine the locations of selected objects in inertial
space. The accomplishment of these myriad tasks relies on a state-of-the-art computer
network using multiple workstations. Normal DPC operations are fully automated but
remain flexible enough to allow prompt intervention by the UVISI Performance
Assessment Team (PAT).
Introduction
The UVISI instrument consists of four imagers, five spectrographic imagers (SPIMs), an
image processor (IP), and associated electronics. The nine optical sensors perform
measurements in the wavelength range from the far ultraviolet (~0.11 µm) to the near
infrared (~0.85 µm) over a total dynamic range approaching 10^10. Each imager nominally
reports 256x244 pixels, while each SPIM nominally reports 272x40 pixels (the 272 pixels
represent a wavelength axis and the 40 pixels represent a spatial axis). The sizes of these
arrays can be changed to accommodate various telemetry formats. Two imagers have wide
fields of view of 13°x10°, while the other two imagers have narrow fields of view of
1.3°x1.0°. All imagers have a six-position filter wheel with open, closed, attenuating, and
bandpass filters. All SPIMs have a five-position filter wheel offering wide slits
(1.0°x0.10°) and narrow slits (1.0°x0.05°) with either open or attenuating filters. A scan
mirror on each SPIM sweeps the slit through a range of up to 1° in a perpendicular
dimension, thus adding a second spatial dimension to the SPIM observation. Intensified
charge-coupled devices automatically select sensor sensitivity based on scene intensity.
Operating in conjunction with any single imager, the UVISI image processor allows the
MSX spacecraft to acquire and track either point or extended targets. The sensors also
have data compression options, pixel summation options, gain-control options, frame rate
options, mirror scan options, and image processor options (if the IP is enabled).
The UVISI DPC processes downlinked data through three levels of processing: pipeline,
data conversion and pointing. Pipeline processing involves data verification, measurement
summarization, data cataloging, merging, generation of quick-look products, and
generation of raw, time-ordered telemetry data separated by sensor. The DPC routinely
sends the pipeline products to the Backgrounds Data Center (BDC) for archive and
distribution. Pipeline products are also sent to the Shortwavelength Terrestrial
Backgrounds Data Analysis Center (STBDAC) for scientific analysis and to the UVISI
Performance Assessment Team (PAT) for prompt engineering assessment of the
instrument’s flight performance. Convert processing involves the conversion of the raw
sensor data into scientific and engineering units on a pixel-by-pixel basis along with
calculations of uncertainties. Another capability of Convert is to extract point source
intensities and positions (in inertial space) from either the raw telemetry or time ordered
sensor data. Point can then determine the right ascension and declination of each of the
point sources found in Convert. The DPC must negotiate the dual Pipeline and Convert
processing under the twin pressures of a high data volume of up to 3 Gbytes per day and
the rapid turnaround of a 24-hour processing window. Figure 1 shows a simplified chart of the data
flow through the UVISI DPC.
The DPC software effort for Pipeline, Convert, and Point started in 1992 and was carried
out by a single software engineer, leading to a consistent, low-cost design. It began with
strong interaction with the UVISI engineering team, providing them with utilities to
analyze their data both graphically and statistically. This also helped launch the DPC
software design, since an intimate familiarity with the workings of the UVISI data control
system was developed.
Figure 1. UVISI DPC Data Flow (raw telemetry through Pipeline, Merge, Convert, and
Point, producing DQIs, DMIs, dark counts, housekeeping, time-ordered sensor data,
converted data, and point source locations)
After the basic design was completed, a configuration management system was initiated
on the premise that all changes resulting from software bugs, engineering changes, and/or
requirements changes, along with their rationale, must be tracked and logged. Effecting a
change consisted of a written change request, followed by an analysis effort and then
coding changes. Very little time was lost in this phase, because the one person performing
the DPC software changes knew the inner workings of all software within the DPC. The
software used in the DPC is discussed below.
Pipeline Processing
The Pipeline process is the very first in a series of routine pipeline processes to be
executed upon receipt of the raw telemetry data from the Mission Processing Center
(MPC). Its basic functions include the extraction of all the different types of data,
validation and separation of the sensor data, and summarization of the experiments via
snapshots of calibrated data.
The main purpose of Pipeline is to check for uncertifiable conditions within the telemetry
stream for each sensor. This is accomplished by checking sensor data against the
operational envelope defined for the UVISI instrument. This operational envelope was
developed jointly by the UVISI instrument PAT and DCATT teams over several years
after gaining a high degree of confidence in knowledge of the physical characteristics of
the sensors. It is composed of minimum and maximum values against which the actual
sensor data are compared, including items such as gain and gate limits, temperature,
voltage, and dark count limits, and uncalibrated filter positions, and provides the basis for
the validation of the UVISI sensor data. The results are output to the DQI files which, in
turn, are used by Convert to determine the certifiability of the Level 2 data. Pipeline also
automatically outputs DMI files which provide a statistical glimpse into
the pixel count levels for each sensor, spectral cross calibration (SCC) files which contain
statistics of overlapping spectral regions between the 5 spectrographic imagers, UVISI
temperature and voltage data, and the time ordered raw telemetry data files separated by
sensor. These time ordered sensor files may be used as input to Convert, as opposed to
using the raw telemetry files, thus significantly reducing execution time. They are also the
primary data source to provide feedback to the engineering, calibration and software teams
since they contain both raw images and sensor status data.
There are several additional processes which are executed upon completion of Pipeline
which are part of the routine verification processing. The UVISI DPC receives an
experiment, or event, in the form of one or more raw telemetry files. Pipeline is designed
to process one file at a time. Since Pipeline does not have prior knowledge of the entire
event as it is processing each telemetry file, we must wait until we are notified that all
telemetry files related to the event have been received. At that point, all event files are
merged together for each sensor and further dark count verification is performed.
The first step in pipeline processing is data extraction. The data extracted from the raw
telemetry include sensor, attitude, tracking, and housekeeping data. Since the UVISI
instrument was developed utilizing a myriad of data collection modes, an intelligent data
reduction system had to be put in place to detect and then decommutate its output. A
hierarchical, table-driven system, shown in Figure 2, was developed to accommodate all
telemetry formats down to the size and location information embedded at the detailed
minor frame level, which is dependent on the mode of the UVISI instrument. This led to a
very efficient and simple software coding effort for reading and extracting the raw
telemetry data, enabling a smooth transition into software maintenance.
Figure 2. Hierarchical Table-Driven System (Telemetry Formats, Data Control System
Formats, and Sensor Formats, covering the Hi Rate, Lo Rate, and Lo Rate Alternate
telemetry formats)
Raw telemetry files are laid out such that all of the different types of data can be found in
what is known as a major frame. A major frame contains data from several subsystems: the
attitude processor, tracking processor, and the data control subsystem, which is
responsible for outputting the sensor and housekeeping data. Each major frame contains a
time stamp with each of the different subsystems having a time offset relative to this major
frame time stamp. All of our output products contain both the major frame time stamp and
this relative offset in their record headers to inform the users analyzing the data of the
precise event time associated with each type of data. Computing the time offsets reduced
the complexity of our data extraction task since the different data types would not have to
be tracked and kept in memory for more than one time frame.
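The time-stamping scheme might be sketched as follows, with a hypothetical record layout:

```python
def stamp_records(major_frame_time, records):
    # Attach an absolute event time to each subsystem record: the major
    # frame time stamp plus the record's relative offset, both of which
    # are carried in the output record headers.
    return [dict(r, time=major_frame_time + r["offset"]) for r in records]
```

Because each record resolves its own time this way, no subsystem's data needs to be held in memory across major frame boundaries.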
An important step in the pipeline process is to retrieve from the sensor data the dark count
levels for each sensor. One of the factors in converting the raw sensor data to scientific
units is the subtraction of the dark count data which is a combination of the scene
background and noise from the sensor electronics. Pipeline is responsible for creating this
data. Anytime the sensors are put into a configuration where the image intensifiers are
turned on and the apertures are closed, an average of each pixel's data during each of these
time periods is stored in a separate file. These files are later merged to form a history of
dark count measurements over an entire event. Convert then uses this merged dark count
file to compute a dark count for each pixel by interpolating between successive dark count
measurements with respect to time.
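A minimal sketch of this per-pixel interpolation, assuming a merged dark count history of (time, per-pixel average) pairs; the data layout is illustrative, not the actual file format:

```python
# Sketch of the dark count interpolation described above: given a merged
# history of per-pixel dark count averages (one measurement per aperture-
# closed period), linearly interpolate in time to estimate each pixel's
# dark count at an arbitrary frame time.

def interp_dark_count(times, dark_counts, t):
    """Linearly interpolate per-pixel dark counts between the successive
    measurements bracketing time t; clamp outside the measured span."""
    if t <= times[0]:
        return dark_counts[0]
    if t >= times[-1]:
        return dark_counts[-1]
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            frac = (t - times[i]) / (times[i + 1] - times[i])
            return [a + frac * (b - a)
                    for a, b in zip(dark_counts[i], dark_counts[i + 1])]

# Two measurements for a 3-pixel row; interpolate halfway between them.
times = [0.0, 10.0]
meas = [[4.0, 5.0, 6.0], [6.0, 7.0, 8.0]]
row = interp_dark_count(times, meas, 5.0)   # -> [5.0, 6.0, 7.0]
```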
Data from a single satellite event is received in pieces. Once the data has been extracted to
form basic images it becomes straightforward to “merge” the pieces of an event together.
In other words, for each event there will be only one DQI file, one DMI file, one dark
count file, one temperature and voltage file, one attitude file and one time ordered sensor
data file for each sensor that is used for an event. This simplifies the archival/retrieval
functions over the lifetime of the mission.
During the Pipeline process, the flags within the DQI files pertaining to dark count levels
are automatically set signifying an alarm since these levels are not known until the entire
event is processed. Using a merged dark count file, a verification process reads in the dark
count levels at the beginning and end of an event and compares them to the limits set in the
operational envelope. The dark count flags are then set or cleared appropriately for each
record within the corresponding DQI files.
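The verification step above can be sketched as a simple limit check against the operational envelope. The limit values and record fields below are illustrative assumptions:

```python
# Sketch of the post-event verification: compare dark count levels from the
# merged dark count file against operational-envelope limits, then set or
# clear the DQI dark count alarm flag on each record. Limits and the record
# layout are illustrative assumptions, not the actual envelope values.

def dark_count_alarm(levels, low, high):
    """Return True (alarm) if any dark count level falls outside [low, high]."""
    return any(not (low <= v <= high) for v in levels)

records = [{"dark_levels": [10.0, 12.0]},
           {"dark_levels": [10.0, 250.0]}]
for rec in records:
    rec["dqi_dark_alarm"] = dark_count_alarm(rec["dark_levels"], 5.0, 200.0)
```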
Convert Processing
The two main goals of Convert are to convert the raw sensor data into scientific units
chosen from a list of options and to detect point sources within each imager's field
of view.
Data from the imager and spectrographic imager calibration files are used both in the
algorithm for converting the raw sensor data to corrected scientific units and for computing
the uncertainties in the results. The algorithm for converting the imager data is relatively
straightforward in that calibration data is extracted on a pixel-by-pixel basis, whereas the
spectrographic imager data has a total of 12 resolutions, rendering the calibration data
extraction much more difficult. Calibration data must be combined and co-added to match
each particular resolution. The software used in detecting point sources is a modified
version of that used inside the actual image processor on board the MSX spacecraft. It
uses the same techniques such as its background and noise removal, image detrending,
thresholding and finally blob centroiding.
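The contrast between the two conversion paths can be sketched as follows. This is not the actual UVISI algorithm; the calibration factors and bin sizes are illustrative assumptions:

```python
# Illustrative sketch: imager conversion applies a calibration factor pixel
# by pixel, while spectrographic imager calibration bins must first be
# co-added to match one of the coarser instrument resolutions.

def convert_imager(raw, cal):
    """Pixel-by-pixel conversion of raw counts to scientific units."""
    return [r * c for r, c in zip(raw, cal)]

def coadd(cal, factor):
    """Co-add fine-resolution calibration bins to match a coarser resolution."""
    return [sum(cal[i:i + factor]) for i in range(0, len(cal), factor)]

fine_cal = [0.5, 0.5, 1.0, 1.0]            # finest-resolution calibration bins
coarse_cal = coadd(fine_cal, 2)            # -> [1.0, 2.0]
converted = convert_imager([10.0, 20.0], coarse_cal)   # -> [10.0, 40.0]
```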
It is important to allow the user flexibility in processing files both from an analysis and a
time standpoint. The user has the option to process portions of files which may be thought
to be the most interesting simply by entering a finite number of time frames. This option
has the added benefits of speeding software debugging and of selecting images within an
event based on DQI flags. A related option allows the user to process a subset of the
data by specifying which sensors are to be processed.
As far as processing speed is concerned, one of the best options the users have at their
disposal is to run using the time ordered sensor data files. This allows Convert to avoid the
tedious task of extracting and decompressing the sensor, attitude and UVISI housekeeping
data. Because these files are separated by sensor, the user may choose to process only a
subset of the 9 UVISI sensors of interest. Therefore, the files corresponding
to the sensors not chosen are not read into memory.
Another issue which may affect performance is the size of the output. There are options to
scale the data when size becomes an issue. If the user chooses to exercise this option the
scale factor and offsets are provided in the output so that the full values can be generated
at a later time.
As is typical with CCD devices, there are usually pixels that fail and therefore need to be
disregarded. We have given the user the ability to substitute any scalar value for these
pixels before they are run through the conversion algorithm.
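The two output options above, scaling with a recoverable factor and offset, and dead-pixel substitution, can be sketched as follows. The 8-bit target range and the mask representation are illustrative assumptions:

```python
# Sketch of two options described above: (1) scale data into a smaller
# integer range, storing the scale factor and offset in the output so full
# values can be regenerated later, and (2) substitute a user-chosen scalar
# for dead pixels before conversion.

def scale(values):
    lo, hi = min(values), max(values)
    factor = (hi - lo) / 255.0 if hi > lo else 1.0
    scaled = [round((v - lo) / factor) for v in values]
    return scaled, factor, lo              # factor and offset travel with the output

def unscale(scaled, factor, offset):
    return [s * factor + offset for s in scaled]

def patch_dead_pixels(pixels, dead_mask, fill):
    return [fill if dead else p for p, dead in zip(pixels, dead_mask)]

vals = [0.0, 510.0]
scaled, f, off = scale(vals)               # scaled -> [0, 255]
restored = unscale(scaled, f, off)         # -> [0.0, 510.0]
```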
There are two steps required to attain certification of the converted UVISI sensor data.
The first step which is performed in pipeline processing is catching all anomalies related to
the UVISI instrument; the results of this step are stored in the DQI file. Step 2 involves
checking of both the DQI files and the Convert run time options. The DQI files determine
which image frames, if any, are not to be certified, whereas the run time options are
checked for decertification of the entire event.
The data from the point source files also go through a certification process. For each point
source, all the pixels in the background area used in the detection process are checked to
see if any are dead or saturated, near the edge of the field of view, or overlaid by the
fiber optic taper covering the field of view. If any of these conditions exist, then the
software indicates in the output that the point source in question is not certified.
Point Processing
Point is designed to compute pointing information for each sensor’s boresight and for all the
point sources found within each imager’s field of view. It processes the converted sensor
data produced by Convert by converting pixel coordinates to inertial coordinates (i.e. right
ascension and declination in J2000 coordinates). The user is given the options of having
Point produce the right ascension and declination for the sensors’ boresights and/or each
pixel within the sensors’ images or the RA and DEC of the point sources detected in the
Convert process.
The pointing information is obtained using the attitude alignment data computed by our
MSX Attitude Processing Center (APC) along with several items embedded within the
calibration files such as distortion factors and azimuth and elevation angles. The algorithm
is the same for both the sensor boresight and point source inputs. After correcting for
distortion, the given pixel coordinates are converted to sensor coordinates and then to
spacecraft coordinates via the alignment correction. From here the earth centered inertial
(J2000) vector is calculated which is then used as input to compute the right ascension and
declination.
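The chain of transformations above can be sketched schematically. The rotation matrices here are identity placeholders standing in for the alignment correction and attitude solution; only the structure of the computation reflects the text:

```python
# Schematic sketch of the Point algorithm chain: distortion-corrected pixel
# coordinates -> sensor frame -> spacecraft frame (alignment correction) ->
# Earth-centered inertial (J2000) vector -> right ascension and declination.
import math

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def ra_dec(eci):
    """Right ascension/declination (degrees) from a unit J2000 vector."""
    x, y, z = eci
    ra = math.degrees(math.atan2(y, x)) % 360.0
    dec = math.degrees(math.asin(z))
    return ra, dec

# Identity alignments for illustration; a boresight along +X maps to RA=0, Dec=0.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
sensor_vec = [1.0, 0.0, 0.0]               # distortion-corrected boresight
sc_vec = mat_vec(I, sensor_vec)            # sensor -> spacecraft (alignment)
eci_vec = mat_vec(I, sc_vec)               # spacecraft -> J2000 (attitude)
ra, dec = ra_dec(eci_vec)                  # -> (0.0, 0.0)
```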
Products
Calibration files contain all the calibration factors used in converting the raw sensor data
into scientific units. A new data set is developed anytime the characteristics of any of the
sensors have changed or when a correction is in order. One goal of the UVISI DPC was to
be able to process all raw sensor data with the correct calibration data regardless of the
time in which that sensor data was recorded on the spacecraft. We therefore came up with
a mechanism for keeping track of all of the calibration data by appending new data sets to
the existing calibration files, similar to that used for the operational envelope. Associated
with each data set is a time frame with which any sensor data can be matched. It then
becomes a simple task of skipping over calibration data sets until the time frame
matches the time associated with the current sensor data.
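A minimal sketch of that lookup, assuming each appended data set carries a (start, end) validity window; the tuple structure is illustrative, not the actual file format:

```python
# Sketch of the calibration lookup described above: data sets are appended
# to the calibration file in order, each tagged with a validity time frame;
# the reader skips data sets until the sensor data time falls within one.

def find_calibration(cal_sets, sensor_time):
    """Return the first calibration data set whose time frame contains
    sensor_time; cal_sets are (start, end, data) tuples in file order."""
    for start, end, data in cal_sets:
        if start <= sensor_time < end:
            return data
    raise LookupError("no calibration data set covers this time")

cal_file = [(0.0, 100.0, "set-A"), (100.0, 200.0, "set-B")]
matched = find_calibration(cal_file, 150.0)    # -> "set-B"
```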
UVISI calibration data includes gain, gate (accumulation time), aperture, field-of-view and
distortion factors. Uniformity and response factors make up 90% of the calibration files
and are based on the sensor's filter position. Because these factors have a one-to-one
correspondence with each pixel, only the relevant portion of this data is loaded into
memory when there is a change in filter position. Since most experiments
do not require many filter changes, this approach optimized both I/O and memory use. One of the
goals of the DPC is to be flexible enough to accommodate all subscribed users of the DPC
Convert software. Therefore conserving memory is more of an issue than speed when
providing the ability to run software at more than one site under multiple platforms.
The DQI files are used to summarize and validate the state of the UVISI instrument. The
general health of the instrument and all anomalies are packed into a relatively small file.
Typically, a 20-minute event, or 2 gigabytes of UVISI data, can be compacted into
approximately 2 megabytes of DQI. Besides providing a detailed frame-by-frame view of
the UVISI instrument, the DQI files contain a file header which gives a summary of all
significant engineering failures. This can be useful to the engineering team as a brief
overview of an event. By looking at the file header first, they get a heads up on what to
look for when troubleshooting an anomaly.
Where the DQI files summarize the health of the UVISI instrument, the DMI files are used
to summarize the sensor data. These files provide snapshots of calibrated sensor data at 5
second time intervals. This spectrographic sensor data consists of statistical data such as
averages and standard deviations of spectral irradiances over specified spectral regions
while the imager data contains the first three moments of the pixel count distribution over 5
CCD regions along with a count of the number of pixels exceeding 5 standard deviations
of a region’s average count. The DMI files are slightly larger than the DQI files; together
the two provide the analyst with a small but thorough summary of each event.
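The imager region statistics described above can be sketched directly. This illustrates the quantities named in the text (first three moments and the 5-sigma outlier count) rather than the actual DMI generation code:

```python
# Sketch of the imager DMI statistics: the first three moments of a pixel
# count distribution for a CCD region, plus the number of pixels exceeding
# 5 standard deviations of the region's average count.

def region_stats(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5
    skew = (sum((p - mean) ** 3 for p in pixels) / n) / std ** 3 if std else 0.0
    n_hot = sum(1 for p in pixels if abs(p - mean) > 5 * std)
    return mean, std, skew, n_hot

mean, std, skew, n_hot = region_stats([1.0, 2.0, 3.0, 4.0])
```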
Analysis Tools
ALUICE has four modes of operation: 1) image viewing allows the user to view specific
data records as a pseudocolor image or surface plot display; 2) XY plotting allows the user
to plot any metadata variable found in the Convert output record headers against any other
variable, for example, time, alarm status, configuration status, attitude information, etc; 3)
variable extraction allows the user to extract portions of images or metadata and save them
as self-documenting IDL data structures; 4) toolbox functions allow the user to select
image or vector data for processing with simple toolbox functions.
In order to monitor the long term health and performance of the UVISI instruments, the
UVISI Performance Assessment Team designed a set of Data Certification (DC)
experiments. The data accumulated during each of these sets of experiments must be
processed routinely and compared to data from previous experiments throughout the 2 to 5
year MSX mission lifetime. Experiment plans are scheduled to repeat on a weekly, bi-
weekly or monthly schedule. Processing of the data from these specific experiments to
produce standard plots and to create reduced data bases eases the burden of data analysis.
Baseline software to support the production of standard plots was developed for the
following experiment plans: DC14 UVISI Internal Dark Signal Characterization
Experiment Plan, DC16 UVISI Star Calibration - Staring Mode, DC17 - UVISI Star
Calibration - Scanning Mode, and DC42 - Temperature and Voltage Characterization.
Because of the difficulties associated with testing this software without complete test data
sets, additional post launch work is required to complete the development of this set of
software tools. The generic set of tools developed for the ALUICE software package eases
this effort by providing a set of basic utilities which may be adapted for specific interactive
and non-interactive analysis efforts.
In order to aid the UVISI engineering team in their efforts to detect any problems that may
have an adverse effect on the quality of the UVISI instrument, a file containing detailed
frame-by-frame instrument and spacecraft housekeeping parameters is created by the DPC
pipeline for every data collection event. The Housekeeping User Interface (HUI) has been
developed for viewing the contents of this binary file. It is capable of running in a non-
interactive mode to create summaries of error conditions. The user can then view these
housekeeping files one frame at a time to graphically view the entire record and narrow
down the cause of each alarm found. XY plotting capabilities are currently being added to
allow the user to see at a glance the variations of alarm settings across the entire
data collection event timeline.
Summary
The UVISI DPC is successfully operating and processing all MSX UVISI data as was
originally designed after years of teamwork between the software developers and all the
scientific and engineering teams related to UVISI. All DPC software was certified for
functionality and correctness before the launch of the MSX satellite. We attribute this
success to a very small development team, which fostered strong communication. The
importance of our working side-by-side with the UVISI engineering team in the checkout
of the DCS hardware also cannot be overemphasized. Without that effort our software
development process would have taken much longer and may not have been as thorough in
accounting for all the subtleties of the hardware. We were fortunate to be in close
proximity to the scientific community as well. Since they are also the primary users of the
UVISI data, we were able to develop our software with flexibility to meet changing
requirements. Communication has once again proven to be the key ingredient to a
successful development process.
SPACE-BASED VISIBLE (SBV) SURVEILLANCE DATA
VERIFICATION AND TELEMETRY PROCESSING
ABSTRACT
This paper discusses the telemetry processing and data verification performed by the SBV
Processing, Operations and Control Center (SPOCC) located at MIT Lincoln Laboratory
(MIT LL). The SPOCC is unique among the Midcourse Space Experiment (MSX) Data
Processing Centers because it supports operational demonstrations of the SBV sensor for
Space-Based Space Surveillance applications. The surveillance experiment objectives
focus on tracking of resident space objects (RSOs), including acquisition of newly
launched satellites. Since Space Surveillance operations have fundamentally short
timelines, the SPOCC must be deeply involved in the mission planning for the series of
observations and must receive and process the resulting data quickly. In order to achieve
these objectives, the MSX Concept of Operations (CONOPS) has been developed to
include the SPOCC in the operations planning process. The SPOCC is responsible for
generating all MSX spacecraft command information required to execute space
surveillance events using the MSX. This operating agreement and a highly automated
planning system at the SPOCC allow the planning timeline objectives to be met. In
addition, the Space Surveillance experiment scenarios call for active use of the 1 Mbps
real-time link to transmit processed target tracks from the SBV to the SPOCC for
processing, and for short-timeline response of the SPOCC to process the track of the new
object and produce new commands for the MSX spacecraft, or other space surveillance
sensors, to re-acquire the object. To accomplish this, surveillance data processed and
stored onboard the SBV is transmitted to the APL Mission Processing Center via 1 Mbps
contacts with the dedicated Applied Physics Laboratory (APL) station, or via one of the
AFSCN RTS locations, which forwards the telemetry in real-time to APL. The Mission
Processing facility at APL automatically processes the MSX telemetry to extract the SBV
allocation and forwards the data via file transfer over a dedicated fractional T1 link to the
SPOCC. The data arriving at the SPOCC are automatically identified and processed to yield
calibrated metric observations of RSOs. These results are then fed forward into the
mission planning process for follow-up observations.
In addition to the experiment support discussed above, the SPOCC monitors and stores
SBV housekeeping data, monitors payload health and status, and supports diagnosis and
correction. There are also software tools which support the assessment of surveillance
experiment results and produce a number of products used by the SBV instrument
team to assess the overall performance characteristics of the SBV instrument.
KEY WORDS
OVERVIEW
MSX will be launched from Vandenberg Air Force Base into a near-polar, low-earth, near
sun-synchronous orbit. The MSX, discussed further in Ref. 1, consists of the satellite
superstructure, three primary optical sensors, contamination instrumentation and the
spacecraft support subsystems. The optical axes of the three primary sensors (Space
Infrared Imaging Telescope (SPIRIT III), Space-Based Visible (SBV) sensor, and
Ultraviolet/Visible Imagers and Spectrographic Imagers (UVISI)) are parallel to one
another and point in the +X direction.
The SBV sensor, discussed in Refs. 2 and 3, is the primary visible wavelength sensor
aboard MSX. It will be used to collect data on target signatures and background
phenomenologies, but the primary mission of SBV will be to conduct functional
demonstrations of space-based space surveillance. SBV incorporates a 15 cm, off-axis,
all-reflective, reimaging telescope with a thermoelectrically-cooled CCD focal plane array.
SBV also includes an image processing system, experiment control system, telemetry
formatter, and a RAM data buffer for temporary data storage.
MSX SPACE SURVEILLANCE OPERATIONS
Currently the United States maintains a world wide network of ground based sensors
tasked with the acquisition of tracking data on all manmade objects in orbit around the
earth. These sensors include a network of passive optical systems which are limited to a
short duty cycle by poor weather and by daylight. Since foreign based sites are
progressively more expensive and inconvenient to support, it is natural to ask whether
ground based sensors could be supplemented or replaced by satellite based sensing
systems. Satellite based sensors are not limited by daylight operation or poor weather and
a single satellite borne sensor can sample the entire geosynchronous belt satellite
population several times per day.
One of the missions of the MSX satellite is to demonstrate the feasibility of these space-
based space surveillance operations. The mission planning for the Space Surveillance
experiments on the MSX satellite requires the ability to leave considerable flexibility in the
experiment timing and attitude profile to be followed by the MSX in the experiment
execution until late in the experiment planning process. Under “normal” circumstances the
details of the operation, consisting of the list of satellites to be observed, the attitude
profile for the MSX and the data acquisition times can be defined one to two days before
the execution on the MSX. Special “quick reaction events”, such as acquiring track data
on a newly launched satellite in its transfer orbit to the geosynchronous belt, require
reaction times on the order of hours. More details on the concept of operations which
allows space surveillance operations to be planned and executed on the MSX satellite are
provided in Ref. 4. These experiments also require that the observational data be quickly
down-linked, processed and the results be available to feed forward into the next series of
space surveillance observations to be executed by the satellite.
The MSX control network used to execute MSX space surveillance operations is shown in
Figure 1. The SPOCC, located at Lincoln Laboratory, is connected to the MSX Mission
Operations Center, located at Applied Physics Laboratory (APL), via a dedicated
fractional T1 link. The SPOCC develops the commands needed to execute the space
surveillance operations, including all instrument and bus commands, and forwards them to
APL for inclusion in the command uploads. Command uploads may be sent to the MSX
via a dedicated station at APL or via the AFSCN. One Mbps telemetry is returned to APL
via the AFSCN stations or is received at the dedicated APL station. Generally the
CONOPS of the SBV calls for surveillance data to be processed onboard to yield
detections of moving objects in the field, with the results stored in a RAM buffer for later
down link. The data are down-linked via the 1 Mbps link, processed at APL to extract the
SBV allocation of the telemetry stream and the results automatically forwarded to the
SPOCC.
SPOCC ARCHITECTURE OVERVIEW
The top level architecture chosen for the SPOCC is shown in Figure 2. The flow of
operations starts in the upper left with the receipt of the experiment requirements. The
Mission planning process is an iterative process between the SPOCC and the Mission
Operations Center (MOC), as described in Ref. 4, and results in the generation of the
commands needed to execute the space surveillance event on the MSX. The commands
are provided to the MOC, over the dedicated link between the facilities, for incorporation
into the MSX command uploads. The resulting telemetry is provided to the SPOCC via file
transfers across the same dedicated link shortly after the completion of any contact
between the MSX and a ground station. The files received at the SPOCC may contain
1 Mbps or 16 Kbps telemetry or a broad range of other planning and operations data. The
files are automatically identified, decommutated and processed as described in the
following section. The resulting data are forwarded to the SBV health and status
monitoring function or the science data processing pipeline. The science data processing
pipeline automatically applies the appropriate calibrations and processes the
observational data to yield calibrated metric and radiometric observations of resident space
objects. The processing includes the determination of spacecraft attitude from the
astrometric information contained in the observations.
As discussed above, the telemetry files downloaded from the MSX spacecraft are sent
from APL to the SPOCC via a dedicated fractional T1 link. The traffic on the link also
includes many other types of files, including ancillary mission planning products, such as
the predicted MSX orbital geometry data and MSX operations schedules. Thus, the
SPOCC ingest processing must receive and appropriately process a number of products in
an automated fashion. As each file is received, a data ingest script checks the received data
file's extension and launches the appropriate processing. The downloaded telemetry files
are routed to the decommutation software and the ancillary files are sent to various
processes or copied to destination directories. This process is shown pictorially in
Figure 3.
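The extension-driven dispatch described above can be sketched as a small routing table. The extensions and handler names here are illustrative assumptions, not the actual SPOCC file-naming conventions:

```python
# Sketch of the automated ingest dispatch: each arriving file's extension
# selects its processing path (telemetry to the decommutator, ancillary
# products copied or routed elsewhere, everything else to a default).
import os

def route(filename, handlers, default):
    ext = os.path.splitext(filename)[1].lower()
    return handlers.get(ext, default)

handlers = {".tlm": "decommutate",
            ".eph": "copy_to_geometry",
            ".sch": "copy_to_schedules"}
action = route("pass_0142.TLM", handlers, "archive")   # -> "decommutate"
```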
During a surveillance experiment, the SBV collects data sets, each composed of a series of
between 2 and 16 frames of CCD data, while pointing in an inertially invariant direction.
Each “frame set” is processed by the SBV's onboard Signal Processor. The Signal
Processor extracts star data, seen as fixed point source objects, and target data, each seen
as a streaking point source. The positions and signatures of the desired number of stars are
stored in the SBV's internal RAM buffer. During ground processing, these stars will be
automatically matched against a star catalog and a fit accomplished to precisely determine
the pointing of the SBV. Information on each streaking Resident Space Object (RSO)
detected is stored in a report containing the locations of the endpoints of the streak, and
the radiometric values associated with a swath of pixels along the path of the streak.
Several other reports are also stored in the buffer. Observation header (OBSHDR) reports
store information regarding the SBV's configuration and time of frame set collection.
Snapshots of SBV health and status data are stored as housekeeping (HSKP) reports.
Reports from a Single Event Upset (SEU) experiment and a Total Dosimetry (TD)
experiment are also recorded periodically. After the data collection event has been
completed, the contents of the SBV RAM buffer are downloaded via the 1 Mbps link
during a contact, generating a prime science file.
During ground contacts, health and status telemetry are also downloaded via the 16 Kbps
link, generating a health and status file of all the MSX systems and instruments. The SBV
has enough allocation to provide only a snapshot of the SBV health and status once per
second. The full health and status report is contained in a subcommutated portion of the
telemetry frame which takes 22 seconds to complete a single cycle.
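Reassembling such a subcommutated report can be sketched as collecting one slot per frame until the 22-slot cycle completes. The slot numbering and payload format are illustrative assumptions:

```python
# Sketch of reassembling the subcommutated SBV health-and-status report:
# each 16 Kbps telemetry frame carries one slot of a 22-slot cycle (one
# frame per second, so 22 seconds per complete report).

CYCLE = 22

def assemble(frames):
    """Collect subcom slots; yield a full report each time all 22 arrive."""
    report = {}
    for slot, payload in frames:
        report[slot % CYCLE] = payload
        if len(report) == CYCLE:
            yield [report[i] for i in range(CYCLE)]
            report = {}

frames = [(i, f"field{i}") for i in range(CYCLE)]
full = next(assemble(frames))          # complete report after 22 frames
```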
Both prime science and health and status files are sent to the same decommutator software
but they are processed differently, as shown in Figure 3. For health and status files, the
decommutator strips out the SBV health and status data. These data are stored in a health
and status database where they can be viewed and processed for long term trending. The
data are also processed by real-time software which monitors the SBV health and status,
looking for anomalies and malfunctions. This software is relatively complex due to the
subcommutated nature of the health and status telemetry stream. The monitoring software
generates different levels of alarms for different anomalies, the most urgent of which result
in automatic paging of key personnel for anomaly resolution. The automated notification of
alarm conditions is of great importance because due to funding limitations, the SPOCC is
staffed only 8 hours/day, five days per week.
Prime science files are processed by the decommutator to extract the star, streak, and other
reports. The star and streak data will be processed by the data reduction software to
generate metric observations. The data reduction software requires a command file
describing the data to be processed. This command file is automatically generated by the
decommutator, based on the contents of the telemetry file, to process each set of star and
streak data from a particular frame set. The star and streak data themselves are extracted
from their reports and loaded into a prime science database. SEU and TD reports are
processed and stored in data files associated with radiation experiments and the snapshots
of health and status in the HSKP reports are added to the health and status database.
Figure 4 shows the data reduction pipeline process which is applied to each frame set of
data. The data reduction software takes as input the command file generated by the
decommutator, and accesses the prime science data base to extract the stars and streak
data along with an estimate of the SBV attitude based on the MSX's attitude system output,
which is stored in the OBSHDR data. The data reduction process begins by refining the
knowledge of the attitude of the SBV using a known star catalog as a reference. Candidate
reference stars are extracted from the catalog and transformed into focal plane coordinates
by applying a distortion map. The distortion of the SBV optical system is large due to the
off-axis design which is optimized for stray light rejection. The star detections are matched
with the cataloged candidates on the list, and a fit run to establish the precise (better than
an arcsec in most cases) attitude of the SBV during the frame set. This information is
output to the attitude history file.
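The matching step above can be sketched as a nearest-neighbor search in focal plane coordinates; the attitude fit itself is omitted, and the coordinates and tolerance are illustrative:

```python
# Sketch of the star-matching step: candidate catalog stars, already
# transformed into focal plane coordinates via the distortion map, are
# paired with detections by nearest neighbor within a tolerance.

def match_stars(detections, candidates, tol):
    """Pair each detection with the nearest candidate within tol (pixels)."""
    pairs = []
    for d in detections:
        best = min(candidates,
                   key=lambda c: (c[0] - d[0]) ** 2 + (c[1] - d[1]) ** 2)
        if (best[0] - d[0]) ** 2 + (best[1] - d[1]) ** 2 <= tol ** 2:
            pairs.append((d, best))
    return pairs

dets = [(10.1, 20.2), (300.0, 40.0)]
cands = [(10.0, 20.0), (120.0, 80.0)]
pairs = match_stars(dets, cands, tol=2.0)   # only the first detection matches
```

The matched pairs then feed the fit that refines the SBV attitude; unmatched detections are left out so spurious sources do not corrupt the solution.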
The precise attitude is used to calculate the Right Ascension and Declination of the
endpoints of each streak in the frame set, whose signature is used to refine the endpoint
locations. The endpoint positions are then checked against the catalog of known resident
space objects in an attempt to identify known and unknown objects. These endpoint
positions and their times are then written out as metric observations. Data quality indices
are calculated for each frame set based on factors such as the residuals from the star
match.
SUMMARY
This paper has described the mission planning and telemetry processing system developed
to support the execution of space surveillance operations on the MSX satellite. The system
is highly automated to support the short timelines inherent in the mission and is closely
integrated with the MSX Mission Operations Center which operates the MSX satellite.
The SPOCC telemetry processing system is also constructed as an integral part of the
SPOCC data management system so that incoming telemetry files are immediately
processed, distributed to the appropriate data base and the user notified. In addition, the
receipt of specific types of data automatically initiates additional levels of processing to
generate derived products. This level of automation allows the SPOCC to operate
on a timeline suitable to the mission with a minimum of operations personnel.
ACKNOWLEDGMENT
MIT Lincoln Laboratory is the developer of the SBV and the operator of the SPOCC
under Department of the Air Force Contract F19628-95-C-0002.
REFERENCES
1
Mill, J.D., O’Neil, R.R., Price, S., Romick, G.J., Uy, O.M., Gaposchkin, E.M., Light,
G.C., Moore, W.W., Murdock, T.L., and Stair, A.T., “Midcourse Space Experiment:
Introduction to the Spacecraft, Instruments, and Scientific Objectives,” Journal of
Spacecraft and Rockets, Vol. 31, No. 5, Sept.-Oct. 1994, pp. 900-907.
2
Dyjak, C., and Harrison, D.C., “Space-Based Visible Surveillance Experiment,” SPIE
Proceedings, Vol. 1479, Surveillance Technologies, Apr. 1991, pp. 42-56.
3
Harrison, D.C., and Chow, J.C., “Space-Based Visible Sensor on MSX Satellite,” SPIE
Proceedings, Vol. 2217, Aerial Surveillance Sensing Including Obscured and Underground
Object Detection, Apr. 1994, pp. 377-387.
4
Stokes, G.H., and Good, A.C., “Joint Operations Planning for the Midcourse Space
Experiment Satellite,” Journal of Spacecraft and Rockets, Vol. 32, No. 5, Sept.-Oct.
1995, pp. 812
Figure 1. MSX Control Network
ABSTRACT
A system was developed using capabilities from the Range Applications Joint Program
Office (RAJPO) GPS tracking system and the ACMI Interface System (ACINTS) to
provide tracking data and visual cues to experimenters. The Mobile Advanced Range Data
System (ARDS) Control System (MACS) outputs are used to provide research data in
support of advanced project studies. Enhanced from a previous system, the MACS
expands system capabilities to allow researchers to locate where Digital Terrain Elevation
Data (DTED) is available for incorporation into a reference data base.
The System Integration Group at Veda Incorporated has been supporting Wright
Laboratories in the ground-based tracking and targeting arena since 1989 with the design,
development, and integration of four generations of real-time, telemetry-based tracking
aids. Commencing in Q3 1995, Veda began developing a mobile, transportable system
based on the RAJPO GPS tracking system. The resulting system architecture takes
advantage of the front end processor (FEP) used in the three previous generations of
interface systems built for Wright Laboratories, thus maximizing hardware and software
reuse. The FEP provides a computational interface between the GPS tracking system and
the display (operator) system.
The end product is a powerful, flexible, fully mobile testbed supporting RDT&E
requirements for Wright Laboratories, as well as to other U.S. and foreign research
organizations. The system is rapidly reconfigurable to accommodate ground-based
tracking systems as well as GPS-based systems, and its capabilities can be extended to
include support for mission planning tools, insertion of virtual participants such as DIS
entities, and detailed post-mission analysis.
KEY WORDS
INTRODUCTION
Advanced airborne technology research at Wright Laboratories has been a productive,
ongoing effort for several years. Early in the program, researchers recognized a need to
maximize control over as many variables as possible during research activities. Ground-
based multilateration systems such as the Air Combat Maneuvering Instrumentation
(ACMI) and Red Flag Measurement and Debriefing System (RFMDS) have provided
reliable data to support these tests; however, they limit the geographical area available for
testing because they require system hardware that is located at permanent installations.
Capitalizing on the freedom achieved via GPS/DGPS systems, research, development, test
and evaluation (RDT&E) organizations can locate away from ground-based systems and
locate their operations at a multitude of suitable locations. GPS/DGPS based systems also
provide an opportunity to use the system for activities other than pure RDT&E. They may
be used to provide tools for mission planning, performance monitoring, fault diagnosis,
DIS and other virtual reality applications. Mission rehearsals and predictions can take
place in a virtual environment and synthesized results stored in data bases for recall and
detailed analysis. Once completed, data from actual missions can be merged with the
synthetic data to provide a more realistic picture of how combat entities behave in different
environments.
SYSTEM ARCHITECTURE
Mobility is achieved by installing the system into a fully mobile, self-contained vehicle
suitable for transport, setup, installation, and operation in the field. The vehicle selected for
this application was the result of several months of specification development, vendor pre-
selection (on-site) visits, and release of requests for proposals. Barth Specialized Vehicles
was selected as the vehicle manufacturer to house the GPS tracking system and other
tracking interface equipment. Growth was a contributing factor in the selection - the
vehicle needed to be sufficiently flexible to allow other research activities well past the
maturity point of the current program. The resulting subsystem is termed the Mobile
ARDS Control System (MACS).
Since the on-range GPS ground station is likely to be located remotely from the aircraft
staging area, a second vehicle is needed to support the airborne instrumentation
subsystems. These systems consist of the High Dynamic Instrumentation Set (HDIS)
attached to the aircraft via commonly used missile “pod” attachment techniques. The
vehicle for staging activities near the aircraft needs to support pod storage, testing,
reconfiguration, and maintenance activities. The ARDS Pod Operation and Maintenance
(APOM) facility is towed by a heavy duty tandem pickup which doubles as local
transportation during deployments (APOM is not discussed in detail in this paper).
[Figure 1 diagram (caption not recovered): MACS block diagram showing the HDIS and
ARDS Ground Station connected via Ethernet to the front-end processor, enhanced
graphics display, monitor and control suite, and multiple sensor suites.]
[Figure 2 diagram: CSUs include the XTERM, range, record, and graphical display
interfaces (menu bar, window, session initiation/termination service requests), the DLP
interface, and the graphics processor interface.]
Figure 2. The MACS software reuse scheme reduces development time and errors and
controls costs. Modified CSUs are shown in reverse colorization (CSUs shown
represent actual units with titles removed for diagram clarity).
New hardware and software were added to the system to achieve the GPS tracking
capability that gives the system its mobility. The GPS tracking system (ground components)
consists of a Reference Receiver/Processor, Master Remote Ground Station, Data Link
System Processor, and Host Range Interface Processor (see reference 1). The graphical
interface and mission monitoring function is the most significant of the MACS
enhancements. It accommodates the system's independence from ground-based
multilateration systems and provides critical mission-specific data to the primary control
function. The hardware
platform consists of a Silicon Graphics Indigo2 High Impact graphics processor with 250
MHz processor, 32 Mbytes of RAM, and enhanced graphics engine. This processor is
responsible for all terrain database development and display processing for pre-mission
setup and real-time mission monitoring. The database application program consists of a
Coryphaeus toolset (Easy-T, Designer’s Workbench, and Easy Scene, all registered
trademarks of Coryphaeus Software Incorporated).
Control Interface
The MACS control interface function oversees incoming tracking data, recording, aircraft
selection, and bus access arbitration via interfaces to the range, display/user, recording,
and sensor suite handling routines. The control function maintains the MACS and sensor
suite calibration data via central configuration files, accessed during system setup,
operation, calibration, and modification. The control function prevents out-of-bounds
operations on parameters passed between the various MACS tasks, including erroneous
user inputs such as entry of real numbers into integer fields during calibration, or entering
out-of-bounds calibration data.
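The parameter screening described above can be sketched as a small validation routine. The field names, integer-field set, and limits below are hypothetical stand-ins; the actual MACS configuration files are not described in this paper.

```python
def validate_calibration(field, value, int_fields, bounds):
    """Screen a calibration entry before it is accepted.

    int_fields: names of fields that must hold whole numbers.
    bounds: field name -> (low, high) inclusive calibration limits.
    Both structures are illustrative stand-ins for the MACS config data.
    """
    if field in int_fields and not float(value).is_integer():
        return False, "integer field given a real number"
    low, high = bounds[field]
    if not (low <= value <= high):
        return False, "value outside calibration limits"
    return True, "ok"

# Hypothetical calibration fields and limits:
INT_FIELDS = {"channel"}
BOUNDS = {"channel": (1, 16), "offset_deg": (-180.0, 180.0)}

print(validate_calibration("channel", 3.5, INT_FIELDS, BOUNDS))     # rejected: real number in an integer field
print(validate_calibration("offset_deg", 270.0, INT_FIELDS, BOUNDS))  # rejected: outside calibration limits
print(validate_calibration("offset_deg", 45.0, INT_FIELDS, BOUNDS))   # accepted
```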
Range Interface
The range interface function establishes a communication protocol with the GPS tracking
system. Tracking data are passed to the MACS via ethernet. Basic tracking data are
displayed on the MACS user console as a reference for test vehicle selection during data
collection and test observation. The range interface function performs validity checks on
incoming data to ensure data integrity. This function is the most affected by insertion of the
new GPS tracking system, and must accommodate an entirely new set of tracking
parameters and system status checks.
The range interface function is the most critical due to the impact of the GPS tracking
system on the data required to derive geometric relationships between the GPS tracked
target and the sensor suite or other data collection devices. Tracking data is extracted from
various messages, and system health data is extracted from other embedded messages to
determine the quality of received data. All range data is made available to other system
CSCs via shared memory.
Sensor Suite Interface
The sensor suite interface function oversees the data communication between the MACS,
the GPS tracking system, and sensor suite tracker/controller processors. Once incoming
tracking data are received, the sensor suite interface function resolves the spatial
relationship between the sensor suite and the on-range aircraft. These data are formatted
and passed to the sensor suite tracker port via an RS-232 interface. The data path is
simplex, requiring no acknowledgment of message receipt, freeing the sensor suite
processor to perform independent data processing tasks.
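The spatial relationship resolved here, pointing angles and slant range from the sensor suite to the on-range aircraft, reduces to simple trigonometry once both positions are expressed in the local tangent plane. A minimal sketch, assuming east/north/up offsets in feet (the actual message format sent to the tracker port is not reproduced here):

```python
import math

def pointing(sensor_enu, target_enu):
    """Azimuth (degrees clockwise from north), elevation (degrees), and
    slant range from a sensor position to a target, both given as
    (east, north, up) offsets in a common local tangent plane."""
    de = target_enu[0] - sensor_enu[0]
    dn = target_enu[1] - sensor_enu[1]
    du = target_enu[2] - sensor_enu[2]
    horiz = math.hypot(de, dn)                         # ground-plane distance
    slant = math.sqrt(de * de + dn * dn + du * du)     # line-of-sight distance
    az = math.degrees(math.atan2(de, dn)) % 360.0
    el = math.degrees(math.atan2(du, horiz))
    return az, el, slant

# Aircraft 3000 ft east, 4000 ft north, and 5000 ft above the sensor:
az, el, rng = pointing((0.0, 0.0, 0.0), (3000.0, 4000.0, 5000.0))
print(round(az, 1), round(el, 1), round(rng, 1))   # 36.9 45.0 7071.1
```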
XTERM Interface
The XTERM interface function provides the user access to the control function to establish
operating parameters for a given mission. Two display modes are inherent in the
MACS: 1) primary control and 2) graphical interface and mission monitoring. In the
primary control mode, the display/user interface provides a control dialogue to MACS
users. System-level control and monitoring functions are performed through an X-
terminal/OSF Motif environment. A 900 square mile God's-eye view of the range area is
generated to provide a reference for establishing the GPS reference position and the
relevant Defense Mapping Agency (DMA) data used to observe on-range aircraft positions
relative to sensor suites. A series of pulldown menus and interactive dialogues provide the
user with selection and control of MACS functions. System health checks are performed
by the host. Status changes or error conditions are transmitted across the local area
network to the user terminal for further operator action or acknowledgment. All essential
control functions are executed at the main display, and include system setup and shutdown,
participant (test article) definition, test article selection and deselection, tabular data
display controls, and tracking kinematics display control.
OPERATIONAL CONSIDERATIONS
To initiate a mission scenario, the user logs on to the system and is presented with the
main control menus. The user must define a local range area based on a reference latitude
and longitude in close proximity to the area where the mission will occur. This action
spawns a task to derive a local tangent plane coordinate system from the inputs, and after
establishing the geographical area the system loads the terrain database for the area of
consideration. The user further defines the mission profile including vehicle type(s) to be
tracked, HDIS ID, and other tracking configuration data. Once these data are entered, the
mission will be initiated automatically by the GPS tracking system. When real-time data
begin to flow into the system, the MACS processes and displays tracked target data
on the appropriate screens. At this time the user can manipulate the displays and data
routing to drive sensor suites or provide live or virtual inputs to a device under test. All
data can be recorded via SVHS tape (including audio and other aural cues) for post-
mission review and analysis. High accuracy digital tracking data can be recorded via the
GPS tracking system recording function. Differential GPS is used (either method 1 or
method 2 corrections are available via the DLSP) to maintain tracking accuracy and
consistency among mission participants.
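The local-tangent-plane derivation spawned at range definition can be sketched with the standard WGS-84 formulation: geodetic coordinates to Earth-centered (ECEF) coordinates, then a rotation into east/north/up (ENU) at the reference point. This is the textbook conversion, not the actual MACS code; the reference coordinates below are illustrative.

```python
import math

A = 6378137.0             # WGS-84 semi-major axis, meters
E2 = 6.69437999014e-3     # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Geodetic latitude/longitude/height to Earth-centered coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)   # prime-vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

def ecef_to_enu(xyz, ref_lat_deg, ref_lon_deg, ref_h):
    """Rotate the ECEF offset from a reference point into east/north/up."""
    lat, lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    ox, oy, oz = geodetic_to_ecef(ref_lat_deg, ref_lon_deg, ref_h)
    dx, dy, dz = xyz[0] - ox, xyz[1] - oy, xyz[2] - oz
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy
             + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy
          + math.sin(lat) * dz)
    return east, north, up

# A point 1000 m directly above a hypothetical reference position maps to
# east = 0, north = 0, up = 1000 in the local tangent plane:
pt = geodetic_to_ecef(33.6, -86.7, 1000.0)
e, n, u = ecef_to_enu(pt, 33.6, -86.7, 0.0)
print(round(e, 3), round(n, 3), round(u, 3))
```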
DEPLOYMENT SCHEDULE
The MACS is scheduled for initial deployment to support an exercise in the Southeast
United States early Q3 1996, with final acceptance testing and full operational capability
to be achieved late Q3 1996. After acceptance, the MACS will be used to support a
variety of advanced sensor system concept studies and will be available for reconfiguration
as required to support expanded research activities.
CONCLUSION
REFERENCES
1. Birnbaum, M., Quick, R.F., Gilhousen, K.S., and Blanda, J., “Range Applications Joint
Program Office GPS Range System Data Link,” Proceedings of the ION GPS ’89 - 2nd
International Technical Meeting of the Satellite Division of the Institute of Navigation,
Colorado Springs, CO, 1989, pp. 103-108.
2. Roberts, I.P., and Hancock, T.P., “GPS Test Range Mission Planning,” Proceedings of
the International Telemetering Conference, Las Vegas, NV, October 1990, pp. 783-789.
3. Smyth, P., Chauvin, T., Oliver, G., and Statman, J., “Network Management Tools for a
GPS Datalink Network,” Proceedings of the International Telemetering Conference,
San Diego, CA, October 1991, pp. 611-618.
4. Wallace, K., and Weinberg, LT. P., “Closed Loop Tracking System Provides Reference
for Data Collection Exercises,” Proceedings of the International Telemetering Conference,
San Diego, CA, 1992, pp. 613-621.
INTEGRATED TESTING AND TRAINING INSTRUMENTATION,
A REALITY
AUTHORS:
INTRODUCTION
Historically, requirements for instrumentation that supports testing and training have
diverged for a variety of reasons. In general, testing evaluates how well the system or
product meets stated operational or contractual requirements, while training evaluates how
well the user operates the system in the battlefield environment. Developmental testing
evaluates specific system performance characteristics, both to ensure that requirements are
met and to establish limits of the capability under development. For these applications, a high
degree of instrumentation accuracy is usually required.
• Why are different instrumentation assets needed for developmental and operational
testing?
• Why are different accuracies required? Why can’t these activities be merged?
• Why can’t common instrumentation and facilities be used in testing and training
exercises?
• Why can’t we create more realistic testing and training battlefield environments?
• Why can’t the Services cooperate more fully in joint testing and training, since we
fight together? How can we involve our allies in effective mutual testing and training?
• Why do recent procurements by our allies reflect the desire not to purchase our
systems?
• How can we convince our allies that it is cost effective to use the same instrumentation
we use?
• If we cannot standardize and control the cost of our testing and training
instrumentation, how can we expect our allies to follow our lead?
• Why do we lack the funds to test our products sufficiently before they enter the
inventory, yet come up with enough to fix them after they are deployed?
Fortunately, several new technologies are providing cost–effective answers to all of these
concerns. The Global Positioning System (GPS), modern data link networking and control
techniques, internetting, and modern simulation technology are all finding applications in
testing and training. These technologies are forcing a major cultural change within the DoD
that will revolutionize how we conduct future testing and training. This paper focuses on how
the use of GPS and modern data links will enable far greater integration of test and training
activities.
The DoD conducted a study in the early 1980s that concluded that GPS was the
appropriate positioning and timing instrumentation source for future test and training
applications. The tri–service GPS Range Applications Joint Program Office (RAJPO) was
formed to develop such instrumentation for use on all military test ranges. This
instrumentation now provides accurate Time–Space Position Information (TSPI) without
interfering with or increasing the cost of the system under test.
It was a goal of the RAJPO to develop a GPS–based TSPI system that did not interfere
with the established range operating procedures and analysis techniques. The RAJPO and
the range interface control working groups decided that the RAJPO TSPI outputs would
meet range–approved interface specifications. Future modifications to the system will not alter
the approved interface formats. In summary, the RAJPO system does not include any
sophisticated display capability, since that activity is the responsibility of the range receiving
the system.
The assets developed by the RAJPO have been, or are in the process of being, installed at
all DoD Test and Evaluation (T&E) sites. The Air Force Operational Test and Evaluation
Center (AFOTEC) is responsible for all Operational T&E (OT&E). Many of the OT&E
support activities use training–like scenarios and systems to accomplish their missions.
AFOTEC is using RAJPO assets and integrating them with existing training assets to employ
a capability that uses the best of each of these systems. It is a logical next step to review the
experiences and plans of AFOTEC as we consider an integrated testing and training
capability.
Ground installations include a data link network controller and processor, a differential
GPS receiver and processor, and one or more data link transceivers and antennas, either
collocated with these components or connected to them through range communications
resources. This equipment can be fixed or mounted in a trailer. Use of multiple remote
transceivers assures the desired levels of coverage and link reliability for any mission
scenario. Associated system software analyzes topography and mission profiles to determine
optimal locations for remote transceiver stations.
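The siting analysis can be illustrated with a line-of-sight test over a terrain profile: a candidate transceiver location covers a point if no terrain sample rises above the straight ray between antenna and target. This is a flat-earth sketch with hypothetical profile data; the actual analysis software, and any earth-curvature or Fresnel-zone terms it applies, are not described in this paper.

```python
def has_line_of_sight(profile, ant_height, tgt_height):
    """True if no terrain sample rises above the straight ray from antenna
    to target. profile: (distance, terrain_elevation) samples along the
    path, first entry at the antenna site, last at the covered point;
    ant_height and tgt_height are heights above local terrain."""
    d0, z0 = profile[0]
    d1, z1 = profile[-1]
    a = z0 + ant_height    # ray elevation at the antenna
    b = z1 + tgt_height    # ray elevation at the target
    for d, z in profile[1:-1]:
        ray = a + (b - a) * (d - d0) / (d1 - d0)   # ray elevation at d
        if z > ray:
            return False
    return True

# Hypothetical ten-mile profile with a 300 ft ridge two miles out:
profile = [(0, 100), (2, 300), (4, 150), (6, 140), (8, 120), (10, 100)]
print(has_line_of_sight(profile, 50, 50))    # False: the ridge blocks the ray
print(has_line_of_sight(profile, 300, 50))   # True: a taller mast clears it
```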
A less expensive C/A–code GPS receiver that can operate with this range system is
available for use on low–dynamic platforms such as tracked and wheeled vehicles, ships and
some rotary–winged aircraft. Thus, a full set of system components is available that
accommodates all types of participants.
Figure 1 RAJPO GPS Receiver-based Assets
The system can operate in either real–time or passive modes. The real–time mode uses
data link communications to acquire the TSPI and associated mission data as it is generated.
In the passive mode, all data is recorded in the platform memory system, for later analysis.
The present configuration permits recording of up to six hours of mission data. As the density
of commercially–available PCMCIA solid–state memory modules increases, the amount of
data recorded and the length of the mission can also increase. Recording can also take place
while operating in the real–time mode, as backup in case of loss of data link connectivity.
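The recording capacity scales directly with message size, report rate, and memory density, as a back-of-the-envelope sketch shows. The 50-byte message size, 1 Hz rate, and card capacities below are assumed for illustration only; the actual DLS message formats and module sizes are not given in this paper.

```python
def recording_hours(memory_bytes, msg_bytes, msgs_per_sec):
    """Hours of mission data a solid-state store can hold."""
    return memory_bytes / (msg_bytes * msgs_per_sec * 3600.0)

# A hypothetical 4 MB PCMCIA card holding 50-byte TSPI messages at 1 Hz:
print(round(recording_hours(4 * 1024 * 1024, 50, 1.0), 1))
# Doubling the card density doubles the recordable mission length:
print(round(recording_hours(8 * 1024 * 1024, 50, 1.0), 1))
```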
GPS Operation
Table 1. 1994 Flight Mission Error Statistics (F–15 and F–16 Aircraft, Various High-G Maneuvers)

                                Absolute        Absolute        Differential    Differential
                                Unaided         Aided           Unaided         Aided
Item                            Spec  Average   Spec  Average   Spec  Average   Spec  Average
Horizontal Position (ft)         26    20.2      23     8.9      14     8.2      8      2.3
Vertical Position (ft)           43    31.3      38    15.6      23    10.3     13      3.7
Horizontal Velocity (ft/sec)     20     7.9     1.7     0.7      20     5.6     1.7     0.41
Vertical Velocity (ft/sec)       34     4.8     2.7     0.7      34     4.5     2.7     0.4
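Each measured average in Table 1 can be checked against its specification programmatically; the sketch below uses the differential-aided column.

```python
# Table 1, differential/aided column: (item, specification, measured average)
results = [
    ("Horizontal Position (ft)", 8.0, 2.3),
    ("Vertical Position (ft)", 13.0, 3.7),
    ("Horizontal Velocity (ft/sec)", 1.7, 0.41),
    ("Vertical Velocity (ft/sec)", 2.7, 0.4),
]

for item, spec, average in results:
    margin = 100.0 * (spec - average) / spec   # percent margin under spec
    status = "PASS" if average <= spec else "FAIL"
    print(f"{item}: {status} ({margin:.0f}% margin)")
```

Every differential-aided average in the table meets its specification with substantial margin.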
Members of the test range community aided the RAJPO in the procurement of a new
communications subsystem, referred to as the RAJPO Data Link System (DLS), for
communicating platform–derived TSPI data to ground recording facilities. This data link is
optimized for the transmission of GPS and platform data, but it is widely recognized to be
well–suited to a variety of additional range–related applications. A single DLS network
employs Time Division Multiple–Access (TDMA) techniques, and can accommodate up to
2000 participants. As many as 250 timeslots are available to participants each second; thus,
up to 25 participants can report their position at a ten message–per–second rate.
Alternatively, 2000 participants can report their position once every eight seconds.

Figure 2. F-15 Flight Test, Method 1 vs. Method 3 Differential Results
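The timeslot arithmetic above follows directly from the TDMA frame structure of 250 slots per second shared among the participants:

```python
def update_interval_sec(participants, slots_per_sec=250):
    """Seconds between successive reports when slots are shared evenly."""
    return participants / slots_per_sec

def max_participants(reports_per_sec, slots_per_sec=250):
    """Largest participant count that can sustain a given report rate."""
    return slots_per_sec // reports_per_sec

print(max_participants(10))        # 25 participants at ten reports per second
print(update_interval_sec(2000))   # 2000 participants report every 8.0 seconds
```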
L–Band operation with current power output and antenna characteristics ensures high–
reliability communications with maneuvering aircraft at ranges up to approximately 80 miles.
During typical operations with high–altitude aircraft in more conventional attitudes, we
regularly observe reliable message reception at ranges beyond 150 miles.
Normally, the base station assigns each participant the appropriate set of timeslots for
reporting. When a message is missed, the base station can be programmed to automatically
request its retransmission. Alternatively, polling can be employed, in which each participant
responds whenever it receives a request message from the command center.
During the mission, each data link transceiver (DLT) continually monitors its ability to
communicate with the base station, either directly or by relay through another airborne or
ground–remote transceiver. Whenever it must report, the participant DLT automatically
selects the most efficient routing of its message to the base station, without operator
intervention. This capability greatly extends the range of the system, while assuring high
message reliability. Duplicate correctly–received messages are suppressed, and if multiple
versions of a message all contain errors, they are combined and the error–correction process
is repeated on the composite message.
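The combining of multiple errored copies into a composite message admits several realizations; a bitwise majority vote across the received copies is one plausible sketch. The actual DLS combining algorithm is not described in this paper.

```python
def majority_combine(copies):
    """Bitwise majority vote across equal-length message copies (bytes).
    One plausible realization of the composite-message step; the actual
    DLS combining algorithm is not described here."""
    n = len(copies)
    out = bytearray(len(copies[0]))
    for i in range(len(out)):
        for bit in range(8):
            votes = sum((c[i] >> bit) & 1 for c in copies)
            if 2 * votes > n:            # majority of copies set this bit
                out[i] |= 1 << bit
    return bytes(out)

# Three copies of a message, each corrupted in a different single bit:
good = b"TSPI"
c1 = bytes([good[0] ^ 0x01]) + good[1:]
c2 = good[:2] + bytes([good[2] ^ 0x40]) + good[3:]
c3 = good[:3] + bytes([good[3] ^ 0x08])
print(majority_combine([c1, c2, c3]) == good)   # True: voting recovers it
```

Because each bit position is corrupted in at most one of the three copies, the two clean copies outvote the bad one everywhere, and error correction can then succeed on the composite.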
Operations involving multiple ranges can be readily accommodated, since each range is
assigned unique frequency channels and timeslots. A participant leaving one range and
entering another is automatically assigned a new frequency and timeslot sequence. If a
participant DLT does not properly transfer to the new frequency, it automatically tunes to a
default frequency. From this default frequency, the DLT can be automatically commanded to
the proper frequency and timeslot to assure operation in the new range.
DLS Performance
Acceptance testing verified high message integrity and reliability between aircraft at
relative velocities up to Mach 2.5, over a wide range of altitudes and relay configurations,
and over water (where multipath propagation could potentially affect data link performance).
Much of the GPS performance validation process reported in Figure 2 also confirmed the
ability of the DLS to perform properly throughout the test period.
The RAJPO recently evaluated the capability of the GPS/inertial suite in the
TACTS/ACMI1 training environment. Due to budget limitations, the test was accomplished
by installing a RAJPO pod and a standard P–4 training pod on the same aircraft. GPS/inertial
TSPI data from the RAJPO pod was passed to the training pod through an umbilical cable
assembly, and incorporated into the ACMI pod multilateration transmission for relay to
ACMI ground equipment. On the ground, the GPS/inertial solution was extracted from the
multilateration signal, and used to drive the ACMI displays.
The experiment was an unqualified success. When the aircraft left Eglin Air Force Base
en route to the Gulfport National Guard ACMI installation, GPS/inertial data provided
position information to ACMI displays whenever any multilateration signal path from the
aircraft was available. Whenever enough paths were available to allow successful
multilateration position determination, the multilateration–derived position was superimposed
on the display with the GPS–derived position. There was no perceived difference in the
presentation quality of the two positioning techniques. The experiment was repeated
successfully at the Nellis Air Force Base ACMI range. This experiment demonstrated that a
GPS/inertial package added to a TACTS/ACMI pod can eliminate many of the shortcomings
of a multilateration positioning system:
• The area of coverage is extended to all points where one or more Tracking
Instrumentation Subsystem (TIS) tower(s) can receive a signal from a participant
• Position, velocity and time are available at any altitude, down to ground level without
the use of a pod radar altimeter
• The passive nature of the GPS/inertial solution reduces the restriction in the number of
participants on the range.
1 Tactical Air Crew Combat Training System/Air Combat Maneuvering Instrumentation System
Users of the Navy Range Responder Reporting System (R3) have purchased RAJPO
pods that were modified to use the VHF R3 transponder in place of the DLT. This
configuration uses the R3 as a communication link only, thus eliminating the shortcomings
associated with R3 multilateration (as discussed above), while allowing retention of the R3
system ground infrastructure.
Training ranges require the accuracy provided by the RAJPO system to support activities
such as low level flying, no–release bomb drop scoring, precise missile flyout simulations,
damage assessment profiling, and real–time computerized steering of Electronic Warfare
threat emitters. Analysis may show that for many such scenarios, GPS differential correction
may not be required, thus simplifying the positioning process. The accuracy available with
the RAJPO TSPI package and the versatility of the DLS make it possible and practical to
consider the same instrumentation for developmental and operational testing, as well as
training. The use of differential corrections for test applications where high accuracy is
required, and non–differential techniques for training (where high accuracy may not be
required) is an example of this versatility.
The relative strengths and weaknesses of the two systems are complementary:

RAJPO                                  TACTS/ACMI
• Strengths                            • Strengths
  > TSPI                                 > User Interface
  > Data Link                            > Displays
  > Flexibility                          > Aircraft Interfaces
• Weaknesses                           • Weaknesses
  > User Interface                       > TSPI
  > Display                              > Data Link
  > Knowledge of Aircraft Interfaces     > Flexibility

The important characteristics of the RAJPO system that support testing and training
missions are as follows:
• A tested and validated data link optimized for use with GPS as the positioning and
timing source
• A data link with excellent coverage and adaptability for operation with high and low
dynamic platforms
• An Advanced Digital Interface Unit (ADIU) that will be installed in a pod and operate
as an “instrumentation personal computer in the sky.” The ADIU will provide a
modern P–4B interface capability to the 1553 bus of the aircraft.
• An Intelligent Flash Solid State Recorder (IFSSR) using commercial off the shelf
(COTS) hardware that will record data limited only by the availability of industrial
quality PCMCIA memory modules/cards.
• An Encryption module that will permit over–the–air keying. This will permit uplink
and downlink of encrypted data, up to the Top Secret level.
The important characteristics of the TACTS/ACMI systems that support testing and
training missions are as follows:
• An Advanced Display Debriefing System (ADDS) far more advanced than current systems
used on test ranges. The system is an excellent tool for real–time and post-flight system
analysis
• The Computational Control System (CCS) has the computing capability for hosting
pairing algorithms and flyout simulations, as well as supporting damage assessment, no
bomb drop scoring and steering of computer controlled threat simulators
When a common system is used for testing and training, one can consider integrating
these activities, to produce a clearer assessment of the performance of the system and the
operators. This should lead to more precise determination of both the equipment’s utility and
the performance of the operators, enhancing our ability to improve both equipment and
operators.
Equipment miniaturization makes it possible to build a common test and training pod.
Adding GPS/inertial positioning capabilities to training pods will allow training ranges to
obtain major increases in coverage and capacity, at minimal costs. Existing ranges need not
wait for systems presently under development to become available to gain the advantages of
GPS use. We have demonstrated that it is a relatively minor change to modify the ACMI
CCS/DDS2 to use GPS data for positioning and the TACTS/ACMI multilateration system as
a data link.
There is a major infrastructure that supports existing TACTS/ACMI systems. More than
1,000 pods are used on these ranges. It is naive to recommend a plan that would render these
systems obsolete overnight. To start the evolutionary conversion of training systems to a GPS
positioning scheme, an upgrade kit could add the GPS/inertial capability to all training
pods that pass through a depot center for repair, or overhaul. New pods would have
GPS/inertial capability installed. These pods would use the GPS/inertial suite for positioning,
and the multilateration system for communications. Conventional TACTS/ACMI pods would
operate on the range using the multilateration system in the normal manner for positioning
and communications. This process will permit both conventional TACTS/ACMI pods and
GPS–capable TACTS/ACMI pods to operate on the same range. Aircraft with GPS
TACTS/ACMI pods will have much more capability, but the transition process is simplified
by the ability to commingle pods.
As training ranges are funded for upgrading, the RAJPO DLS ground electronics can be
added to the Command Center. This will allow use of the same data link (the RAJPO DLS)
for training that is currently used for testing. Platform position information received by the
DLS system can be passed to the CCS/DDS, in a manner completely transparent to the user.
These ranges are now capable of supporting RAJPO pods, GPS–equipped TACTS/ACMI
training pods, and standard TACTS/ACMI training pods. We could start now to use the same
environment for testing and training.
2 Command Control Subsystem/Data Display Subsystem
As present TACTS/ACMI pods become obsolete, or if range maintenance costs of the
TIS and CCS environments become prohibitive, ranges could terminate the multilateration
function. This would eliminate a major range operational and maintenance expense.
Summary
The RAJPO GPS/inertial suite and the DLS were developed in response to range–
generated requirements. To date, they are the only systems of their kind that have been
validated on a DoD calibrated test range. On test ranges, the capabilities of these systems
continue to mature. A Next Generation Target Control System (NGTCS) will use GPS and
DLS variants for the control of full and sub–scale drones.
The DLS system was developed for use on test ranges. The DLS is optimized for the
collection of GPS and platform data from high, medium and low dynamic platforms. It is
presently in use, or being installed, at all DoD test ranges. Adding the maturing DLS to the
TACTS/ACMI environment can be accomplished in an evolutionary manner. This is a major
step in fulfilling the requirement to achieve integrated testing and training facilities.
In order to support future OT&E missions, AFOTEC is moving forward with the
integration of RAJPO and ACMI assets. AFOTEC has the responsibility to support future
operational test activities for programs such as the F–22, B1–B, B–2, CV–22, AIM–9X, JSF,
JASSM, and SAPs. Class 2 modifications are in process for platforms to accept RAJPO
pods and plates.
We have described an approach that will allow addition of GPS positioning into the
TACTS/ACMI environment without rendering the existing training infrastructure obsolete. It has
been demonstrated that a GPS/inertial suite can be added to a TACTS/ACMI pod, and the
CCS/DDS can be easily modified to accept this data. When the range operates in this mode,
the TACTS/ACMI multilateration positioning system is used as a data link. Operation in this
mode mitigates the range coverage and link capacity limitations imposed by a multilateration
positioning system.
The approach proposed here, or some modification thereof, is necessary to permit testing
and training activities to be combined at existing training range facilities. An analysis of
functions supported by a test range may indicate that it is desirable to expand the capabilities
of that range to support training functions. Nothing included here precludes this action.
Integrating testing and training activities will reduce overall operating costs and will
fundamentally change the manner in which testing and training are performed.
Internationally, test ranges in Germany and the United Kingdom are already committed to
use variants of the RAJPO system. Other test ranges are contemplating this approach. Many
foreign governments are seriously considering new training ranges that will use GPS as the
positioning and timing source. However, some countries have elected to purchase training
systems independent of the DoD. Influencing the selection of DoD–developed systems
requires:
The ability of our allies to use the same systems that we use for testing will permit us to
better evaluate their system development and implementation techniques, as well as to
support concurrent development efforts.
Our allies’ ability to use the same systems that we use for training will enable us to
perform coordinated multinational training in either real, synthetic or combined environments.
The experiences in the Persian Gulf demonstrated that future encounters with aggressor
forces will likely involve multi–national allied forces. The ability to test and train those forces
in the real or a synthetic environment will better prepare us and our allies for these
encounters.
GPS AND A MODERN DATA LINK SUPPORTING
THE SYNTHETIC BATTLEFIELD
ABSTRACT: This paper describes the use of GPS, a data communication network, and
modern simulation techniques to create a synthetic battlefield for testing and training
applications. It will discuss recent experiments conducted by the DoD to evaluate this
approach.
INTRODUCTION
For many years, the Department of Defense (DoD) has used simulation systems to
augment limited scale testing and training activities. However, it has used simulation
techniques only sparingly to assess the performance of systems and personnel operating in
large scale battlefield environments. The primary reasons that simulation technology does
not support the battlefield environment are the inability to obtain sufficient modeling
fidelity, the lack of a precise timing source to coordinate the various functions, and the lack
of a common coordinate system for all the real and simulated participants. One must
appreciate that all functions in the virtual battlefield need to appear source-transparent to
the real–time user.
In the past, technological, financial, platform, physical and personnel constraints put
limits on establishing the battlespace for operational testing or training. Declining budgets
and the demands for more thorough testing and more realistic training exacerbate these
concerns. These conflicting factors are causing the DoD to reconsider and redefine how to
perform testing and training functions.
Two technological advances provide the means to overcome these obstacles; namely the
Global Positioning System (GPS) and Distributed Interactive Simulation (DIS). GPS
provides a world–wide three–dimensional positioning accuracy of 4.3 feet, verified on
calibrated ranges (see Reference A). GPS also provides universal time tagging to an
accuracy of better than 100 nanoseconds. GPS is becoming the accepted positioning and
timing source for all Department of Defense (DoD) testing and training ranges. DIS
provides standard protocols and interfaces that permit the exchange of data between real
and simulated players located at diverse locations.
The maturing of GPS, DIS, and visualization systems, coupled with a communications
network designed for the collection of GPS and platform data from high, medium and low
dynamic platforms, contribute to more effective and efficient testing and training exercises.
These technologies have already started a culture change within the DoD in how we plan to
conduct testing and training exercises.
Experiments demonstrate that the timing and positioning accuracy of the GPS system,
coupled with established DIS techniques, will aid in the creation of a synthetic battlefield for
testing and training activities. This battlefield will accommodate high, medium, and low
dynamic platforms. In this battlefield, the use of simulated participants will greatly reduce
the number of live participants. This will facilitate cost–efficient testing and training
exercises in a battlefield resembling the environment in which we expect to engage our
adversaries.
The DoD is presently placing GPS–based assets on all its major test ranges. GPS will
be the Time Space Positioning Information (TSPI) source. A new target control system is
being implemented using GPS as the TSPI source. All new training systems and system
upgrades specify the use of GPS as the positioning and timing source. This acceptance of
GPS as the standard for positioning and timing data is the first step toward effective
augmentation of live testing and training with simulated and live participants.
In summary, the precise timing and accurate world-wide positioning available from
GPS permit the simulation community to obtain the modeling fidelity necessary to replicate
high dynamic platforms in a synthetic battlefield. DIS standardized protocols and interfaces
designed for open-architecture computer systems permit the exchange of data between
widely diverse locations. A data link is in place to permit the real time collection and
distribution of information. This paper discusses the technologies and experiments that
demonstrate the ability of these technologies to support test and training activities.
In the early 1980s, a DoD study recommended that GPS–based equipment suites be
developed as a TSPI source for testing and training applications. The DoD named the GPS
Range Applications Joint Program Office, or RAJPO, to develop this GPS–based
instrumentation for use on major Army, Navy and Air Force test ranges. This TSPI is to be
available at a central facility in real time. It was determined that no data link
available to all the services could meet this requirement. The RAJPO, aided by the ranges,
determined that the requirement could only be met by the development of a robust data link.
The instrumentation has to provide the TSPI without interfering with, or increasing the cost
of, the system under test. Reference A provides the details of the GPS receiver–based
RAJPO system, together with test results.
The pod consists of a GPS P(Y)–code receiver, a strapdown inertial reference unit, data
link transceiver, flash memory recording system, GPS and data link antenna systems, and
power supplies. When necessary, an encryption device is available to encode the data
transmitted from the pod to the base station.
A C/A–code GPS receiver that meets the requirements for TSPI operation with this
range system is available for low dynamic platforms such as tracked and wheeled vehicles,
ships, and some rotary–winged aircraft. For the tracking of tactical or strategic missiles, a
GPS–based translator system is available. Thus, the RAJPO has developed suites of
equipment to accommodate participants throughout the dynamic spectrum.
For mobility, several ranges place the equipment in trailers. To assure network connectivity for
all mission scenarios, remote communication transceiver and antenna systems can be
strategically located throughout the range. Software is available to consider terrain
topography and mission profiles for determining the optimal location(s) for the remote
transceiver stations. To meet specific flight profiles, the remote relay sites can be
repositioned for optimum data link coverage.
The Data Link System (DLS) can address up to 2000 participants from high, medium, and low dynamic
platforms. Time Division Multiple Access (TDMA) techniques accommodate the real–time
collection of GPS and platform data. The DLS can support up to 250 participants reporting
on a once–per–second basis. The system can support 25 participants reporting at a rate of
10 Hz. If it is necessary to involve more than 25 aircraft at this reporting rate, the range can
use additional nets. A future upgrade will permit the next group of 25 aircraft to use a
different frequency in the allocated band.
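The slot arithmetic above can be written out directly. The sketch below is ours, not part of the RAJPO system; only the figure of 250 one-per-second report slots per net comes from the text.

```python
import math

# Sketch of the DLS TDMA capacity arithmetic described above. The figure of
# 250 report slots per second per net is taken from the text; the helper
# name and structure are illustrative.

SLOTS_PER_SECOND = 250  # reports a single net can carry each second

def nets_required(participants: int, report_rate_hz: int) -> int:
    """Number of DLS nets needed for a group reporting at a given rate."""
    per_net = SLOTS_PER_SECOND // report_rate_hz  # participants one net holds
    return math.ceil(participants / per_net)

print(nets_required(250, 1))   # 250 vehicles at 1 Hz  -> 1 net
print(nets_required(25, 10))   # 25 aircraft at 10 Hz  -> 1 net
print(nets_required(60, 10))   # 60 aircraft at 10 Hz  -> 3 nets
```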
The DLS operates in the L-Band, in the 1350-1400 MHz or the 1429-1435 MHz
bands, and occupies 1.6 MHz of the frequency band. The existing transmitted power
and antenna systems guarantee line–of–sight operation out to approximately 80 miles.
Practically speaking, the system regularly demonstrates reliable message reception at ranges
up to 150 miles from the reception site.
To assure interoperability and cooperability between ranges, each range will use
different frequencies and time slots. As a participant leaves one range and enters another
range, the participant is assigned a new reporting frequency. If a participant data link
transceiver (DLT) does not properly receive or implement the transfer to the new range
frequency, the DLT will switch to a default frequency, thus assuring sustained operation.
Frequency spacing disciplines
prevent interference between nearby ranges.
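The fallback rule for a failed frequency handoff can be sketched as follows. The retune callback and the frequency values are hypothetical; only the fallback logic mirrors the text.

```python
# Illustrative sketch of the DLT fallback rule described above: if the
# transfer to the new range frequency fails, the transceiver drops to a
# default frequency to sustain operation. All names and values here are
# assumptions, not part of the actual RAJPO equipment.

DEFAULT_FREQ_MHZ = 1350.0  # assumed default; the real value is range-specific

def retune(set_frequency, new_freq_mhz):
    """Try to move the data link transceiver to the new range's frequency."""
    try:
        set_frequency(new_freq_mhz)       # may raise if the transfer fails
        return new_freq_mhz
    except Exception:
        set_frequency(DEFAULT_FREQ_MHZ)   # fall back to sustain operation
        return DEFAULT_FREQ_MHZ
```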
Table 1 demonstrates the accuracy of the system in a variety of operating modes. These
data are the result of a detailed test program on the calibrated range at Eglin Air Force Base.
Sophisticated post mission signal processing techniques eliminate any anomalies in the truth
system. During the tests, the aircraft flew various high stress maneuvers, including Figure
Eights, Cuban Eights, Climb and Dive, Oval Tracks and High–G Loops. All maneuvers took
place in both clockwise and counterclockwise directions to evaluate the impact of pod
location on the quality of the data. Sometimes, multiple pods were flown on the aircraft to
determine the impact of pod location on the accuracy of the data. The goal was to consider
the effects of wing up/wing down, fuselage–induced signal blockage and multipath, and
location of a pod at inboard or outboard wing stations.
Table 1 1994 Flight Mission Error Statistics (F-15 and F-16 Aircraft, Various High G Maneuvers)

ITEM                           ABSOLUTE       ABSOLUTE       DIFFERENTIAL   DIFFERENTIAL
                               UNAIDED        AIDED          UNAIDED        AIDED
                               Spec  Average  Spec  Average  Spec  Average  Spec  Average
Horizontal Position (ft)        26    20.2     23    8.9      14    8.2      8    2.3
Vertical Position (ft)          43    31.3     38    15.6     23    10.3    13    3.7
Horizontal Velocity (ft/sec)    20    7.9      1.7   0.7      20    5.6     1.7   0.41
Vertical Velocity (ft/sec)      34    4.8      2.7   0.7      34    4.5     2.7   0.4
Tests at various low and high altitudes, and over water (where high multipath
interference can affect communication performance) verified message integrity. The test
program validated message integrity from aircraft operating at relative speeds of
Mach 2.5. To validate the GPS TSPI performance, the DLS had to provide consistent and
reliable messages throughout the test period. Testing also verified the message relay
capabilities in various flight scenarios by using as relay nodes other aircraft, remote data
link sites, and combinations of aircraft and remote data link sites.
The use of unmanned targets operating as enemy platforms is an integral part of DoD
testing and training exercises. The ability to control a mix of airborne and surface targets is
a necessary component of these exercises. The targets closely replicate the threat, flying
pre–planned flight profiles, while meeting ground and flight safety requirements. The targets
fly autonomously, or under the management of a flight controller.
The inherent capabilities of GPS will permit the Next Generation Target Control System
(NGTCS) to achieve the necessary
positioning and system timing requirements for autonomous and remotely controlled flights.
Augmentation with modern display and control systems will satisfy the mission
requirements. The remaining major element of the system requiring definition is the data
link.
Figure 2 The Next Generation Target Control System Overview
The communication system must uplink all control commands to the targets and
downlink all the positioning and telemetry information from the targets, while controlling
these targets. A variant of the RAJPO DLS is the baseline networking scheme for NGTCS.
The DLS capacity of 240 Kbps for each system net meets the present and potential
future NGTCS requirements, such as simulation capabilities. For high dynamic players
reporting at a 10 Hz rate, NGTCS can accommodate 6–8 platforms in a single net. Tracking
capacity can increase by adding nets to a range. Each range will develop its own procedures
for adding nets. Ranges must comply with local range frequency utilization constraints and
other range–specific operational disciplines.
Adaptations to the DLS are necessary to satisfy the NGTCS requirements. The system
latency for the DLS will be less than 50 ms. State-of-the-art networking techniques
will enhance message integrity and availability. Network transfer schemes will be a part of
the system. The system will support multiple networks in the same range.
In 1994, the RAJPO and TASC initiated an effort to investigate the use of GPS and
DLS as the positioning, timing and communication sources to link live and simulated
participants through the use of DIS. The project is known as the DIS GPS Optimal Virtual
Range project, or Project DISGOVR. The primary products of this project were two
software applications, referred to as the Live Entity Broker (LEB) and the Live Entity
VisualizeR (LEVR).
LEB provides the real–time interface between the live world and the virtual world. LEB
accepts raw TSPI data from the RAJPO system and converts this data to DIS PDUs. LEB
also performs detection and filling of data dropouts, filtering and smoothing of vehicle
dynamics, and applies DIS dead–reckoning algorithms.
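The dead-reckoning idea can be sketched briefly: between PDUs, remote simulators extrapolate an entity's position from its last reported state, and a fresh entity-state PDU is needed only when the true position drifts past a threshold. The threshold value and names below are illustrative, not the LEB's actual parameters.

```python
import math

# A minimal first-order dead-reckoning sketch (position extrapolated from the
# last reported velocity). The 10-ft re-issue threshold is an assumed value.

POSITION_THRESHOLD_FT = 10.0

def dead_reckon(pos, vel, dt):
    """First-order extrapolation of where remote viewers place the entity."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def needs_new_pdu(true_pos, last_pos, last_vel, dt):
    """Issue a new entity-state PDU only when extrapolation error grows."""
    predicted = dead_reckon(last_pos, last_vel, dt)
    return math.dist(true_pos, predicted) > POSITION_THRESHOLD_FT
```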
A subset of the live aircraft (five F–15s, four F–16s) used RAJPO instrumentation pods.
The TSPI from the pods was collected at the ground mobile DLS facility. The data were
passed to the LEB, which performed the conversion to DIS PDUs. These PDUs were
passed to LEVR for visualization as a synthetic representation of the real–world
battlespace. PDUs were also passed to a variety of DIS–compliant simulators. Over 150
participant hours of live entity integration were supported and recorded.
The success of Project DISGOVR in Roving Sands ‘95 led to the use of live aircraft in
another DIS–supported exercise. The Warfighter ‘95 exercise merged a constructive
simulation of a Korean theater conflict with a variety of live aircraft and virtual weapon
simulators, including a simulated AWACS aircraft. Although the synthetic battlespace was
over the Korean peninsula, the constructive simulation was physically hosted at Hurlburt
Field, Florida. The virtual simulations were physically located at numerous sites nationwide.
For Warfighter ’95, the live aircraft flying over the Eglin Air Force Base range were
injected into the synthetic environment in a manner similar to that described in the Roving
Sands ‘95 exercise. TSPI from the RAJPO instrumentation pods was collected at the mobile
DLS facility and passed to the LEB for translation into DIS PDUs. At this point, the activity
diverged from that accomplished in Roving Sands ‘95. For Warfighter ‘95, the LEB
virtually relocated the live aircraft to the Korean theater before broadcasting the PDUs on
the exercise network. To the simulated exercise participants, the live aircraft appeared to be flying over
Korean airspace.
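The relocation step can be illustrated as a fixed geodetic offset applied to each live position before its PDU reaches the network. Both reference points below are illustrative approximations, and a flat latitude/longitude shift ignores effects such as meridian convergence; the actual LEB transformation is not documented here.

```python
# Sketch of the "virtual relocation" idea: re-base live range positions onto
# the exercise theater with a fixed offset. Reference coordinates are
# illustrative assumptions.

RANGE_REF = (30.46, -86.55)    # approx. Eglin AFB reference (deg lat, lon)
THEATER_REF = (37.00, 127.00)  # assumed Korean-theater reference

def relocate(lat_deg, lon_deg):
    """Re-base a live position into the synthetic theater's coordinates."""
    return (lat_deg + THEATER_REF[0] - RANGE_REF[0],
            lon_deg + THEATER_REF[1] - RANGE_REF[1])
```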
In both exercises, the integration of live participants into the synthetic airspace was
open loop. Live aircraft were injected into the synthetic battlespace, but the simulated
participants were never introduced into the live battlespace. At this time there are DoD
programs exploring the possibility of closing this loop. The closing of this loop will provide
a degree of feedback from the virtual battlespace to the live participants in a synthetic
exercise.
Figure 3 RAJPO and TASC Role in Warfighter ’95
CONCLUSION
GPS, modern data communication networks and modern simulation techniques are
providing the means to enhance testing and training through the creation of a synthetic
battlefield.
RAJPO GPS instrumentation is currently in use at all of the major DoD test ranges. A
precise positioning and timing source is available from the GPS/inertial package to support
synchronization and registering of synthetic battlefield events and participants. The DLS is
the networking scheme for the collection of the TSPI data on these ranges. By the turn of
the century, derivatives of these assets will be in use to control unmanned target vehicles
supporting testing and training exercises. All major DoD range procurements and upgrades
specify the use of GPS as the positioning source.
REFERENCES
Reference B: Wilson, Kris, et al., "Virtual Defense: Real Time GPS in Simulated Military
Environments," GPS World, September 1995.
THE CHALLENGE OF PROGRAMMED TRACKING LOW
ORBIT SATELLITES FROM MOBILE GROUND
STATIONS
Dietrich Hoecht
ABSTRACT
Two types of mobile tracking systems are described: one built on an elevation-over-
azimuth-over-tilt pedestal and one on an X-Y axis pedestal configuration.
The calibration methods for establishing time and geographical references are discussed,
as well as the challenges of minimizing the effects of system- and environment-induced
error contributors.
KEY WORDS
INTRODUCTION
Traditional tracking of low orbit satellites was accomplished from a permanently installed
ground station or from a pre-surveyed location. Most often an autotracking feed and
receiver are used for moving targets, but autotracking is a significant cost driver for
such receive stations.
Late in the seventies, Scientific-Atlanta built an L-Band meteorological receive
terminal, attempting to provide a low-cost system with programmed track only.
Technically it was successful; however, most customers did not opt for this approach, as
they perceived it somewhat as a ‘blind faith’ performance promise.
Today the market has changed, and program-only receive terminals have proven
themselves. Yet, the challenges are greater: X-Band and Ku-Band frequencies demand
higher pointing accuracies. The proliferation of remote sensing satellites, their widespread
utilization and the broad customer base bring about the need to make such ground stations
transportable and even mobile - quickly deployable over roadways and by cargo airplanes.
Further complicating the matter, in some cases widely separated bands, e.g. S-Band and
X-Band, need to be combined in one terminal. An autotrack version of such an antenna
feed is very expensive.
In order to circumvent the high cost of autotracking, and to permit the use of simultaneous
and multiple receive bands, Scientific-Atlanta has developed a 6.1-meter three-axis and a
4.3-meter X-Y type antenna terminal. Both employ program tracking. At the same time
they are built for mobility, which brings about an extra level of sophistication for
calibrating the controller for varying geographical locations.
Such a receive terminal requires rigorous planning and design discipline in minimizing all
of the tracking error contributors. The solution to this challenge is presented here.
Figure 1 depicts a simplified system block diagram of a 6.1 meter antenna mobile
receive terminal, typically used for remote sensing applications. It employs a three-axis
tracking pedestal, which permits uninterrupted data reception and recording during
overhead satellite passes. A tilt axis is used for this purpose under an elevation-over-
azimuth pedestal, thereby moving the motion lock ‘keyhole’ away from zenith, i.e. the
overhead satellite path.
A tilt sensor, mounted on the body of the elevation structure, is used to remove verticality
errors. These can be induced by quasi-static wind deflections, ground settling, non-level
positioning of the tilt axis below, etc.
The GPS receive subsystem is mounted to the reflector aperture. The function of this GPS
receiver is to provide geographical position location and time reference. Furthermore, the
primary axis of this 4-unit array of GPS antennas is accurately aligned with the pedestal
motion axes. During a time span of roughly one hour it undergoes a self-calibration
routine, after which the array primary axis orientation can be read. This angular read-out
shows the position relative to North, and therewith an accurate pedestal coordinate system
is established.
In order to visualize the implications of a design for program-only tracking of low orbiting
satellite targets, refer to the illustration in Figure 2 - Coarse System Error Block Diagram.
The primary determinant for making the program track approach work is the permitted
deviation of the ground receive antenna from its theoretical line-of-sight beam center. For
example, we might permit a 0.5 dB reduction in signal strength under normal operating
conditions. For a 6.1 meter antenna at 8.4 GHz this translates to a beam deviation of about
0.17 degrees. This number must now be apportioned among the uncertainties particular to the
satellite path prediction, the terminal’s earth position calibration, the ground antenna
pointing error and its control protocol. In this particular case the GPS reference system is
mounted to the receive antenna; theoretically, some of the errors particular to the tracking
antenna also influence the accuracy of the azimuth calibration. This is shown as an
interdependency in the error block diagram. If another type of North reference calibration
is used, like tracking of a celestial body, it can be subject to a sizable environmentally
induced error from wind, etc.
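As a rough cross-check of the pointing figures quoted above, a short sketch using common parabolic-antenna rules of thumb (HPBW ≈ 70·λ/D degrees, off-axis loss ≈ 12·(θ/HPBW)² dB). These are textbook approximations of our choosing, not Scientific-Atlanta's actual budgeting method; the result matches the quoted 0.17 degrees when that figure is read as a total excursion.

```python
import math

# Rough cross-check of the 0.5 dB / 0.17 degree pointing figures, using
# standard reflector-antenna approximations (assumed here, not taken from
# the paper's budget).

c = 3.0e8    # speed of light, m/s
f = 8.4e9    # X-Band receive frequency, Hz
D = 6.1      # reflector diameter, m

hpbw = 70.0 * (c / f) / D              # half-power beamwidth, degrees
theta = hpbw * math.sqrt(0.5 / 12.0)   # off-axis angle giving 0.5 dB loss

print(f"HPBW = {hpbw:.2f} deg; 0.5 dB at {theta:.3f} deg off axis "
      f"(total excursion about {2 * theta:.2f} deg)")
```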
The other, and quite often large, error lies in the off-timing of the satellite orbit
prediction. If the ephemeris data is relatively old, it may not include the latest path
corrections. Often, however, it is the timing that is off by a few seconds, rather than the
coordinates being incorrect.
Consider the orbit speed of about 0.07 degree per second of a low altitude satellite: at look
angles near the horizon this appears as roughly 0.025 degree per second at the ground
station. Assuming three seconds of uncertainty, this translates to a significant positional
prediction error of 0.075 degrees, or 45% of the total budget! It is therefore important to
input fresh updates in the ground station controller before attempting a track.
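The arithmetic in that example is a one-liner (0.075/0.17 is about 44%, which the text rounds to 45%):

```python
# The timing-error arithmetic from the paragraph above, written out.

apparent_rate = 0.025   # deg/s, apparent orbital rate near the horizon
timing_error = 3.0      # s, assumed ephemeris timing uncertainty
budget = 0.17           # deg, total pointing budget quoted earlier

pointing_error = apparent_rate * timing_error
print(f"{pointing_error:.3f} deg, about "
      f"{100 * pointing_error / budget:.0f}% of the budget")
```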
A scanning acquisition routine is a helpful alternative, using the measured timing error
for immediate correction of the orbital elements. The above-mentioned error of 0.075
degrees can therewith be removed. Of course, such a correction also somewhat diminishes
the importance of always using the freshest ephemeris data.
The ground station spatial reference to the satellite orbit shell includes altitude, longitude
and latitude. The antenna pointing must also fix its azimuth Zero position angle relative to
true North.
The approach used on Scientific Atlanta’s 6.1 meter terminal employs Trimble’s TANS
VECTOR® GPS location system. Four individual antennas are mounted in a cross-fashion
around the rim of the reflector antenna. The altitude, latitude, longitude and timing
information is very accurate for the purpose of defining the geometric relationship of the
satellite path and the earth location of the receive terminal. The center axis
between two of the four antennas defines the azimuthal position. Add to this the
uncertainty of this axis in relation to the pedestal azimuth Zero, and we have the error
toward true North.
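The geometry behind that azimuth read-out can be sketched simply: the heading of the baseline between two antennas, measured clockwise from true North in a local East-North frame, is atan2(ΔEast, ΔNorth). The TANS VECTOR's attitude solution is far more involved; this only illustrates the final step.

```python
import math

# Sketch of the azimuth read-out geometry for a two-antenna GPS baseline.
# Function name and frame conventions are illustrative choices.

def baseline_azimuth(e1, n1, e2, n2):
    """Azimuth (deg from true North) of the antenna-1 -> antenna-2 baseline."""
    return math.degrees(math.atan2(e2 - e1, n2 - n1)) % 360.0

print(baseline_azimuth(0.0, 0.0, 0.0, 3.0))  # baseline due North: azimuth 0
print(baseline_azimuth(0.0, 0.0, 3.0, 0.0))  # baseline due East: azimuth 90
```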
Using the GPS position location method has the advantage of being insensitive to time,
environment and geography, i.e. an accurate fix can be made within an hour at any time
of day or night at any place on earth. The price to be paid is the cost of the
equipment.
Other methods for determining North are available - with various restrictions and
drawbacks:
- Star calibration: this method can be very accurate, given the night optical
equipment, sufficiently accurate RF-to-optical alignment and the right atmospheric
conditions - and enough time! There are some spots on earth, where overcast skies persist
for many weeks and even months. This certainly limits the universal usefulness of this type
of calibration.
- Sun and moon calibration: a tracking algorithm for these celestial bodies must take
into account the relatively large target, but it can be done accurately with the proper
mapping approach. Cloudy skies are no obstacle; however, at latitudes close to the South
and North poles the sun or moon is hidden, often for very long durations. So, there are
some geographical limits to using this approach.
Figure 3 outlines the sub-elements and structures, which constitute the mobile tracking
pedestal and antenna. The error block diagram shows the influence and interrelation of
various alignment, static and dynamic error contributors. The three-axis configuration of
elevation-over-azimuth-over-tilt applies to Scientific Atlanta’s 6.1 meter terminal. For
converting this diagram to the 4.3 meter two-axis X-Y system, the upper two axes can be
renamed and the tilt axis portion removed.
In designing for an overall tight pointing error it is important to make the load path
through the trailer bed, the jacking mechanism and the ground pad interface with the
ground as rigid as possible.
Table 4 shows the error budget for the tracking terminal. The individual errors are
calculated and measured values for the 6.1 meter receive terminal. The tabulation includes
systematic and random errors at various elevation angles. Wind induced values are for
70 km/h steady state, with a typical gusting spectrum. As can be seen from the results, the
summary errors for the terminal are less than 0.12 degrees RMS, and fit well within the
overall budget of 0.17 degrees.
Photograph No. 1 shows the deployed antenna terminal. The design utilizes a special
trailer, which permits complete retraction of its wheels. This has two major advantages:
one, the space atop the trailer bed can be utilized to the fullest, since the height inside
C-130 and C-141 cargo aircraft is limited to 2590 mm. The roll-on / roll-off method for the
loaded trailer allows it to move in and out of the aircraft with just a small amount of
clearance toward the cargo floor. Two, during operation the trailer ‘squatting’ close to the
ground maximizes the structural stiffness and minimizes the antenna height above ground.
The wind induced moment loads - and therewith the errors - are therefore held to a
minimum.
Note that the terminal is deployed autonomously. No auxiliary handling or lifting devices
are needed for set-up and take-down.
Figure 4 depicts the 4.3 meter tracking terminal in operation. It uses an X-Y antenna
mount, which has much commonality with Scientific Atlanta’s 3 meter IRIDIUM®
terminal. The latter is also used for precision program-only tracking at the 20 to 30 GHz
frequency band. Similar to the 6.1 meter system, the 4.3 meter terminal is making optimal
use of the space above the trailer deck for minimum transport height. Additionally, the
center part of the reflector and the spars with feed always remain mounted, assuring
accurate system erection without recalibration or special alignment. The spars remain in a
folded position, while the pedestal is transported horizontally.
The X-Y axis configuration of the pedestal is ideally suited for applications of tracking
overhead satellite targets, without the complexity and high acceleration and velocities of a
tilt mounted elevation-over-azimuth pedestal. The X-Y axis configuration effectively
relocates the motion lock keyhole of an El-Az mount from zenith to the horizon. If the trailer
is oriented such that the keyhole points roughly East/West, then all North/South
satellite orbit paths can be tracked very smoothly.
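Why the X-Y geometry relocates the keyhole can be seen by converting look angles (azimuth from North, elevation) to X-Y mount angles with the primary X axis laid horizontally East-West. Axis and sign conventions vary between pedestals, so treat these particular formulas as one illustrative choice, not the actual controller transformation.

```python
import math

# Converting azimuth/elevation to X-Y mount angles, with the X (primary)
# axis horizontal East-West. Conventions here are an assumed illustration.

def elaz_to_xy(az_deg, el_deg):
    az, el = math.radians(az_deg), math.radians(el_deg)
    e = math.cos(el) * math.sin(az)     # East component of the look vector
    n = math.cos(el) * math.cos(az)     # North component
    u = math.sin(el)                    # Up component
    x = math.degrees(math.atan2(n, u))  # rotation about the E-W primary axis
    y = math.degrees(math.asin(e))      # rotation about the carried Y axis
    return x, y

# At zenith both angles are near zero: no rate singularity overhead.
print(elaz_to_xy(0.0, 90.0))
# The singularity now sits at the East/West horizon (Y approaching 90 deg),
# which the operator orients away from the expected North/South pass.
print(elaz_to_xy(90.0, 0.0))
```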
Photograph No. 1
CONCLUSION
ABSTRACT
Navigation is the means by which a craft is guided from one known location to
another. Since the Global Positioning System (GPS) is a very accurate positioning system, a
personal navigation system based on GPS is very effective. From the user's point of view,
the function of this system is to provide real-time positioning and timing data to the user.
The system consists of a 6-channel GPS Oncore receiver, a system controller & processor
(SC&P) card, a programmable liquid crystal display (LCD) and a keyboard. The 6-channel
GPS OEM card receives GPS signals from six different satellites at a time. After
processing the received GPS signals, it writes the result & status message to its output port
in a typical data format. The system controller & processor card receives this message
from the GPS OEM card and extracts the useful positioning & timing information in binary
form. It then processes the data and displays it on the LCD. The keyboard is used
to select the desired positioning & timing information on the display.
KEYWORDS
GPS OEM card, LCD, system controller & processor (SC&P) card, keyboard
INTRODUCTION
The personal navigation system has been designed to provide instant positioning & timing
information. Given its simplicity and portability, anybody can use the system for
personal use as well. The positioning information contains the latitude, longitude, height
(mean sea level) & height (GPS). The timing information contains the date & time. The
system is based on GPS and microprocessor technology. Since the full GPS
constellation was completed on 26 June 1993, GPS receiver technology has been growing very
fast, and several types of receivers from different companies are available in the
market. The Motorola 6-channel GPS Oncore receiver has been selected for receiving and
processing the GPS signal.
An INTEL 8031 single-chip microcontroller has been used to receive and analyze the data
stream from the GPS OEM card; it then displays the result, containing the positioning &
timing information, on the LCD.
A programmable LCD has been selected to display the positioning & timing
information. To select the desired information on the display, a dot-matrix
keyboard has been used in the system.
Thus, all of the system's technical requirements have been satisfied. The system is very
attractive to users due to its simplicity and portability.
SYSTEM STRUCTURE
The L1 band signals transmitted from GPS satellites are collected by a low-profile,
microstrip patch antenna, passed through a narrow band bandpass filter, and amplified by a
signal preamplifier. These filtered and amplified RF signals are then routed to the RF
signal processing section of the OEM card. The RF signal processing section of the GPS
OEM card printed circuit board contains the required circuitry for downconverting the
GPS signals received from the antenna module. The resulting intermediate frequency (IF)
signals are then passed to the 6-channel code and carrier correlator section of the GPS
OEM card where a single, high-speed analog-to-digital (A/D) converter converts the IF
signal to a digital sequence prior to channel separation. These digitized IF signals are then
routed to the digital signal processor where the signals are split into six separate channels
for code correlation, filtering, carrier tracking, code tracking, and signal detection. The
processed signals are then synchronously routed to the microprocessor to process satellite
data, pseudorange and delta range measurements for computing position and velocity. The
OEM card sends these results to its output port in different forms of messages. The length
of the data frame varies according to the type of the message. This system needs only
positioning (latitude, longitude, height) & timing (date & time) information to display
on the LCD. Along with other information, the Position/Status/Data Output Message contains
this required information. The Position/Status/Data Output Message is a 68-byte
message frame. For the communication between the GPS OEM card and the System Controller &
Processor card, the Motorola Binary Format protocol has been used. The structure of the
Position/Status/Data message frame is as follows:
@@BamdyyhmsffffaaaaoooohhhhmmmmvvhhddtntimsdimsdimsdimsdimsdimsdsC<CR><LF>
The first two bytes are the head of the message frame and the next two bytes after the
message head bytes are message identification bytes. The last two bytes indicate that this
is the end of the message frame. The Position/Status/Data message frame contains the
following information, including positioning & timing data:
Date:
m-month 1.....12
d-day 1..... 31
yy-year 1900...... 2079
Time:
h-hours 0..... 23
m-minutes 0...... 59
s-seconds 0...... 60
ffff-fractional sec 0......... 999,999,999
( 0.0 to 0.999999999 )
Position:
aaaa-- latitude in msec -324,000,000 ..... +324,000,000
oooo-- longitude in msec -648,000,000 .... +648,000,000
hhhh-- height in cm
( GPS, ref ellipsoid ) -100000 ..... +1,800,000 (-1000 to +18000 meter )
mmmm-- height in cm
(MSL ref ) -100000 ..... +1,800,000 ( -1000 to +18000 meter )
Velocity:
vv-- velocity in cm/sec 0.... 51400
( 0 to 514.00 m/sec )
hh-- heading 0.... 3599
( true north. res 0.1deg) ( 0.0 to 359.9 deg)
Geometry:
dd-- current DOP 0.0 .... 999
(0.0 to 99.9 DOP )
t-- DOP type 0-- PDOP (in 3D mode )
1-- HDOP ( in 2D mode )
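The field list above suggests a parser sketch for the leading fields of the frame. The byte offsets, the big-endian signed encoding, and the XOR-checksum convention below are all inferred from that list; none of this is quoted from the Motorola manual, so treat it as an assumption.

```python
import struct

# Hedged sketch of unpacking the leading fields of the 68-byte @@Ba
# Position/Status/Data frame. Offsets, big-endian signed encoding, and the
# XOR checksum over the bytes between '@@' and the checksum byte are all
# inferred assumptions.

def parse_position_status(frame: bytes) -> dict:
    if len(frame) != 68 or frame[:4] != b"@@Ba" or frame[-2:] != b"\r\n":
        raise ValueError("not a Position/Status/Data frame")
    checksum = 0
    for b in frame[2:65]:           # assumed: XOR of bytes after '@@'
        checksum ^= b
    if checksum != frame[65]:
        raise ValueError("checksum mismatch")
    (month, day, year, hours, minutes, seconds, frac_ns,
     lat_mas, lon_mas, h_gps_cm, h_msl_cm) = struct.unpack(
        ">BBHBBBiiiii", frame[4:31])
    return {
        "date": (year, month, day),
        "time": (hours, minutes, seconds, frac_ns),
        "lat_deg": lat_mas / 3_600_000.0,   # milliarcseconds -> degrees
        "lon_deg": lon_mas / 3_600_000.0,
        "height_gps_m": h_gps_cm / 100.0,
        "height_msl_m": h_msl_cm / 100.0,
    }
```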
The system controller & processor card consists of an INTEL 8031 single-chip
microcontroller, 8 Kbytes of EPROM, 8 Kbytes of static RAM, an RS-232 to TTL conversion
circuit, a CPU reset circuit and a keyboard interrupt generator circuit. Not all Oncore GPS
receivers available in the market provide only a TTL serial data port; some of them
also provide an RS-232 serial data port for data communication. The serial port of the INTEL
8031 microprocessor, however, is TTL. Considering this during design, a provision has been
made in the SC&P card to select the RS-232 to TTL conversion circuit when it is
necessary, so that the SC&P card can be used with either type of serial data port provided by
an Oncore GPS receiver.
The Oncore GPS OEM card works in two different modes: the Position Fix mode and the
Idle mode. The Position Fix mode is the normal operating mode of the OEM card. In this
mode, the OEM card tracks the satellites' signals and performs the navigation solution.
After performing the navigation solution, it writes the result data to its serial output
port. The Idle mode is a reduced-power mode; in this mode, the card does not track the
satellites' signals. So, to get the positioning and timing data, we must set the OEM card in
Position Fix mode, and thus we have to send the OEM card an input command in the
following format to initialize it in Position Fix mode:
@@Cg1C<CR><LF>
This input command is sent to the GPS OEM card during the initialization algorithm of
the system software.
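Building that command can be sketched with the same assumed convention as the downlink frame: an XOR checksum over the bytes between '@@' and the checksum byte, followed by CR/LF. This mirrors the @@Cg1C&lt;CR&gt;&lt;LF&gt; shape shown above but is an inference, not a quote from the Motorola manual.

```python
# Hedged sketch of building an Oncore input command. The XOR checksum
# convention is an assumption inferred from the frame format above.

def make_command(msg_id: bytes, data: bytes) -> bytes:
    body = msg_id + data
    checksum = 0
    for b in body:
        checksum ^= b
    return b"@@" + body + bytes([checksum]) + b"\r\n"

# data byte 1 selects Position Fix mode, per the text
cmd = make_command(b"Cg", bytes([1]))
```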
After receiving this command, the GPS OEM card tracks the satellites' signals, performs
the navigation solution, and sends an output message to its serial output port. As mentioned
earlier, the Position / Status / Data message carries much information, including the
positioning and timing information required by our system. The main task of the System
Controller & Processor card is to receive the Position / Status / Data message, extract the
positioning and timing information, and display it on the LCD as selected by the user. To
perform these tasks, the SC&P card first establishes serial data communication with the GPS
OEM card during the initialization algorithm. It then watches the incoming bytes for the head
of the Position / Status / Data message frame. Once the message head has been received, it
continues to receive the message bytes, counting each byte as it arrives. When the count
reaches the message length, it compares the last two received bytes with the message end
flags, Carriage Return and Line Feed. If the last two bytes match the end flags, it copies the
whole message frame from its internal memory to its external memory for further data
processing.
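The framing logic described above can be sketched as a small byte-at-a-time state machine. The following is a minimal sketch in C, assuming a hypothetical fixed frame length of 76 bytes; the actual length depends on the ONCORE message type selected:

```c
#include <string.h>

#define MSG_LEN 76   /* hypothetical total frame length, @@ ... <CR><LF> inclusive */

/* Accumulates one frame; feed bytes as they arrive on the serial port. */
typedef struct {
    unsigned char buf[MSG_LEN];
    int count;               /* bytes received so far in the current frame */
} frame_rx;

/* Returns 1 when a complete, well-terminated frame has been copied
 * into `frame` (the "external memory" of the SC&P card). */
int frame_rx_byte(frame_rx *rx, unsigned char b, unsigned char *frame)
{
    if (rx->count == 0 && b != '@') return 0;            /* wait for message head */
    if (rx->count == 1 && b != '@') { rx->count = 0; return 0; }
    rx->buf[rx->count++] = b;
    if (rx->count < MSG_LEN) return 0;
    rx->count = 0;
    /* frame complete: verify the end flags <CR><LF> */
    if (rx->buf[MSG_LEN - 2] == '\r' && rx->buf[MSG_LEN - 1] == '\n') {
        memcpy(frame, rx->buf, MSG_LEN);
        return 1;
    }
    return 0;                                            /* bad frame: discard */
}
```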
The Position / Status / Data message contains more information, in binary form, than this
system requires. The SC&P card extracts the positioning and timing data from the message
frame. The position data contains latitude and longitude in milliseconds of arc and height in
centimeters; two height values are provided, one calculated with respect to mean sea level
and one with respect to the ellipsoid. The timing data contains the current date in year,
month and day, and the time in hours, minutes and seconds. After extracting the useful
positioning and timing data from the Position / Status / Data message, the SC&P card
converts the latitude and longitude into degrees, minutes and seconds, and the height into
meters. To display these real-time data on the programmable LCD, the SC&P card also
converts the extracted data into BCD form.
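The conversions just described are straightforward arithmetic. A sketch in C (the function names are illustrative, not the actual SC&P firmware routines):

```c
/* Convert an angle reported in milliseconds of arc (as latitude and
 * longitude are) into degrees, minutes and seconds. */
typedef struct { int deg, min, sec; } dms_t;

dms_t msec_to_dms(long msec_of_arc)
{
    dms_t r;
    long total_sec = msec_of_arc / 1000;     /* whole seconds of arc */
    r.deg = (int)(total_sec / 3600);
    r.min = (int)((total_sec % 3600) / 60);
    r.sec = (int)(total_sec % 60);
    return r;
}

/* Height arrives in centimeters; the display uses meters. */
long cm_to_m(long cm) { return cm / 100; }

/* Pack a two-digit value into BCD for the LCD driver. */
unsigned char to_bcd(int v) { return (unsigned char)(((v / 10) << 4) | (v % 10)); }
```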
After this real-time data processing, the results are stored in external RAM. The
programmable LCD has two lines of 16 characters each, so only two data items can be
displayed at a time. The data to be displayed are therefore divided into three screens: the
first screen displays the current date and time; the second displays the latitude and longitude
of the current position in degrees, minutes and seconds; and the third displays the heights
with respect to the ellipsoid and to mean sea level, in meters. A keyboard, connected to
port 1 of the 8031, lets the user select the screen. The initialization algorithm of the system
software sets port 1 to a preset binary value. When the user presses a key, the keyboard
interrupt generator circuit of the SC&P card generates an interrupt to the CPU. On
acknowledging the interrupt, the CPU reads the binary value from port 1 and compares it
with the preset values; if the value matches one of them, the CPU displays the corresponding
screen on the LCD.
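The screen selection amounts to matching the port value against the presets. A sketch in C, assuming hypothetical preset values in which each key pulls one line of port 1 low (the actual values used in the SC&P card are not given in the paper):

```c
/* Hypothetical presets: each key grounds one line of port 1. */
#define KEY_SCREEN1 0xFE   /* date & time */
#define KEY_SCREEN2 0xFD   /* latitude & longitude */
#define KEY_SCREEN3 0xFB   /* heights */

/* Called from the keyboard interrupt handler with the value read from
 * port 1; returns the screen to display, or 0 for no change. */
int select_screen(unsigned char port1)
{
    switch (port1) {
    case KEY_SCREEN1: return 1;
    case KEY_SCREEN2: return 2;
    case KEY_SCREEN3: return 3;
    default:          return 0;   /* matches no preset: ignore */
    }
}
```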
CONCLUSION
The personal navigation system based on GPS is one of the simplest and most effective
applications of GPS in daily life. The system does have some limitations: it can measure
height within the range -1000 meters to +18000 meters. If it is used above this limit, the
height output is clamped to the maximum value, and the latitude and longitude data become
incorrect. The positioning accuracy of this system is better than 25 meters SEP without
Selective Availability (S/A), degrading to about 100 meters with S/A. The timing accuracy,
observed with S/A on, is 130 nanoseconds. At present, the whole system is in use in the
laboratory as a single compact unit. The system could be integrated with electronic map
technology to build a personal automobile tracking system, and could also be used for
geological surveys and archeological expeditions.
REFERENCE
ABSTRACT
The Global Positioning System (GPS) is a very accurate, all-weather, worldwide three-
dimensional navigation system, and it has been used in almost every field related to
positioning and navigation. This paper presents a new application of GPS technology: a
personal positioning and navigation system. It combines a VP ONCORE receiver OEM
(Original Equipment Manufacturer) board with an intelligent system controller, using a
keyboard and a programmable LCD as its peripherals. The system realizes rich navigation
functions and satisfies the needs of personal use.
KEY WORDS
1. INTRODUCTION
GPS was designed and implemented for military purposes at first, but it has been adopted
for public services worldwide due to its high performance. We are now pursuing research
and development on personal applications of GPS technology.
However, a GPS receiver is positioning equipment and can only give accurate coordinate
information for the current position. To realize real-time navigation, we designed an
intelligent main controller to control and communicate with the GPS receiver. It uses internal
algorithms and programs to analyze and process the messages sent by the receiver in order
to guide a moving object. For personalization and portability, we also considered low cost,
multiple functions, low power consumption and small dimensions in the design.
2. WORK PRINCIPLE OF GPS RECEIVER
The Motorola VP ONCORE OEM board was selected as the receiving part of this navigation
system. The ONCORE OEM board is specifically designed for embedded applications; it is a
6-channel receiver capable of tracking six satellites simultaneously. The signals transmitted
from GPS satellites are collected by a low-profile microstrip antenna and routed to the RF
signal processing section, where they are downconverted and digitized. The digitized IF
signal is passed to the digital signal processor, where it is split into six separate channels for
code correlation, filtering, carrier tracking and signal detection. The microprocessor section
computes the satellite data and pseudorange measurements to extract position, velocity and
time, and sends these results to its I/O port in a predefined protocol.
The Motorola ONCORE receiver provides three user-selectable I/O protocols: Motorola
Binary Format; National Marine Electronics Association (NMEA) 0183 Format; and
LORAN Emulation Format. The Motorola binary format provides the most effective and
complete I/O instruction set, and its transfer rate is as fast as 9600 bit/s, so it was selected as
the I/O format in our application system. All Motorola binary format messages are
formatted as sentences that begin with ASCII @@ and end with ASCII <CR><LF>. The
first two bytes after the @@ are the message ID bytes, which identify the particular structure
and format of the remaining binary data. The byte before <CR><LF> is a checksum C (the
exclusive-or of all message bytes after the @@ and prior to the checksum), used to validate
the whole message.
In order to work correctly, the ONCORE receiver must be initialized first. This includes
the initialization of I/O protocol mode, operating mode, data transfer rate and GMT time
difference. The initialization commands are stored in the EPROM of the system as a
constant table. They are sent to the receiver through the I/O port according to the
predefined format.
• switch I/O format command
@@CimC<CR><LF>
m -- format 0 -- Motorola Binary Format
1 -- NMEA-0183 Format
2 -- LORAN Emulation Format
• select operating mode
@@CgmC<CR><LF>
m -- mode 0 -- Go to idle mode (i.e. a reduced power mode)
1 -- Go to Position Fix Mode (i.e. the normal operating mode)
• set position/status/data output rate
@@BamC<CR><LF>
m -- mode 0 -- output response message once (polled)
1..255 -- response message output at indicated rate (continuous, once per n
second)
• set current GMT time correction
@@AbshmC<CR><LF>
s -- sign 00 -- positive ff -- negative
h -- hours 0 .. 23
m -- minutes 0 .. 59
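Any of the commands above can be built mechanically, with the checksum computed as described earlier (the exclusive-or of all bytes after the @@). A sketch in C:

```c
#include <stddef.h>

/* Build a Motorola binary sentence: @@ <body> <checksum> <CR> <LF>.
 * `body` is the two message ID bytes plus their binary arguments.
 * Returns the total sentence length written into `out`. */
size_t build_command(const unsigned char *body, size_t body_len, unsigned char *out)
{
    unsigned char cksum = 0;
    size_t n = 0;
    out[n++] = '@'; out[n++] = '@';
    for (size_t i = 0; i < body_len; i++) {
        out[n++] = body[i];
        cksum ^= body[i];          /* XOR of all bytes after the @@ */
    }
    out[n++] = cksum;
    out[n++] = '\r'; out[n++] = '\n';
    return n;
}
```

For example, the Position Fix command @@Cg1C<CR><LF> has the body bytes {'C', 'g', 0x01}.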
Based on the GPS ONCORE OEM receiver, a personal positioning and navigation system
is designed. A simplified structure block diagram of the system is shown in figure 1.
Figure 1. Simplified block diagram of the system: GPS OEM receiver and system controller
with RAM, ROM, I/O extension, clock circuit, display, keyboard and power.
As positioning equipment, a GPS receiver can only give the absolute positioning
information of the current moving object: latitude, longitude, height, velocity, direction, date
and time. To realize real navigation, special algorithms must be designed to calculate the
range and heading of a route. Fig. 2 illustrates how to calculate the range.
Figure 2. Geometry of the range calculation: A (start) and B (destination) lie on a sphere of
radius r about O; PA and PB are their projections onto the equatorial plane.
Suppose A is the start point and B is the destination. The latitude and longitude of A and B
are known: (Aa, Ag) and (Ba, Bg). APA and BPB are two lines perpendicular to the
equatorial plane.
θ1 = Bg − Ag

PAPB² = OPA² + OPB² − 2 × OPA × OPB × cos θ1
      = (r cos Aa)² + (r cos Ba)² − 2r² cos Aa cos Ba cos θ1

AB² = PAPB² + BC²
    = (r cos Aa)² + (r cos Ba)² − 2r² cos Aa cos Ba cos θ1 + (r sin Ba − r sin Aa)²   (1-1)

where r = the radius of the earth
      θ1 = the included angle of OPA and OPB

The actual route is the arc from A to B:

ω = 2 arcsin(AB / 2r)
arc AB = r × ω = 2r arcsin(AB / 2r)   (1-2)
The next step is to get the heading from A to B. For better understanding, the left part of
Fig. 3 should be compared with the right part, which is the tangent plane diagram of the
heading.
Figure 3. Heading geometry: the sphere with points A, B, N and center O (left), and the
corresponding tangent-plane diagram with the angles ωN, ωB and the heading ω0 (right).
Line AX is the tangent line of the circle determined by A, N and O. Line AY is the tangent
line of the circle determined by B, A and O.
So the included angle of AX and AY, that is ω0, is the current heading from A to B.
Since ON = OA = OB = r, the triangles NAO and BAO are isosceles, with ∠NOA = 90° − Aa
and ∠AOB = ω. Hence

ωN = 90° − ∠NAO = 90° − (180° − ∠NOA)/2 = 45° − Aa/2
ωB = 90° − ∠BAO = 90° − (180° − ∠AOB)/2 = ω/2
Generally, true north is defined as 0° in navigation, with the course ranging from 0° to 359°
rotating clockwise. The ω0 in the above equations is an acute angle; the actual heading must
be selected from ω0, 180° ± ω0 or 360° − ω0 according to the position of B relative to A. As
the object moves from A to B, the range and heading change from moment to moment.
Since the current position of the object can be obtained from the receiver, the above
algorithm gives the range and heading at any point on the route.
Based on the hardware design and the above algorithms for heading and range, rich
navigation functions are realized in this personal navigation system through careful
programming.
(1) Display satellite status
This function gives a summary of satellite visibility by showing the number of visible and
tracked satellites, the DOP (Dilution of Precision), and the ID, elevation, azimuth and signal
strength of each visible satellite.
6. CONCLUSION
This paper has described the theory and methods used in developing the personal navigation
system. A model has been completed in the lab, showing the system to be practical and very
useful. Next we will introduce GIS (Geographic Information System) capability to make it
more visual and more convenient to use. We forecast a wide market for it.
REFERENCES
FILIBERTO MACIAS
TELEMETRY DATA CENTER
ABSTRACT
The Telemetry Data Center (TDC) at White Sands Missile Range (WSMR) is now
beginning to modernize its existing telemetry data processing system. Modern networking
and interactive graphical displays are now being introduced. This infusion of modern
technology will allow the TDC to provide our customers with enhanced data processing
and display capability. The intent of this project is to outline this undertaking.
KEY WORDS
INTRODUCTION
Telemetry data processing systems typically all perform the same tasks. These include the
acquisition of the data, decommutation, and processing of this information. Finally, the
data is displayed and analyzed. Recent introduction of intelligent networks, interactive
graphics, and animation have greatly improved the data visualization and analysis
capability of telemetry data processing engines. The ability of the system to process and
deliver a trustworthy model of the “real world” determines the true merit of the system’s
capability. The true objective of the processing system is basically to provide the data
analyst with a better perception of what occurred or is occurring during a test. Modern
networking and graphical presentation offer an augmentation to data displayed as a
mathematical presentation (graph, plot, etc.) on a video screen or strip chart recorder.
However, certain parameters, depending on response or sampling time, are more
conducive to presentation on strip chart; others offer great possibilities to be used to drive
graphical presentations. The advantage with the use of telemetry data to drive displays is
that attitude information, guidance and other pertinent data can be obtained from Inertial
Measurement Units (IMU) usually configured into modern missile systems. Also,
telemetry data is the only method that can provide the information to conclude why a test
was successful or not. Therefore, combining modern telemetry data processing with
modern networks and graphical systems will only prove to enhance “real time” displays.
Modeling using simulated or “real” data could be used for a variety of applications, which
will be discussed in this paper. The Telemetry Data Center (TDC) at White Sands Missile
Range (WSMR) is now in the process of introducing intelligent networking and enhancing
its display capability to include animation and visualization (see figure 1).
figure 1. WSMR data flow: radar and telemetry tracking stations feed the Data Control
Facility and the Telemetry Data Center; the Real-Time Operations Control System (3280)
connects to the Graphics Support Facility.
The core of the TDC’s processing capability consists of three separate and almost
identical systems designated Telemetry Data Handling Systems (TDHS): TDHS-A, TDHS-B
and TDHS-C. Each TDHS is a stand-alone system capable of performing data distribution,
decommutation, data tagging, merging, processing, archiving and finally the display of data.
Each can support operations in three possible modes: preflight, real-time and post-flight
exercises. All TDHS systems are configured with EMR 8715 Telemetry Processors as the
heart of their processing capacity.
Though all of the systems are quite capable of handling all of the TDC’s present support
requirements, the existing TDHS A/B used for primary support by TDC are aging. These
systems are based on mid-1980s technology. Each TDHS is configured with Concurrent
3280 computers used for telemetry front-end setup, telemetry processing setup and data
archival. This configuration (TDHS/ Concurrent) lacks all the current standards of open
architecture or networking. Therefore, it is not conducive to any real time animation.
The current processing environment in which the TDC operates during real time and
during other processing activities could be described as a serial mode. Telemetry data
arrives from a variety of sources; it is decommutated and processed in the EMR 8715, which
sends its data out to various destinations, including other Concurrent systems for further
processing and, finally, display. This extensive routing of data throughout the system
increases the time before processed data arrives for display, and the additional time degrades
the deterministic qualities of the system. Before any realism in real-time visualization can be
achieved, two problems must be resolved: data latency and the lack of any multiprocessing
or parallel processing (see figure 2). The introduction of multiple processing engines
co-existing on a network, each responsible for a certain aspect of telemetry data processing,
has now been implemented on TDHS-C. This implementation brings direct benefits,
described in the sections that follow.
figure 2. Serial processing path: 3280 computers and TDHS systems A, B and C feeding
videos, strip chart recorders, workstations, other users and analog magnetic tape.
TELEMETRY DATA PROCESSING SYSTEM
The Telemetry Data Processing engine on TDHS-C is a Lockheed Martin O/S 90 8715.
The O/S 90 8715 is well suited to acquire and perform the required processing on specific
data parameters using a vast complement of data handling algorithms. It is configured with
networking capability (Ethernet/ FDDI) such that data products can be delivered to
multiple destinations simultaneously. The O/S 90 uses Transmission Control
Protocol/Internet Protocol (TCP/IP) for broadcast of data. The 8715 has the capability of
performing a large number of data processing functions at high speed. It can accept integer
or floating point data, either in raw or engineering units form. Data processing performed
within the 8715 solves the problem of data latency since the additional processing that was
once performed by various computer systems can now be performed within the telemetry
processor. The O/S 90 8715 utilizes the computational capacity of multiple SPARC
computers within the unit to perform the additional data handling. Subsequently, the need
for other systems to perform additional tasks is eliminated or at least reduced. (see
figure 3.)
figure 3. Old vs. new configurations: with the O/S 90 upgrade, setup and logging are done
in the 8715; processed TM data flows over the workstation network to SGI workstations,
3280 displays and end users.
DISPLAY ENGINE
The O/S 90 Telemetry Processor is configured with a device called an Ethernet Output
Module (EOM), which provides the connection between the telemetry processor and the
real-time network. Several workstations reside on the network, each with a specific task to
perform; for example, one station acts as the system server, responsible for all interaction
(database creation, system administration, etc.) with the processor. Also situated on this
real-time network is a Silicon Graphics workstation running a graphical software system
called Designers Workbench (DWB), developed by Coryphaeus Software. This software is
used for database modeling and dynamic display development, and provides an environment
for simulation activities. DWB gives the user the capability to create, view, modify and
animate three-dimensional graphical presentations. The two major features of DWB are its
ability to construct databases used for graphical rendering and its ability to link those
databases to actual real-time data.
The DWB requires a 4D series Silicon Graphics IRIS workstation running the 4D series
IRIX operating system, version 5.2 or greater. The DWB used by the TDC is configured with
two software modules: the Link/Animation Editor (LE) and the Real-Time Module (RTM).
DWB allows a programmer to run the databases it builds standalone, without the need for an
editing environment. Drawing routines are optimized to maximize the update rate of a
particular display, which in effect allows for the creation of interactive displays; performance,
however, depends entirely on the hardware available and in use. The LE is used to support
quick verification and testing of graphical displays, and provides DWB with the means to
link database elements as needed. During telemetry operations (real time or otherwise), the
desired processed telemetry parameters are sent via Ethernet from the O/S 90 8715 to a
workstation running DWB. The user can select items in a database and tag or identify them
for use in the desired task (see figure 4).
The intent is not only to provide the present complement of real-time displays, such as
mathematical representations (plots, graphs, etc.) on a strip chart or computer screen, but to
combine various parameters to augment these data presentations. It is possible to provide, in
real time, computer-generated visuals that enhance a person’s perception of a particular
event or sequence of events. Engineering analysis will become more efficient through the
integration of real-time archived data with simulated data. Cause-and-effect activities could
allow for realistic simulation validated with real-time data. The resulting simulated modeling
is recognized as a viable tool for solving
figure 4. TDC open system: systems A through D and a display workstation (generating
high-end TM graphical displays) on an FDDI network, with a real-time system driving four
video channels through realtime display cells A, B and C.
complex problems or developing scenarios that can have a great impact on real time
activities.
The TDC is using this graphical capability to create numerous attitude models of missile and
aircraft systems now being tested at WSMR. One of the more visible and important uses of
this aspect of telemetry display has been in support of Missile Flight Safety (MFS).
Typically, the data MFS uses in its real-time decision making includes vehicle guidance,
vehicle dynamics and attitude information, all of which is easily obtained from normal
telemetry data processing. Up to this point, MFS support consisted of data displays on a
strip chart or a CRT scrolling tabular information, plots and graphs. By combining various
parameters indicating vehicle attitude and guidance, the performance of the vehicle can be
depicted.
Before a real-time operation, MFS will request from a missile developer a hardware-in-the-
loop simulation. This simulated data illustrates to MFS how a vehicle is expected to perform
during a flight. Since the data is simulated, a nominal flight as well as a missile failure can be
created. The data is processed through the telemetry system and fed to the real-time displays
for MFS verification. MFS also uses this opportunity to familiarize itself with important
mission events that are expected to occur (missile lift, motor cutoff, booster separation,
etc.). The use of attitude models driven by hardware-in-the-loop simulations has provided
MFS with the realism of a real-time mission. Other proposed projects are:
• Synthetic Video
CONCLUSION
The TDC’s effort in modeling and simulation is admittedly still in its infancy. We are
continuously learning to improve the fidelity of our graphical presentations. The advantage
of using real data is that the renderings can be validated against actual data, resulting in
realistic simulated scenarios. The use of commercial-off-the-shelf (COTS) hardware and
software has also provided us with the conduit to enter this arena quickly, and COTS
software offers cost and time savings over developing these systems from scratch.
MERGING OF DIVERSE ENCRYPTED
PCM STREAMS
Harold A. Duffy
Naval Air Warfare Center
Weapons Division
Code 523100D
China Lake, CA 93555
ABSTRACT
The emergence of encrypted PCM as a standard within DOD makes possible the
correction of time skews between diverse data sources. Time alignment of data streams
can be accomplished before decryption and so is independent of specific format. Data
quality assessment in order to do a best-source selection remains problematic, but
workable.
KEY WORDS
INTRODUCTION
There has been a trend at test ranges toward the use of remote telemetry data relay sites to
ensure continuous coverage. It is desirable to be able to construct a composite data stream
for real-time or nearly real-time display. The growing use of encrypted pulse code
modulation (PCM) and progress in custom microelectronics together motivate a
re-examination of how data-stream merging might be accomplished in a robust manner and
with a minimum of operator intervention.
The current investigation is biased toward format independence insofar as that goal is
practical. By minimizing the effort required to set up for new test projects, a ground station
improves its ability to economically support a rapid turnover of such projects. Time
alignment of streams can be accomplished before decryption. Unfortunately, reliable
assessment of data quality may require examination of the data after they have been
decrypted and decommutated. There is realistic hope for the future in adopting error control
codes (ECC); such codes applied to encrypted data would provide a measure of data quality
and make truly format-independent merging a reality.
BACKGROUND
Best source selection of data before decommutation has been in use at the Atlantic Fleet
Weapons Test Facility (AFWTF), Puerto Rico, for several decades. No time skew
correction has been attempted. The data quality indicator is receiver quieting, as evidenced
by amplitude of the detected signal. Receiver output level calibration is normally
performed before each test.
The closest form of data merging to that advocated by this paper was attempted by Hunter
Research, Inc. for PMTC, Point Mugu, California. Available documentation does not permit
an in-depth review of the automated best-source selector (ABSS), but it apparently used
time skew correction circuits at least superficially similar to those discussed in this report.
The ABSS attempted to work with plain-text PCM data and to extract data quality
information from within each stream. In Phase I, majority voting was attempted; in Phase II,
frame sync quality was examined. The Hunter Research contract ran from 1990 to 1993.
Most test ranges appear to have successfully applied computers to data merging on a
frame-by-frame basis. Such a method is dependent on application of detailed knowledge
about the data format. While one cannot argue against the success of such a powerful
technique, it would appear that there is a place for methods that require less time
investment.
There will be significant time skew between data received locally at a telemetry ground
station and data relayed from a remote receiving site (Figure 1). The communications link
from the remote site to the ground station is assumed to transmit data at essentially the
speed of light. Additional delays that might result from digital multiplexers are not within
the scope of this discussion. For example, assume a distance D from the ground station to
the relay site of 15 km. The maximum time skew between direct and relayed data will be
2D/c = 2(15 km)/(3 × 10^5 km/s) = 100 µs.
FIGURE 1. Relay Geometry.
If PCM data at a bit rate of 2 Mbps is assumed, then the maximum data skew will be 200
bits. In general, time delay between direct and relayed streams is variable, and dependent
on test item dynamics. The phase relationship between the two signals will also be
continuously variable. Total time skew could be viewed as the time integral of the
difference in Doppler-shifted bit rates observed at the two receiving sites.
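The worked example above amounts to a one-line calculation; a sketch:

```c
/* Maximum data skew, in bits, between direct and relayed streams for a
 * relay site at distance d_km (km) and a PCM bit rate in bit/s. */
double max_skew_bits(double d_km, double bit_rate)
{
    const double c_km_s = 3.0e5;            /* speed of light, km/s */
    double skew_s = 2.0 * d_km / c_km_s;    /* worst case: 2D/c */
    return skew_s * bit_rate;
}
```

With D = 15 km and 2 Mbps this gives the 200 bits quoted above.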
Figure 2 depicts a minimal modification to a best source selector switch. The difference is
that all streams entering the switch would be time aligned and there would be no data
corruption due to switching between misaligned streams. This approach is advocated for
its conceptual simplicity. There are two implications of the design that appear to be
inevitable, asynchronous clocking and seek-center mode.
Asynchronous Clocking. It is necessary to force phase coherency upon data streams that
have an arbitrary phase relationship. This requirement is created to avoid read/write
conflicts at the delay memory. Further, asynchronous clocking creates the opportunity to
enter or delete read cycles, thereby single-stepping memory delay without resetting the
memory pointers. The operational impact of asynchronous clocking is that an output bit
sync must be avoided. Data and its asynchronous clock must both be taken to a decryptor
or an archival recorder. Asynchronous clocking is characterized by loading edges that are
time aligned to a common internal reference clock running faster than the incoming data
clocks.
Seek-Center Mode. To prevent a slow upward or downward drift in the average delay
magnitude of all delay modules (Figure 2), it is necessary to alter the operation of each
delay module when picked as having best current data. When so picked, the delay module
must have its tracking loop inhibited and must slowly adjust its delay magnitude to
half-scale, probably mechanized by adding or deleting read cycles as mentioned above.
DELAY MODULE
Assuming two perfect data streams, one a reference and the other an assigned input, the
only active task for the delay module is to acquire initial alignment. FIFO pointers will be
initialized according to a delay value found by a digital correlator (Figure 3). The two data
streams will continue to track each other even if the delay between them changes with
time. In practice, the first-in, first-out (FIFO) memory was implemented in conventional
random access memory (RAM) so that the address pointers would be accessible.
FIGURE 3. Minimal Delay Module.
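The FIFO-in-RAM arrangement can be sketched in software as a circular buffer whose read pointer trails the write pointer by the delay found by the correlator. A minimal sketch, storing one bit per byte for clarity (buffer size and init values are illustrative, not the tested hardware's):

```c
#define DELAY_SIZE 1024   /* power of two, so pointers wrap with a mask */

typedef struct {
    unsigned char ram[DELAY_SIZE];
    unsigned wr, rd;      /* write and read pointers into the RAM */
} delay_module;

/* Initialize the pointers so the read lags the write by `delay` bits,
 * i.e. rd = wr - delay (mod DELAY_SIZE). */
void delay_init(delay_module *d, unsigned delay)
{
    d->wr = delay % DELAY_SIZE;
    d->rd = 0;
}

/* One bit in, one (delayed) bit out per clock. */
int delay_step(delay_module *d, int bit_in)
{
    int out;
    d->ram[d->wr] = (unsigned char)bit_in;
    d->wr = (d->wr + 1) & (DELAY_SIZE - 1);
    out = d->ram[d->rd];
    d->rd = (d->rd + 1) & (DELAY_SIZE - 1);
    return out;
}
```

Because the pointers are explicit, the delay can be single-stepped by inserting or deleting read cycles, as described under asynchronous clocking.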
In a real situation, there will be corruption of the data streams and of the clocks. Clock
corruption, or “slips”, will cause an alignment error that must be identified and then
corrected. The degree of effort expended on getting a fast realignment should logically
depend on how much improvement would result. If the delay is less than the error extension
due to encryption, then there is no justification for sophistication beyond basic operability
(Figure 3). There is no point in trying to reacquire alignment before the loss of correlation
due to error extension (about 100 bits) has cleared. Because reacquisition in a minimal
configuration will require an interval equal to the delay period, some additional circuitry
may be justified if the delay period is appreciably greater than 100 bits.
Methods for rapid acquisition have been envisioned (Figure 4). The one actually
implemented for testing contains continuous comparators that search for stream slippage
several clock cycles ahead of and behind the reference. Small temporary corrections can be
made until the FIFO delay is updated. In addition to this scheme for “single-stepping” back
into alignment, it would be possible to reset with minimal data loss. It is not necessary to
zero the write counter address (write pointer) after each reset; the write counter could be
allowed to run freely, and a new read counter address (read pointer) could be derived as
the sum of the current write address and the 2’s complement of the delay value.
The digital correlator used for testing is a TRW 1023, now manufactured by Raytheon as
the TMC 2023. Its correlation interval of 64 bits allows the true correlation peak to stand
out clearly from background noise. The digital correlator continuously scans the total delay
range to keep a best current delay value stored in a register.
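The correlator's job can be sketched in software: slide a 64-bit window of the delayed stream against the reference, count matching bits at each candidate offset, and keep the offset with the highest count. This sketch models the behavior, not the TRW 1023's internals:

```c
/* Find the delay (in bits) of `delayed` relative to `ref` by sliding a
 * 64-bit correlation window. Streams are arrays of 0/1 bytes; `len` is
 * the reference window length; returns the best offset in [0, max_delay].
 * Caller must ensure `delayed` holds at least len + max_delay samples. */
int correlate_delay(const unsigned char *ref, const unsigned char *delayed,
                    int len, int max_delay)
{
    int best_off = 0, best_score = -1;
    for (int off = 0; off <= max_delay; off++) {
        int score = 0;
        for (int i = 0; i < 64 && i < len; i++)   /* 64-bit correlation interval */
            if (ref[i] == delayed[i + off]) score++;
        if (score > best_score) { best_score = score; best_off = off; }
    }
    return best_off;
}
```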
DATA QUALITY ASSESSMENT
Before decommutating the diverse data streams, the potential data quality indicators
include the following:
a. Receiver AGC
b. Receiver video amplitude
c. Receiver deviation
d. Bit synchronizer S/N
e. Error control codes
Receiver diagnostic signals will have varying degrees of sensitivity to multipath, but all
will probably require some type of pre-test calibration. Bit synchronizers built by DSI,
Inc., have offered an optional soft-decision board that produces a diagnostic signal varying
with S/N. Such a feature should be free of pre-test calibration, but its ability to recognize
multipath degradation is unknown. While not currently in use by DOD, error control codes
would make data quality assessment possible; both block codes and convolutional codes
allow the derivation of a data quality diagnostic.
A discussion of the examination of frame sync patterns has been avoided so far because
the ideal of format independence would be compromised. Nevertheless, this may be one of
the best indicators of data quality that is currently available. The construction and
operation of diagnostic hardware could be made reasonably simple.
CONCLUSIONS
DOD adoption of encrypted PCM as a data standard has brought the ideal of format-
independent data merging closer to reality. An examination of skewed data streams will
show only one major correlation peak to enable alignment. The construction of suitable
circuits has been demonstrated.
Data quality assessment remains a problem. All practical techniques require some operator
intervention. Receiver quieting and frame sync quality stand out as perhaps the most robust
indicators at this time.
Error control codes could be a source of data quality information in addition to improving
data quality. Merging of such data would be truly format-independent and could be
accomplished with automatic ease.
AUTOMATED GENERATION OF TELEMETRY FORMATS
ABSTRACT
The process of generating a telemetry format is currently more of an ad-hoc art than a
science. Telemetry stream formats conform to traditions that seem to be obsolete given
today’s computing power. Most format designers would have difficulty explaining why
they use the development heuristics they use and even more difficulty explaining why the
heuristics work. The formats produced by these heuristics tend to be inefficient in the
sense that bandwidth is wasted. This paper makes an important step in establishing a
theory on which to base telemetry format construction. In particular, it describes an
O(n log n) algorithm for automatically generating telemetry formats. The algorithm also has
the potential of efficiently filling a telemetry stream without wasting bits.
KEYWORDS
Telemetry, Data Formats, Telemetry Tree, Telemetry Tiling Algorithm, Integer Tiling
INTRODUCTION
Telemetry format design, sometimes referred to as Data Cycle Map (DCM) generation, is
the process of taking a set of parameters with given sample rates and given word sizes and
fitting them into a fixed length string of bits called a frame. Mathematically this is
equivalent to tiling the integers modulo the frame length with specialized tiles. This
connection will be discussed later. Ideally, a telemetry format design, or a DCM meets the
following (*) criterion:
(*) Every data sample of every parameter (measurement on the test article) is
transmitted at the time that it is sampled.
By “at the time it is sampled” we mean after it has been sampled and sent along whatever
wires it needs to go along before it is available to the data transmission system. Since
during most telemetry applications only one bit stream is transmitted, the (*) criterion
implies that no two samples are sampled at the same time and that the transmission rate is
sufficient to send all bits of all samples within the telemetry frame. If we are to be very
precise, (*) also requires that the sample and transmission clocks be synchronized. Also
note that if (*) can be satisfied, which includes sending every parameter, then there can be
no wasted bits. Bits in the telemetry stream that do not represent data are place-holders to
ensure that (*) is satisfied and synchronization is maintained.
One of the major traditions involved in designing a format is the use of fixed length words
and fixed length minor frames. Why does this fixation on fixed lengths exist? The use of
fixed length frames is due to the basic construct of repeated data. That is, within every
time period you expect to receive a given set of data. Even though there is an increasing
use of randomly sent telemetry data (e.g. MIL-STD 1553 messages), this seems like a
reasonable use of the fixed length concept if for no other reason than that it makes the
problem finite. The use of fixed length minor frames (most notably in PCM matrices)
seems to be mostly a leftover from an attempt to simplify the organization of a telemetry
stream. One redeeming factor of minor frames is synchronization. However, this is really a
confidence factor, and minor frame syncs do not need to arrive at fixed intervals. The use
of fixed length words is most likely a result of computer byte and word lengths and the use
of such may have made sense in the days of slower computers. However, ultimately the
use of fixed length words is just a blocking method for counting bits. Considering that
telemetry transmission rates do not come close to contemporary CPU speeds, the reason
for using fixed length words becomes less clear.
One of the main heuristics used in the designing of formats is the power-of-two rule. That
is, make every transmission rate a power of 2, e.g. 2, 4, 8, 16, etc. For example, if a
parameter is actually sampled at 5 samples a second, it would be transmitted at 8 samples
a second. This results in replicating the previous value of the parameter. This heuristic
clearly violates (*), our objective. We will see later, in the section on Telemetry Trees,
why this rule has evolved.
Both the tradition of fixed lengths and the power-of-two rule waste space. The normal way
of fitting a parameter of a length other than the fixed word length is to blank fill to the end
of the word. Thus, a 12 bit sample in an 8 bit format wastes 4 bits. The power-of-two rule
wastes space simply by sending parameters more often than they are sampled. The
Telemetry Tiling algorithm presented in this paper disregards these traditions and
heuristics and simply concentrates on meeting (*).
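As a rough sketch of how much these two practices cost, the waste per parameter can be computed directly. The function names and the 12-bit-sample/16-bit-fixed-word numbers below are ours, chosen only for illustration (the text's own example pads a 12 bit sample within 8 bit words).

```python
import math

def pow2_rate(rate):
    """Round a sample rate up to the next power of two (the power-of-two rule)."""
    return 1 << math.ceil(math.log2(rate))

def wasted_bits_per_second(rate, word_bits, fixed_word_bits):
    """Bits per second lost to oversampling plus blank fill, for one parameter."""
    tx_rate = pow2_rate(rate)
    oversample = (tx_rate - rate) * word_bits          # redundant repeated samples
    padding = tx_rate * (fixed_word_bits - word_bits)  # blank fill in each word
    return oversample + padding

# A 12-bit parameter sampled 5 times/s in a 16-bit fixed-word format is sent
# at 8 samples/s: 3 redundant samples plus 4 fill bits per transmitted word.
print(pow2_rate(5))                       # 8
print(wasted_bits_per_second(5, 12, 16))  # 3*12 + 8*4 = 68
```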
TELEMETRY TREES
Telemetry Trees are what provide the low complexity of the Telemetry Tiling algorithm.
Telemetry Trees are defined first and then related to telemetry formats. A Telemetry Tree is a
tree whose nodes are labeled with positive integers and which meets the following
criterion: if a node labeled m has c children, then each child is labeled m/c.
If the top node of a Telemetry Tree is labeled M then we say that the Telemetry Tree is of
order M. An example of order 30:

    30
    ├── 6 ── 3, 3
    ├── 6 ── 3, 3
    ├── 6 ── 2, 2, 2
    ├── 6 ── 2, 2, 2
    └── 6
The labels of a Telemetry Tree correspond to parameter sample rates. The following
algorithm describes how to generate a Telemetry Tree sample set (or simply sample set), a
set of sample rates which will correspond to a set of parameters.
1. Select (circle) a node N which has not been crossed out or circled.
2. Cross out all descendants and ancestors of N .
3. Repeat Steps 1 and 2 until all nodes have been circled or crossed out.
4. The set of circled nodes is a Telemetry Tree sample set.
Circle the leftmost 6. Cross out its descendants (the two 3's) and its ancestor (the 30).
Select the first 3 under the second 6; cross out that 6. Select the third 6; cross out
its three descendants. Select the sibling 2's under the fourth 6; cross out the fourth 6.
Select the last 6. Now circle the second 3 under the second 6. This gives a sample set of
{2, 2, 2, 3, 3, 6, 6, 6}. Another example would be to select the 30, in which case all other
nodes (all of which are descendants) would be crossed out and you would be done.
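The sample set algorithm above can be sketched directly. The Node class and variable names are ours, and the parent-child structure of the order-30 tree is read off the worked example (two 6's with 3-children, two with 2-children, one leaf 6).

```python
class Node:
    """One node of a Telemetry Tree (a sketch; class and field names are ours)."""
    def __init__(self, label, parent=None):
        self.label, self.parent, self.children = label, parent, []
        self.state = "open"  # open | circled | crossed
        if parent:
            parent.children.append(self)

    def relatives(self):
        """All ancestors and all descendants of this node."""
        rel, p = [], self.parent
        while p:
            rel.append(p)
            p = p.parent
        stack = list(self.children)
        while stack:
            c = stack.pop()
            rel.append(c)
            stack.extend(c.children)
        return rel

def circle(node):
    """Steps 1-2: circle a node, cross out its ancestors and descendants."""
    assert node.state == "open"
    node.state = "circled"
    for r in node.relatives():
        if r.state == "open":
            r.state = "crossed"

# The order-30 tree: 30 -> five 6's; the first two 6's each have two
# 3-children, the next two each have three 2-children, the last is a leaf.
root = Node(30)
sixes = [Node(6, root) for _ in range(5)]
threes = [Node(3, sixes[i]) for i in (0, 0, 1, 1)]
twos = [Node(2, sixes[i]) for i in (2, 2, 2, 3, 3, 3)]

# Replay the selections of the worked example.
circle(sixes[0])         # leftmost 6
circle(threes[2])        # first 3 under the second 6
circle(sixes[2])         # third 6
for t in twos[3:]:       # the three 2's under the fourth 6
    circle(t)
circle(sixes[4])         # last 6
circle(threes[3])        # second 3 under the second 6

everyone = [root] + sixes + threes + twos
sample_set = sorted(n.label for n in everyone if n.state == "circled")
print(sample_set)  # [2, 2, 2, 3, 3, 6, 6, 6]
```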
We now show how a sample set relates to a telemetry format. In doing so we will assume
that all parameters have the same word size. This allows us to state the format design
problem in terms of tiling the integers where each sample of each parameter is represented
by an integer. This is analogous to parameter samples being sent at a specific time. A
telemetry frame of order M is the set of integers {0, ..., M − 1}. A telemetry tile is a set of
integers of the form t = {s, s + k, s + 2k, ..., s + nk} where s, k, and n are integers. A
telemetry tiling, T, of a telemetry frame F is a set of telemetry tiles such that no two
elements of any two tiles are the same integer and all integers of, and only those
integers of, F are covered by elements of the tiles. Formally, T = {t_i}_{i=1,...,j} such that
t_i ∩ t_k = ∅ for i ≠ k, and F = ∪_{i=1,...,j} t_i, where each t_i is a telemetry tile. A telemetry tile represents a
parameter and the elements of a telemetry tile correspond to the samples of a parameter. A
telemetry tiling thus represents a telemetry format for a completely filled telemetry frame.
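The formal definition lends itself to a direct check (a sketch; function names are ours). The tiles below are those of the order-30 example worked out later in the text: three top-level 6-tiles plus the children tiles of the two crossed-out 6's.

```python
def make_tile(s, k, n):
    """The telemetry tile {s, s+k, s+2k, ..., s+n*k}."""
    return {s + m * k for m in range(n + 1)}

def is_tiling(tiles, M):
    """True iff the tiles are pairwise disjoint and cover the frame {0,...,M-1}."""
    seen = set()
    for t in tiles:
        if seen & t:
            return False    # two tiles share an integer
        seen |= t
    return seen == set(range(M))

tiles = [make_tile(0, 5, 5), make_tile(2, 5, 5), make_tile(4, 5, 5),   # three 6's
         make_tile(1, 10, 2), make_tile(6, 10, 2),                      # two 3's
         make_tile(3, 15, 1), make_tile(8, 15, 1), make_tile(13, 15, 1)]  # three 2's
print(is_tiling(tiles, 30))  # True
```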
In the following algorithm it is assumed that both the Telemetry Tree and the telemetry
frame are of order M . We will say that a node labeled n consists of n samples.
1. Associate the M samples of the top node of the tree with the M integers of the
frame.
2. Recursively associate samples of children nodes. The first sample of the first
child is associated with the first sample of the parent. The first sample of the
second child is associated with the second sample of the parent. Continue this
way for all first samples of the children. Now start associating the second
samples of each child. Thus, if there are n children then the second sample of the
i-th child is associated with the (n + i)-th sample of the parent. Continue with the other
samples of the children so that, in general, the k-th sample of the i-th child is
associated with the (kn + i)-th sample of the parent.
Using the above Telemetry Tree of order 30 this algorithm associates the six samples of
the first 6 with the tile {0, 5, 10, 15, 20, 25}. The second 6 (crossed out) is associated with
the tile {1, 6, 11, 16, 21, 26}; by recursion, its first 3-child gets the tile {1, 11, 21} and its
second 3-child gets {6, 16, 26}. The third 6 gets {2, 7, 12, 17, 22, 27}. The fourth 6
(crossed out) gets {3, 8, 13, 18, 23, 28}; recursively its 2-children get, respectively, {3,
18}, {8, 23}, and {13, 28}. The fifth 6 gets {4, 9, 14, 19, 24, 29}.
The three circled 6's of the order-30 tree thus receive the tiles {0, 5, 10, 15, 20, 25},
{2, 7, 12, 17, 22, 27}, and {4, 9, 14, 19, 24, 29}.
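The association algorithm can be sketched recursively. The nested-tuple tree encoding and names are ours; the tree is the order-30 example from the text.

```python
def assign_tiles(children, parent_positions, out):
    """Recursive sketch of the Telemetry Tree-Telemetry Stream association:
    the k-th sample of the i-th of n children takes the (k*n + i)-th
    frame position of its parent."""
    n = len(children)
    for i, (label, grandchildren) in enumerate(children):
        positions = [parent_positions[k * n + i] for k in range(label)]
        out.append((label, positions))
        if grandchildren:
            assign_tiles(grandchildren, positions, out)

# The order-30 tree, encoded as (label, children).
tree = (30, [(6, [(3, []), (3, [])]),
             (6, [(3, []), (3, [])]),
             (6, [(2, []), (2, []), (2, [])]),
             (6, [(2, []), (2, []), (2, [])]),
             (6, [])])
tiles = []
assign_tiles(tree[1], list(range(30)), tiles)
print(tiles[0])  # (6, [0, 5, 10, 15, 20, 25])
```

Running it reproduces every tile quoted in the text, e.g. {1, 11, 21} for the first 3-child of the second 6 and {3, 18} for the first 2-child of the fourth 6.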
Using this algorithm, a Telemetry Tree sample set corresponds to a telemetry tiling. By
crossing out the ancestors and descendants, the Telemetry Tree Sample Set Algorithm
guarantees that no two samples occur at the same time. The tiling is complete, satisfying
(*). This is guaranteed by the fact that the algorithm continues until all nodes are circled or
crossed out.
Not every telemetry tiling is also a Telemetry Tree sample set.[i] It can be shown, however,
that for every prime p, every tiling of a telemetry frame of order M = p^n is a Telemetry Tree
sample set. Further, it can be shown that there is only one maximal Telemetry Tree[ii] of
order p^n. In general, the number of maximal Telemetry Trees of order M, where M ≠ p^n for
any prime p, appears to grow exponentially. This explains the rationale behind the
development of the power-of-two heuristic. By making everything a power of 2, the
number of Telemetry Trees to search is exponentially decreased and all possible telemetry
tilings can be manageably generated.
We have now laid the foundation for the automatic telemetry format generation algorithm.
Recall that the objective is to take a given set of parameters and map them into a telemetry
frame.
Outputs:
1. A telemetry frame map, or DCM, i.e., when each sample of each parameter will
be sent.
2. When to actually sample each parameter.
The Algorithm:
We provide the algorithm in outline and then discuss each step in detail.
1. Find a Telemetry Tree which can include all the sample rates (this does not have
to be minimal).
2. Use the Telemetry Tree-Telemetry Stream Association Algorithm to establish
positions for each sample of each parameter in a telemetry frame of the
associated order.
3. If the frame size f is greater than or equal to the order of the Telemetry Tree
times the largest word size then expand the samples to their word size, time
merge the frames and stop; otherwise continue.
4. Collapse the sample positions by removing unused slots.
5. Expand the samples to their word sizes.
6. Calculate the actual starting bit positions for each sample. If necessary, resort the
ordering of the sample positions based on these starting positions.
7. Fit the samples into the actual frame by adding bits where a sample is being sent
before it is sampled.
Step 1. Find a Telemetry Tree which can include all the sample rates. This is not difficult.
Find a common multiple of the sample rates. Use this as the label on the second level of a
Telemetry Tree. Make enough copies of this node to accommodate all the sample rates.
For example, assume you have one hundred 7's, two 6's, and fourteen 2's as sample rates.
A common multiple is 210. You need four second-level nodes to accommodate the 7's and
one second-level node each for the 6's and the 2's. This gives you a Telemetry Tree of
order 1260. Note that this is only one method for finding such a Telemetry Tree.[iii]
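The node counting in Step 1 can be sketched as follows (the function name is ours). Each second-level node labeled with the common multiple L can hold L//r children of rate r, so the 7's need ceil(100/30) = 4 nodes while the 6's and the 2's need one node each, six in all.

```python
import math

def build_level2(rate_counts, lcm):
    """Step 1 sketch: how many second-level nodes (each labeled `lcm`) are
    needed, and the order of the resulting Telemetry Tree."""
    nodes = 0
    for rate, how_many in rate_counts.items():
        per_node = lcm // rate                 # rate-r children under one lcm-node
        nodes += math.ceil(how_many / per_node)
    return nodes, nodes * lcm                  # (node count, tree order)

# One hundred 7's, two 6's, fourteen 2's; a common multiple is 210.
print(build_level2({7: 100, 6: 2, 2: 14}, 210))  # (6, 1260)
```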
Step 3. If the frame size f is greater than or equal to the order of the Telemetry Tree M
times the largest word size Max(W) then, instead of considering samples as single integers,
expand the samples to their word size, time merge the frames and stop; otherwise
continue. This step says that if you have a fast enough transmission rate then the (*)
criterion can be satisfied; otherwise a compromise is needed.
Step 4. Collapse the sample positions by removing unused slots. This says that we now
throw away the Telemetry Tree and simply use the position calculations to establish an
ordering on the samples.
Step 5. Expand the samples to their word sizes. We are now ready to go from the
theoretical restriction of a fixed word size used by the Telemetry Tree to the reality of
mixed word sizes. That is, expand the positions from a position per sample to include the
actual bits involved.
Step 6. Calculate the actual starting bit positions for each sample. If necessary, resort the
ordering of the sample positions based on these starting positions. The first sample of each
parameter now has a well defined start time. Use this and the sample rates of the
parameters to determine the time each sample will actually be made. Interestingly, it is
possible that the theoretical ordering of the samples does not correspond to the actual
ordering after the positions are collapsed. Thus it may be necessary to resort the positions
at this point. (This is illustrated in the example below).
Step 7. Fit the samples into the actual frame by adding bits where a sample is being sent
before it is sampled. This just finalizes everything to make sure we have something to send
when we want to send it.
An Example
Let the parameters be P = {p1, p2, p3}, the sample rates be S = {5, 3, 2} in samples per
frame (spf), and the word sizes be W = {3, 4, 5}. Step 1: A common multiple is 30. Thus
the Telemetry Tree is of order 90 and has three second level nodes labeled 30. Step 2: The
association algorithm puts the samples of p1 at positions 0, 18, 36, 54, and 72; the samples
of p2 at positions 1, 31, and 61; and the samples of p3 at positions 2 and 47. Step 3: If
f = 90 * 5 = 450 then we can expand all slots to 5 bits and terminate the algorithm. This
provides a sparse telemetry stream but satisfies the (*) criterion. Step 4: Assume
f = 45 bits. Collapsing the positions gives us an ordering of p1 p2 p3 p1 p2 p1 p3 p1 p2 p1. Step 5:
Expanding the bits gives a 37 bit pattern with the starting bit for each sample in the
ordering {0, 3, 7, 12, 15, 19, 22, 27, 30, 34}. Step 6: The first sample of p1 is made at time
0 and the samples need to be spaced at every 9 bits (1/5 of a frame). Similarly, p2 starts at
bit 3 and needs to be spaced every 15 bits and p3 starts at bit 7 and needs to be spaced
every 22 ½ bits. Thus, the actual starting bit positions for the samples are {0, 3, 7, 9, 18,
18, 30, 27, 33, 36}. Note that two of the samples (the 30 and 27) are now out of order.
Switching these gives an ordering of p1 p2 p3 p1 p2 p1 p1 p3 p2 p1 . Step 7: Adding bits where
needed we get the final bit positions of {0, 3, 7, 12, (3 bits not used), 18, 22, (2 bits not
used), 27, 30, 35, 39, (3 bits not used)}.
Consider that five out of ten samples are sent when they are sampled and, in the worst
case, a sample is sent 3 bits late.[v] Further, using the power-of-two rule and the fixed word
size heuristics the frame length would have been 120 bits (8 minor frames [rows] to
accommodate the 5 spf parameter, 3 words [columns] per minor frame, 15 bits per minor
frame). Everything would have been sent on time, but the bandwidth required was more
than doubled.
Comments
Perhaps the most significant aspect of this algorithm is that, given enough bandwidth, the
(*) criterion can be met. However, given the unlikelihood of having the required
transmission rates, the Telemetry Tiling Algorithm still achieves the highly desirable goal
of wasting no bits. That is, the only bits not used are those required while waiting for a
sample to send.
The other significant aspect of this algorithm is that it is fast. The Telemetry Tree allows
the position calculations to be done in O(n) as shown in Step 2.[iv] Otherwise the processes
are mostly simple sequential (or, in the case of Step 7, cosequential) passes through a list.
Thus, the hardest computation throughout is sorting, an operation known to be O(n log n).
Frame Synchronization Words: Frame and subframe synchronization words are essentially
a confidence factor (although one is needed to start everything). An ideal telemetry system
(at both ends) would only need to establish initial synchronization and count bits
thereafter. Since this is never the case, frame and subframe synchronization words and
counters establish confidence that synchronization can be maintained and regained quickly
if lost. These synchronization words may be treated as all other parameters, with their
sample rates expressing confidence in the telemetry system[vi] and maximal signal loss
requirements.
Fixed Word Sizes: If it is necessary to treat everything with fixed word sizes, the
algorithm can still be used by simply fitting everything into the maximum word length or
by splitting parameters into 2 or more parameters. This alternative does, of course, waste
bits.
Weird Sample Rates: If test or other requirements drive unusual sample rates, especially
rates that are prime numbers, the order of the Telemetry Tree and the size of the resulting
telemetry frame will be unduly influenced. This might be grounds to violate the (*)
criterion and modify the sample rate.
CONCLUSIONS
We have shown that it is possible to design telemetry mappings with no wasted bits. We
have also demonstrated an algorithm for sending samples when they are sampled,
satisfying the (*) criterion. However, since satisfying the (*) criterion could take very large
transmission rates, we must sometimes be satisfied with mappings which minimize delay
time. The Telemetry Tiling Algorithm shows promise for doing so. Further research into
optimal forms of the algorithm is needed. Finally, we have illustrated that there are
algorithms which can intrinsically define telemetry mappings. That is, given a specific set
of inputs, a unique mapping can be generated by anyone who has the inputs and the
algorithm. This eliminates all manual effort in generating and all overhead in propagating
the mappings.
Given that algorithms exist for automating telemetry format generation, the question now is
whether or not the telemetering community can overcome the inertia of existing heuristics
like fixed word lengths and the power-of-two rule. Will telemetry frame design succumb to
the QWERTY keyboard syndrome? The modern keyboard was designed explicitly to slow
down the user to accommodate the mechanical technology of the late 1800's, and the cost
in both hardware and training to change this is prohibitive. Almost all telemetry encoders
and decoders are entrenched in the fixed word length scenario. Can the telemetry
community take the drastic step necessary to say that significant increases in the efficiency
of telemetry mappings outweigh the cost of hardware conversion?
[i] Dr. Jones has generated Jones Trees of various orders via computer to discover this
fact. It would be useful to find a characterization of all telemetry tilings.
[ii] A maximal Jones Tree has a maximal number of nodes and every node has a maximal
label.
[iii] The authors are continuing to explore methods for finding the 'minimum' Jones Tree
to accomplish this task.
[iv] A little algebra reduces this nicely. Let M be the order of the tree; thus M is the
number of samples on Level 0 of the tree. Let n_i, i = 0, ..., M/n_0 − 1, be the number of
samples on node i of Level 1 of the tree. Note that n_i = n_j for all i, j, and that M/n_0 is
the number of nodes on Level 1 of the tree. Let f(i, j), j = 0, ..., n_0 − 1, be the integer
assigned to the j-th sample of node i of Level 1 of the tree. Then f(i, j) = j(M/n_0) + i.
Let m_{i,j}, i = 0, ..., M/n_0 − 1, j = 0, ..., n_i/m_{i,0} − 1, be the number of samples on
the j-th child of the i-th node on Level 1. That is, the m_{i,j} are the labels on Level 2 of
the tree. Note that m_{i,j} = m_{i,k} for all j, k. Let g(i, j, k), k = 0, ..., m_{i,0} − 1,
be the integer assigned to the k-th sample of the j-th child of node n_i; that is, g(i, j, k)
provides the desired associations between samples and integers. Then

    g(i, j, k) = f(i, k(n_i/m_{i,0}) + j) = (k(n_i/m_{i,0}) + j)(M/n_0) + i.

[v] This lends itself to establishing metrics on a telemetry mapping. Specifically, the mean
and standard deviation of how many bits late samples are sent and the overall bit rate
required to send all samples of all parameters would be useful metrics.
[vi] This also represents confidence that the test maneuver, atmospheric, and other typically
uncontrollable factors will not impair telemetry reception.
BIBLIOGRAPHY
Samaan, Mouna M. and Cook, Stephen C., “Configuration of Flight Test Telemetry Frame
Formats,” in Proceedings International Telemetering Conference, vol. XXXI, (1995), pp.
643-650.
Strock, O.J., Telemetry Computer Systems: The New Generation, Instrumentation Society
of America, Research Triangle Park, NC (1988).
ACTS PROPAGATION EXPERIMENT AND
SOLAR/LUNAR INTRUSIONS
Christopher S. Gardner
Klipsch School of Electrical and Computer Engr.
New Mexico State University
Las Cruces, NM 88005
ABSTRACT
This paper describes the effects that solar and lunar intrusions have on statistical
analysis of the data. The NASA ACTS experiment focuses on the 20 and 27 GHz
radiometer and beacon. The experiment is currently compiling a database of the
attenuation on these channels. For 1994 our site recorded 86.5 hours of attenuation,
and for 1995 our site recorded 77 hours. The total interference time from solar/lunar
intrusions for 1994 and 1995 was, respectively, 39 hours and 38.5 hours, which is
nearly half the total attenuation due to rain and cloud fades. It is clear why this data
must be removed before any statistical analysis of the data.
KEY WORDS
INTRODUCTION
The main goal of the NASA ACTS propagation experiment is to find a relationship between
sky noise temperature and attenuation of a signal on the 20 and 27 GHz channels. Another
main objective is to build up a propagation database that can be compared to such models
as CCIR, Crane Global, etc. This is done by recording data on the 20 and 27 GHz
channels at intervals of 1/sec for the beacon, 1/sec for the sky temperature (radiometer),
and 1/6 sec for the rain gauge.
The data is processed by NMSU using ACTS software. The program produces an output
file which is used for record keeping and for further analysis of the rain data. A record
is also kept of the plots of the sky temperature and of the signal attenuation (for
both channels).
PROBLEM STATEMENT AND SOLUTION
The original protocol for processing the data produced in the experiment did not account
for sun outages and lunar intrusions. The processing protocol is described in [2]. A
solar/lunar intrusion is noticeable when the sun or moon is within one degree of the
ACTS position in the sky. The interference affects the radiometer signal in both the 20
and 27 GHz channels.
Intrusions have a noticeable pattern, unlike attenuation caused by rain, which is sporadic
and has no true shape. An intrusion is therefore easy to notice on a plot of the signal,
and the bad data can easily be removed. This way of finding solar/lunar intrusion events
is, however, time consuming and susceptible to human error. The error can be avoided
through simple prediction techniques that determine exactly when the event will occur.
The prediction of when an event will occur depends on the relative position of the
sun/moon with respect to the incoming signal as seen from the ground-based receiving
station. Sun intrusions occur twice a year for a period of about a week, due to the
earth's tilt near the equinoxes. A moon intrusion depends on the moon's rotation around
the earth and which side it is showing to the receiver during the critical angles. For
example, a new moon gives off no radiation and will not affect the signal. On the other
hand, if the moon is in its full phase, the maximum amount of interference will be seen
in the signal. See Figure 1 for an example of a solar intrusion.
To know when these intrusions will occur, NMSU created prediction software [3].
The program reports the time at which the maximum interference angle will occur. It can
be used to predict either sun or moon intrusions, which can then be used to determine
the maximum amount of interference that will occur. The programs are based on standard
astronomical formulas [4] and validated against known positions [1].
The interference from a sun intrusion is greater than that from a moon intrusion. The
effects of an average sun intrusion can be seen from about an hour before until an hour
after the predicted time of maximum interference, whereas the effects of a moon
intrusion can be seen about thirty minutes before and after its predicted maximum.
If this data is not removed, it will distort the statistics of how much rain attenuation has
actually occurred. This is because a sun or moon intrusion appears to the processing
software as an increase in sky temperature, which the software interprets as a cloud
fade. This is even more harmful at our location because, being in a desert climate, the
total rainfall is between 12 and 16 inches per year. This means there is only a small
amount of time in which we can collect any data on fading events for our database, so any
interference such as a solar/lunar intrusion will have a significant effect on our total
amount of attenuation if it is allowed to remain in our statistics.
As shown in Tables 1 and 2, it is clear by comparison that if a sun or moon intrusion is
allowed to remain as valid data, it gives a totally inaccurate picture of the total
amount of rain attenuation. Table 1 gives the raw count of how many hours have been lost
to solar/lunar intrusions since the experiment began. We found the total time lost to
this interference to be 39 hours in 1994 and 38.5 hours in 1995. Table 2 gives the
total attenuation due to rain and cloud fades. We found the total rain attenuation
to be 54.5 hours for 1994 and 36 hours for 1995. The total attenuation for cloud
and rain fades for 1994 and 1995, respectively, was 86.5 hours and 77 hours. The total
interference time is nearly half of the total rain/cloud attenuation. Therefore, if the
solar/lunar intrusions are left in, they will have a dramatic effect on any statistics
run on the data.
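The "nearly half" comparison follows directly from the yearly totals (variable names ours):

```python
# Yearly totals reported in Tables 1 and 2 (hours).
intrusion = {1994: 39.0, 1995: 38.5}     # solar + lunar interference time
attenuation = {1994: 86.5, 1995: 77.0}   # total rain + cloud fade time

for year in (1994, 1995):
    share = intrusion[year] / attenuation[year]
    print(year, round(share, 2))         # about 0.45 and 0.50
```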
Using the ACTS software, the intrusion data is marked as "bad data" and removed for the
given event. Since we must allow for even the smallest attenuation, the data must be
marked bad or removed well before the effects of the event occur. The final output file,
kept for record keeping, is then free of any solar/lunar events and is stored in
our database.
The amount of time lost to solar and lunar intrusions is significant enough to worry about
in any study wanting an accurate and correct representation of the data. Even sites such as
Florida must worry about this type of interference. Since Florida has a tropical climate,
the total rain attenuation there is much greater than the total outage time due to
solar/lunar intrusions. Even so, if the solar/lunar outage times are ignored, the
resulting data does not accurately represent reality and should not be used in any
database.
TABLES
Table 1

  Year    Total Lunar    Total Sun    Total Time
  1994    11 hours       28 hours     39 hours
  1995    10.5 hours     28 hours     38.5 hours

Table 2

  Quarter             Total Rain Time    Total Cloud Fade Time    Total Attenuation Time
  Dec 93 - Feb 94     0 hours            11 hours                 11 hours
  Mar 94 - May 94     23 hours           6 hours                  29 hours
  June 94 - Aug 94    22.5 hours         No Data                  22.5 hours
  Sep 94 - Nov 94     9 hours            15 hours                 24 hours
  Dec 94 - Feb 95     4 hours            7 hours                  11 hours
  Mar 95 - May 95     0 hours            4 hours                  4 hours
  June 95 - Aug 95    21 hours           9 hours                  30 hours
  Sep 95 - Nov 95     11 hours           21 hours                 32 hours
  Total               90.5 hours         73 hours                 163.5 hours
CONCLUSION
Solar and lunar intrusions play a major role in determining when and what you can do with
your antenna. With any movable ground-based receiver it is very important to take into
account that you cannot trust any data that contains interference from the sun or moon.
In large receivers there is also the dangerous possibility of literally frying the
components of the receiver.
In conclusion, this paper shows the significance of solar/lunar intrusions in the data, the
need to remove them, and how they ruin statistical analysis of the data. A better
understanding of the original experiment can now be achieved through the prediction and
removal of solar and lunar events.
REFERENCES
[1] Nautical Almanac Office, The Astronomical Almanac for 1990, Washington:
Government Printing Office, 1989.
[2] Feldhake, G.S., Ka Band Satellite Propagation Characteristics Using ACTS, Las
Cruces: NMSU, 1995.
[3] Horan, S., ACTS Propagation Experiment Program Sun and Moon Intrusion
Predictions, Las Cruces: NMSU, 1995.
[4] Meeus, J., Astronomical Formulae for Calculators, 4th ed., Richmond: Willmann-
Bell, 1988.
FIGURES
Figure 1. A solar intrusion event sequence. Lunar intrusions appear similar to the 10/03/94
solar intrusion event.
300 MBPS CCSDS PROCESSING USING FPGA’s
Thad J. Genrich
ABSTRACT
This paper describes a 300 Mega Bit Per Second (MBPS) Front End Processor (FEP)
prototype completed in early 1993. The FEP implements a patent pending parallel frame
synchronizer (frame sync) design in 12 Actel 1240 Field Programmable Gate Arrays
(FPGA’s). The FEP also provides (255,223) Reed-Solomon (RS) decoding and a High
Performance Parallel Interface (HIPPI) output interface.
The recent introduction of large RAM based FPGA’s allows greater high speed data
processing integration and flexibility to be achieved. A proposed FEP implementation
based on Altera 10K50 FPGA’s is described. This design can be implemented on a single
slot 6U VME module, and includes a PCI Mezzanine Card (PMC) for a commercial Fibre
Channel or Asynchronous Transfer Mode (ATM) output interface module. Concepts for
implementation of (255,223) RS and Landsat 7 Bose-Chaudhuri-Hocquenghem (BCH)
decoding in FPGA’s are also presented.
The paper concludes with a summary of the advantages of high speed data processing in
FPGA’s over Application Specific Integrated Circuit (ASIC) based approaches. Other
potential data processing applications are also discussed.
KEY WORDS
INTRODUCTION
In late 1992, Hughes funded a 6 month research and development project to demonstrate
the feasibility of 300 MBPS Consultative Committee for Space Data Systems (CCSDS)
telemetry data processing. The resulting FEP prototype provides frame sync, RS error
correction, and HIPPI output interface functions for 300 MBPS CCSDS processing. A
patent pending frame sync design was implemented along with other data formatting
functions in twelve 4000 equivalent gate FPGA’s.
Altera has recently introduced 50,000 and 100,000 equivalent gate FPGA’s. One of the
50,000 gate devices is capable of implementing all of the FEP frame sync and data
formatting functions. The capabilities of these devices also make them suitable for other
types of high data rate processing functions.
Switches and storage devices supporting more efficient high data rate standards, such as
Fibre Channel and ATM, are now available. Interfaces supporting these standards are
beginning to become available as PCI Mezzanine Cards (PMC’s). Backplane crosspoint
switch modules supporting the RACEway standard are also available from Mercury
Computer Systems as standard products. This interface allows communication between
VME modules in the same chassis at rates up to 1280 MBPS.
These new developments allow implementation of a 300 MBPS FEP on a single slot 6U
VME module. They also provide the power to perform higher rate processing, the
flexibility to implement other data processing algorithms, the communication capabilities
to interface to standard high data rate networks, and the ability to perform multiple
algorithms in the same chassis.
The frame sync was required to synchronize to all combinations of true/inverse and
forward/reverse data, and output true forward code blocks to the RS decoders. A fixed
codeblock interleave of 5 was specified, which resulted in a 1275 byte frame, not including
the 4 byte synchronization pattern. A standard search/check/lock/flywheel algorithm was
specified, with two separate selections for 0 to 31 tolerated synchronization pattern errors
in search/check and lock/flywheel states. Two separate selections of 0 to 15 consecutive
synchronization patterns were specified for transition from check to lock (received) and
flywheel to search (missed). A selection of 0 to ±3 allowable bits between consecutive
synchronization patterns was also required. Selectable derandomization per the CCSDS
standard was also specified.
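The search/check/lock/flywheel algorithm specified above can be modeled as a small state machine. The sketch below is illustrative only, not the patented FEP design; the class and parameter names are hypothetical, and the threshold values are simply examples within the selectable ranges given in the text.

```python
# Illustrative model of a standard search/check/lock/flywheel frame sync
# state machine. Threshold values are examples within the selectable
# ranges described in the text (0-31 pattern errors, 0-15 consecutive
# patterns); names are hypothetical, not taken from the FEP design.

SEARCH, CHECK, LOCK, FLYWHEEL = "SEARCH", "CHECK", "LOCK", "FLYWHEEL"

class FrameSync:
    def __init__(self, err_search=3, err_lock=5, to_lock=2, to_search=2):
        self.err_search = err_search  # tolerated errors in SEARCH/CHECK (0-31)
        self.err_lock = err_lock      # tolerated errors in LOCK/FLYWHEEL (0-31)
        self.to_lock = to_lock        # consecutive hits for CHECK -> LOCK (0-15)
        self.to_search = to_search    # consecutive misses for FLYWHEEL -> SEARCH
        self.state = SEARCH
        self.count = 0

    def step(self, pattern_errors):
        """Advance the state machine with the error count of one candidate
        sync pattern observed at the expected frame boundary."""
        if self.state == SEARCH:
            if pattern_errors <= self.err_search:
                self.state, self.count = CHECK, 1
        elif self.state == CHECK:
            if pattern_errors <= self.err_search:
                self.count += 1
                if self.count >= self.to_lock:
                    self.state = LOCK
            else:
                self.state = SEARCH
        elif self.state == LOCK:
            if pattern_errors > self.err_lock:
                self.state, self.count = FLYWHEEL, 1
        elif self.state == FLYWHEEL:
            if pattern_errors <= self.err_lock:
                self.state = LOCK
            else:
                self.count += 1
                if self.count >= self.to_search:
                    self.state = SEARCH
        return self.state
```

With the example thresholds, two consecutive clean patterns carry the machine from search through check into lock, and two consecutive misses drop it back to search via flywheel.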
Time stamping capability was also required to track the time of arrival of each frame. A
HIPPI output interface was specified because it was the only standard supported by
commercial computer interfaces in 1992 that allowed throughput of the 300 MBPS data.
FEP PROTOTYPE IMPLEMENTATION
To eliminate additional connectors and reduce overall design effort, the FEP prototype is
implemented on a single 16” x 16” 10 layer Printed Circuit Board (PCB). The design
utilizes 117 integrated circuits which were all commercially available in late 1992. A block
diagram of the FEP is shown in Figure 1.
The heart of the design is a patent pending frame sync design. The design utilizes data that
is demultiplexed (converted from a serial to parallel format) to greatly reduce the
processing clock rate. In the case of the FEP, a 1 to 8 demultiplexer (demux) is used to
reduce the serial input clock rate of 300 MHz maximum to an 8 bit parallel processing rate
of 37.5 MHz maximum. The demux front end functions are implemented using Motorola
100K ECLInPS logic.
There are several difficulties associated with the demultiplexed frame sync approach,
which were all successfully addressed in the FEP prototype design. Since input data is
processed 8 bits at a time, 8 times as many correlators must be used to detect the frame
sync pattern, each examining one of the 8 possible input bit shifts. The frame sync must
include logic to arbitrate the possibility of multiple simultaneous pattern recognitions.
Frame boundaries must be defined by both clock cycle counts and bit shifts. Input data
must be pipelined to compensate for the delays through the correlators and frame sync
algorithm. Output data must be shifted for proper alignment when a frame boundary does
not correspond to a demultiplexed byte boundary.
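The byte-wide correlation problem can be pictured as a bank of eight correlators, each testing one of the eight possible bit alignments of the 32-bit pattern against every input byte. The following is a software model under that assumption, using the CCSDS attached sync marker 0x1ACFFC1D as the pattern; it illustrates the principle only and is not the FEP hardware design.

```python
# Software model of demultiplexed correlation: with byte-wide (8-bit)
# input, eight "correlators" each test one of the eight possible bit
# alignments of the 32-bit sync pattern per input byte.

SYNC = 0x1ACFFC1D  # CCSDS attached sync marker

def correlate_byte_stream(data: bytes, max_errors=0):
    """Return (byte_index, bit_shift) locations where SYNC is found."""
    hits = []
    # Pack the stream into one wide integer so shifted windows are easy.
    stream = int.from_bytes(data, "big")
    total_bits = 8 * len(data)
    for byte_i in range(len(data) - 4):            # one step per input byte
        for shift in range(8):                      # the 8 parallel correlators
            start = 8 * byte_i + shift
            window = (stream >> (total_bits - start - 32)) & 0xFFFFFFFF
            errors = bin(window ^ SYNC).count("1")  # Hamming distance
            if errors <= max_errors:
                hits.append((byte_i, shift))
    return hits
```

A hit reported as (byte index, bit shift) corresponds directly to the clock-cycle count and bit shift that, as noted above, together define a frame boundary in the demultiplexed domain.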
The FEP frame sync utilizes 8 correlators to search for both forward true and forward
inverted frame sync patterns. Only 8 correlators are required, since bits in error relative to
a true reference pattern are correct relative to the inverse of the pattern. Since the FEP
correlators are 32 bits long, the number of errors relative to the inverse pattern is 32 minus
the number of errors relative to the true pattern. The FEP also includes 8 additional
correlators to search for reverse true and reverse inverted frame sync patterns. Two
correlators and associated subtraction/comparison logic are implemented in each of 8
FPGA’s. Another FPGA is used to select one of the two possible error threshold values
and format the result for output to the correlator FPGA’s.
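The subtraction trick described above can be demonstrated directly: for a 32-bit correlator, the Hamming distance of a window to the inverted pattern is always 32 minus its distance to the true pattern, so a single correlator with one subtractor serves both polarities. A minimal sketch:

```python
# Demonstration of the true/inverse correlator property: bits in error
# relative to the true pattern are correct relative to its inverse, so
# errors_inverse = 32 - errors_true for a 32-bit correlator.

PATTERN = 0x1ACFFC1D   # CCSDS attached sync marker
MASK32 = 0xFFFFFFFF

def true_and_inverse_errors(window):
    errs_true = bin((window ^ PATTERN) & MASK32).count("1")
    errs_inv = 32 - errs_true  # equals popcount(window ^ ~PATTERN)
    return errs_true, errs_inv
```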
A dual port RAM with its input and output addressing controlled by another FPGA is
used for the data buffering function. The input port address increments on each clock cycle
to store new input data words. When a forward frame sync pattern is recognized, the
current value of the RAM input address is latched. An offset representing the delay of the
correlator and the frame sync is subtracted to obtain a starting output address. When a
reverse frame sync pattern is recognized, a similar process is used to generate a starting
address, and the data is read out of the RAM in a reverse order.
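The buffering scheme just described can be sketched as a circular buffer whose write address is latched on sync recognition and backed off by a fixed pipeline offset to locate the frame start. Sizes and the offset value below are hypothetical, chosen only to illustrate the addressing arithmetic.

```python
# Sketch of the dual-port RAM buffering scheme: input words stream into
# a circular buffer; on sync recognition the write address is latched and
# a fixed pipeline delay subtracted to obtain the frame start address.
# RAM size and delay are illustrative values, not the FEP's.

RAM_SIZE = 4096    # words; power of two so addresses wrap with a mask
PIPE_DELAY = 6     # correlator + frame sync pipeline delay, in words

class FrameBuffer:
    def __init__(self):
        self.ram = [0] * RAM_SIZE
        self.waddr = 0

    def write(self, word):
        self.ram[self.waddr] = word
        self.waddr = (self.waddr + 1) & (RAM_SIZE - 1)

    def frame_start(self):
        """On sync recognition: latched write address minus the
        pipeline delay, modulo the RAM size."""
        return (self.waddr - PIPE_DELAY) & (RAM_SIZE - 1)

    def read_frame(self, start, length, reverse=False):
        words = [self.ram[(start + i) & (RAM_SIZE - 1)] for i in range(length)]
        return words[::-1] if reverse else words
```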
FIGURE 1 FEP BLOCK DIAGRAM
[Block diagram residue: clock/data inputs, 1:8 demux, decode/format and data output control stages, RS decoders, time stamp, and HIPPI output blocks.]
A data formatting FPGA performs bit shift and inversion functions on RAM output data. It
also implements a selectable derandomization function.
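The selectable derandomization follows the CCSDS convention: data are XORed with a fixed pseudo-random sequence generated by the polynomial h(x) = x^8 + x^7 + x^5 + x^3 + 1 with the shift register seeded to all ones. The hardware realization is not described in the paper; the following is a software model of the standard sequence.

```python
# Software model of the CCSDS pseudo-random sequence used for
# (de)randomization: h(x) = x^8 + x^7 + x^5 + x^3 + 1, register seeded
# to all ones. The sequence begins FF 48 0E C0 ...

def ccsds_prn_bytes(n):
    """Return the first n bytes of the CCSDS pseudo-random sequence."""
    reg = [1] * 8              # reg[0] holds the oldest bit
    out = []
    for _ in range(n):
        byte = 0
        for _ in range(8):
            bit = reg[0]       # output the oldest bit, MSB first
            new = reg[7] ^ reg[5] ^ reg[3] ^ reg[0]
            reg = reg[1:] + [new]
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def derandomize(data: bytes) -> bytes:
    """XOR the data with the PRN sequence; applying it twice restores
    the original data, since randomize and derandomize are identical."""
    prn = ccsds_prn_bytes(len(data))
    return bytes(d ^ p for d, p in zip(data, prn))
```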
Another FPGA implements all the remaining frame sync functions. This includes
check/lock duration and bit slip functions, as well as interfaces to the other frame sync
FPGA’s.
The FEP RS decoder is implemented with 5 Advanced Hardware Architecture 4600 chip
sets. These devices are now obsolete because they are based on a wafer fabrication
process that is no longer supported.
The time stamp logic accepts a 54 bit parallel Binary Coded Decimal (BCD) time code
from a 60 pin ribbon cable input and appends it to each received frame prior to output.
This input is compatible with the parallel output port of the Odetics AITG-PC time
code generator module for IBM Personal Computer (PC) compatible machines.
A transmit only HIPPI interface is used for data output. This interface is implemented with
an AMCC S2020 HIPPI source device.
The test results showed that the FEP reliably meets all specifications at data rates from 1
MBPS to 350 MBPS.
FUSION™ MODULE
A single slot width 6U VME module is being developed which is capable of supporting the
FEP or a variety of other signal and data processing applications. The module is called
FUSION to indicate the breadth of application capability available in a single module. A
block diagram of the FUSION module is shown in Figure 2.
The heart of the FUSION module is four Altera FLEX 10K RAM based FPGA’s. These
devices are connected in a mesh arrangement that allows maximum board level routing
flexibility. Since the devices are FPGA’s, the function and direction of each connection is
defined by the particular FPGA configuration. Each FPGA also has an associated dual port
RAM for data storage.
The module has the capability of mounting 2 PMC’s. These modules can be either
standard commercial modules utilizing a full PCI bus, or custom modules compliant with
the PMC mechanical specifications that implement a custom interface. The FPGA
associated with a particular PMC is configured with the appropriate logic interface for that
device.
FIGURE 2 FUSION™ BLOCK DIAGRAM
[Block diagram residue: control logic, PLD/flash configuration memory, and four FPGAs each paired with a dual port RAM.]
The VME interface allows software configuration, control, status, and low to moderate
rate data transfers to/from an external VME based computer.
The high data rate I/O interfaces necessary to implement the FEP on a FUSION module
can be provided on 2 PMC’s. One custom PMC will implement the FEP serial to parallel
(demultiplexer) and parallel to serial (multiplexer) test functions utilizing a custom
interface. The second PMC will be a standard commercial Fibre Channel or ATM
interface to a standard PCI bus.
The PMC1 FPGA will be configured to perform the frame sync function. The 300 MBPS
FEP frame sync design utilizes an estimated 29,100 gates of logic. Capability to process
reverse data is unlikely to be required for current systems, since solid state memories allow
data playback in the forward direction. Elimination of this requirement will decrease the
number of true/inverse correlators from 16 to 8, and the FEP frame sync to approximately
18,000 gates. One Altera 10K50 FPGA contains approximately 36,000 gates of
configurable logic apart from RAM. One device will contain a forward only 300 MBPS
frame sync at approximately 50% utilization, or a 300 MBPS forward/reverse frame sync
at approximately 81% utilization.
The VME FPGA will perform error correction (if required) and VME interface functions.
Error correction can also be provided by a NASA RS Error Correction (RSEC) ASIC
mounted on the I/O mezzanine module. The P2 FPGA will be used to provide the TIB test
data generation function. A PCI interface will be implemented in the PMC2 FPGA to
provide frame sync data to the commercial PMC.
600/1200 MBPS FEP FUSION™ CONCEPTS
Increasing the number of demux output bits requires a corresponding increase in frame
sync logic. Assuming a linear relationship between demux output width and logic
requirements, a 600 MBPS forward frame sync would require approximately 36,000 gates,
and a 1200 MBPS frame sync would require approximately 72,000 gates.
It has been shown that a FUSION module populated with four 36,000 equivalent gate
Altera 10K50’s can process 300 MBPS data. If on board error correction requirements are
reduced through the use of RSEC ASIC’s on the I/O mezzanine, error correction on a
separate FUSION module, or processing of unencoded data, the frame sync function can
be split between the PMC1 and VME FPGA’s. This would allow implementation of a 600
MBPS FEP on a 10K50 based FUSION module.
Processing of 1200 MBPS and higher rates can be implemented more efficiently using the
62,000 equivalent gate 10K100 FPGA. Two of these devices can readily contain the
estimated 72,000 gate frame sync logic. Assuming no on board FPGA based error
correction, this data rate can be supported on a 10K100 based FUSION module.
A CCSDS RS decoder operating at 150 MBPS can be hosted in a single FPGA. The
design would implement the required syndrome generation, error location polynomial
calculation, Chien search, and error value/correction algorithms. Logic requirements for
the design are estimated to be 46,813 gates, or 75.5% of the available gates in a 10K100
FPGA.
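The first of the decoder stages named above, syndrome generation, can be sketched in software for an RS(255,223) code. Note this is a structural sketch only: the primitive polynomial 0x11D and the root offset used here are common illustrative choices, whereas the CCSDS code uses a different field representation (with a dual-basis symbol mapping) and a different set of generator roots.

```python
# Sketch of RS(255,223) syndrome generation over GF(2^8) using log/antilog
# tables. Primitive polynomial 0x11D and roots alpha^1..alpha^32 are
# illustrative; the actual CCSDS code uses a dual-basis representation.

PRIM = 0x11D
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = EXP[i + 255] = x   # duplicate so exponent sums need no modulo
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received, num=32):
    """S_j = r(alpha^j): evaluate the received polynomial at the first
    `num` powers of alpha. All-zero syndromes mean no detected errors."""
    synd = []
    for j in range(1, num + 1):
        s = 0
        for coeff in received:     # Horner evaluation at alpha^j
            s = gf_mul(s, EXP[j]) ^ coeff
        synd.append(s)
    return synd
```

In hardware the 32 syndrome accumulators run in parallel, one multiply-accumulate per received symbol, which is what makes the function a good fit for an FPGA.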
A 300 MBPS Landsat 7 BCH decoder can also be implemented in a single FPGA. The
demultiplexed format requires the implementation of 8 parallel syndrome generation, error
location polynomial calculation, and Chien search/error correction algorithms. Logic
requirements for this design are estimated to be 26,550 gates, which is 73.8% utilization of
a 10K50, or 42.8% utilization of a 10K100.
FPGA’s are particularly well suited to implementing high speed, repetitive, integer based
algorithms. Some algorithms which could be implemented with the FUSION module
include lossless (Rice) decompression, data manipulation algorithms (e.g. windowing), and
other data processing algorithms (e.g. averaging, limiting, filtering).
A chassis could be configured with one or more FUSION modules to perform frame sync
and error correction functions. Several additional FUSION modules would be included in
the chassis to provide configurable data processing and fixed output (PMC) functions.
These modules would receive their input data over a backplane RACEway interface. This
subsystem would be capable of implementing a variety of algorithms on command, with a
flexible allocation of algorithms to particular modules and output interfaces.
There are a number of advantages to the use of FPGA’s on a flexible platform such as
FUSION over the common practice of implementing custom ASIC’s on custom modules.
Non-recurring development costs for custom chips and modules are eliminated. The non-
recurring checking, mask, and test set-up costs for a custom gate array can be well over
$100K. Any errors found after chip or board fabrication will require another set of non-
recurring charges to correct. A RAM based FPGA has no non-recurring charges or risk of
repeated charges.
Custom chips and modules often require additional non-recurring hardware related
expenditures to adapt to new requirements or correct system level incompatibilities.
Modification of a RAM based FPGA design requires minimal non-recurring hardware costs.
The recurring cost of an ASIC implementation can be high due to large minimum purchase
quantities from an ASIC foundry. These quantities can be in the hundreds of devices,
potentially creating a situation where a hundred devices must be purchased for each one
used. Since FPGA’s are commercial devices used for a wide variety of applications, they
can easily be purchased in small quantities.
The FUSION module approach also provides sparing and maintenance advantages over a
custom module approach. When separate FUSION modules are used to perform different
functions in a unit or system, only one module type need be spared and maintained.
Another advantage of the FUSION module when used in conjunction with RACEway is
the ability to increase the functionality of a fielded unit without extensive re-work. By
initially providing spare VME slots and RACEway backplane compatibility in a unit,
additional functions can be provided through installation of more FUSION modules which
will communicate with existing functions using the RACEway interconnect.
CONCLUSIONS
FPGA’s can be used to efficiently implement a wide variety of high speed data processing
functions, including frame sync, error correction, and data manipulation. The FUSION
module is a flexible and low cost approach to implementing these functions.
ACKNOWLEDGMENTS
Bill Browning & Tony Calio, managers responsible for FEP prototype funding
Ken Kucharyson, author of the FEP prototype specification
Rick Davis & Mark Hall, co-inventors of the frame sync design
Ron Lewis & Tim Schiefelbein, developers of the FUSION module
NATIONAL GUARD DATA RELAY
AND
THE LAV SENSOR SYSTEM
June Defibaugh
DESA/S&P
2251 Wyoming Blvd., SE
Kirtland AFB, NM 87117-5609, USA
[email protected]
And
Norman Anderson
DESA/BALL
2251 Wyoming Blvd., SE
Kirtland AFB, NM 87117-5609, USA
[email protected]
KEY WORDS
Multi-spectral imaging, Image Relay, Data Relay, COTS, NGB, DESA, MSTAR, LORIS,
Drug Trafficking.
INTRODUCTION
In support of its counterdrug mission, the National Guard Bureau (NGB) needed a sensor
suite for the detection, classification, and identification of ground targets on foot or in
vehicles, without the need for highly trained operators. The LAV Sensor System (LSS)
was designed primarily for deployment in
high density drug trafficking areas along the northern and southern borders using
commercial-off-the-shelf and government-off-the-shelf equipment for all primary
subsystems. The LSS sensors include a ground surveillance pulse doppler Radar, a long-
wave infrared (IR) camera, and a visible light color camera.
The Radar system, a Manportable Surveillance and Target Acquisition Radar (MSTAR),
can detect moving targets on the ground at distances greater than 24 km. The IR camera,
an Inframetrics IRTV-445L Long Range Infrared System (LORIS), can detect human- or
vehicle-sized heat sources at distances greater than 10 km. The visible light sensor is a
commercially available charge-coupled device (CCD) camera with a remotely controlled
28-power telephoto lens and a 1 lux sensitivity. The visible light camera is mounted with
the LORIS on a common pan/tilt head assembly.
To reduce operator workload and support data relay, a pair of industrial grade, Intel 486-
based computer systems connect to the sensors and provide a number of features. One
computer retrieves Radar target tracks and automatically overlays them on a
commercially available map display application. Once the sensors are registered on the
system map, the operator can designate targets on this display and the coordinates are
automatically translated into pointing instructions for the camera system. These
instructions are passed to the second computer, which commands the cameras to point at
the same geographic location designated by the operator. These locations could be Radar
targets or any other feature of interest. This process is used to aid in the target
identification and classification responsibilities of the National Guard units. These
computers also allow independent, manual operation of the sensor systems through a
virtual control panel. Once information of interest is identified by the system operators, the
computer systems digitally capture the images and Radar tracks, which can be relayed to a
command post for further dissemination.
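The coordinate-to-pointing translation described above can be sketched as a bearing calculation from the surveyed camera position to the designated target. The function below is illustrative only; the actual LSS software is not described at this level in the paper, and a spherical-earth bearing is adequate for a few-kilometer pointing baseline.

```python
# Hedged sketch of translating a designated map coordinate into a camera
# pan command: the initial great-circle bearing from the camera site to
# the target. Names are illustrative, not from the LSS software.

import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees (0 = north, 90 = east) from point 1
    to point 2, using a spherical-earth approximation."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
```

In the fielded system this pan angle would be referenced to the opening bearing established with the lensatic compass during setup, with a corresponding tilt angle derived from range and relative elevation.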
The radio links for the remotely controlled cameras use two separate commercially
available subsystems: an S-band video link and a spread spectrum transceiver. The video
link is channelized in 1 MHz steps over the 2.2-2.3 GHz band. It accommodates a
National Television System Committee (NTSC) color video signal and two 5 kHz-wide
audio tracks. The spread spectrum transceiver is a direct sequence spread spectrum
system that operates at up to 64 kbps using a synchronous protocol or 19.2 kbps using an
asynchronous protocol. This link currently operates at 9.6 kbps to accommodate data
transfer limitations in the camera motion controller. The frequency range of this system is
902-928 MHz. These two RF links have adjustable transmit power levels and
accommodate links of up to five kilometers between the cameras and the control site.
The data relay system is based on a commercial direct sequence spread spectrum
networking transceiver operating over the 902-928 MHz range. It is connected to the two
computer systems by a commercially available multiplexer that combines two
bidirectional 9.6 kbps asynchronous data channels and two full duplex voice channels
onto the 64 kbps digital link. The data channels support relay of images and Radar tracks while
the voice links are used for operations control. The data relay system has adjustable
transmit power levels and accommodates up to a 30 kilometer long link between the
control site and the command post. Greater ranges are possible by replacing the 5-element
Yagis with higher gain antennas.
The current configuration of the system allows relay of 1-2 high resolution digitized
images per minute, and updating of the Radar target map in near real time. An upgrade of
this capability has been demonstrated using commercially available video teleconferencing
software to replace the manual image and map capture and transfer function. This upgrade
allows updates of the map once a second, and updates of the video imagery at up to ten
frames per second, all with no operator intervention. This upgrade also allows greater
interaction between the command post and the control location, and even supports manual
sensor operation directly from the command post.
Hypothetical scenario: National Guardsmen pack all needed equipment into the back of a
vehicle and head to a mission location. They drop off the necessary equipment at the radar
and thermal imager/color camera sites which are located about a mile apart along a
ridgeline. They park the vehicle behind the ridgeline about 1000 feet from the radar site,
and then participate in setting up the equipment for an operation. Setup takes about a half
hour.
A guardsman takes a handheld GPS unit to the radar site, where he sets up the Radar on
its tripod. He is careful to level the tripod, and uses a lensatic compass to determine the
initial pointing (opening bearing) of the Radar dish. He uses the handheld GPS unit to
determine the latitude and longitude of the radar. After connecting the battery to the
radar, he attaches the thin control cable and unreels it to the operation site, where he
connects the other end to the radar control display.
Other guardsmen go to the thermal imager/color camera site and set both sensors up on
one tripod. The tripod is carefully leveled, and a lensatic compass is used to determine
initial pointing. The handheld GPS is used once again to determine the latitude and
longitude of the thermal imager and color camera. The tripod for the antennas is set up
and the antennas attached. The guardsmen connect the cables between the control case
battery, antennas, and sensors. After connecting the battery to the power connection, they
point the antennas in the direction of the operation site and rotate them until peak signal
is obtained. The guardsmen then return to the operations site.
Back at the operations site, the GPS coordinates taken at each of the remote sensor sites
are entered into the GPS/mapping computer. Sensor control is checked by sending
commands from the computer to the sensors. The radar is checked to see that it is
operating properly, and a guardsman verifies that video is being received from the
imaging sensor. Finally, since in this operation information is being sent to a remote
command center via a spread spectrum link (voice, GPS/map bitmaps, and still-frame
video images), that link is checked for proper operation. The system is now operational.
CONCLUSION
Field testing of the system prototype in the summer of 1995 indicates that the LSS will
provide a significant new data collection and transfer capability to the National Guard in
its control of illegal drug traffic across the U.S. borders. Through the use of commercial,
off-the-shelf technology, a sophisticated sensor control and data collection capability has
been provided to the illegal drug interdiction process at a very reasonable cost. Additional
uses of this technology are currently under review; they comprise other remote
surveillance activities, including monitoring of protected cultural and archeological sites.
Alan Cameron
Tony Cirineo
Karl Eggertsen
ABSTRACT
The objective of the FIRST project is to define a modern DoD Standard Datalink
capability. This defined capability, or standard, is to provide a solution to a wide variety
of test and training range digital data radio communications problems with a common set
of components, flexible enough to fit a broad range of applications, yet affordable in all of
them. This capability is to be specially designed to meet the expanding range distances
and data transmission rates needed to test modern weapon systems. Presently, the primary focus of
the project is more on software, protocols, design techniques and standards, than on
hardware development. Existing capabilities, ongoing developments, and emerging
technologies are being investigated and will be utilized as appropriate. Modern
processing-intensive communications technology can perform many complex range data
communications tasks effectively, but a large-scale development effort is usually necessary
to exploit it to its full potential. Yet, range communications problems are generally of
limited scope, so different from one another that a communication system applicable to all
of them is not likely to solve any of them well. FIRST will resolve that dilemma by
capitalizing on another feature of modern communications technology: its high degree of
programmability. This can enable custom-tailoring of datalink operation to particular
applications, just as a PC can be tailored to perform a multitude of diverse tasks, through
appropriate selection of software and hardware components.
KEY WORDS
Digital Data Links, Range Applications, Test and Training, Modular Design, Multiple-
Access, Vehicle Control, Status Reporting, TSPI
INTRODUCTION
FIRST is an attempt to solve a wide variety of test and training range communications
problems by defining a standard datalink capability that is flexible enough to fit a broad
range of applications yet affordable in all of them. Modern processor-based communications
technology is so powerful that it is difficult to implement piecemeal; a large-scale
development effort is necessary to exploit it to its full potential. Yet, range
communications problems are so varied it would seem that a system applicable to all of
them is not likely to solve any of them well. FIRST is attempting to resolve that dilemma
by capitalizing on another feature of modern communications technology: its high degree
of programmability. This would enable custom-tailoring to individual applications, just as a
PC can be tailored to perform a multitude of diverse tasks, through appropriate selection of
software and hardware components.
Based upon the initial design concept, transceivers will be reconfigurable to provide a
variety of rates, link reliabilities, and special capabilities, by varying hardware, software
and firmware. Variations in system-level software configuration would allow different
capabilities to be traded off with one another to satisfy changing demands, so, for example,
maximum capacity would be attainable for one test, minimum latency for another. It would
even be possible to reconfigure installations dynamically during missions, to accommodate
unanticipated and changing communications requirements. System operation would be
highly automated, to allow such modifications to be established at the operational planning
level, and implemented below that level with minimal technical support. In all installations,
cost would be controlled by allowing ranges to procure only those capabilities needed for
their particular applications, and not have to pay for features they do not need.
BACKGROUND
Test and training ranges have many diverse needs for radio-based digital data
communications. Command and control applications generally require very high link
reliability (e.g., command destruct links for range safety). Real-time surveillance of assets
can necessitate a very large number of individual reporting units, as in a large-scale ground
training exercise. Weapons testing involves a small number of units, but with very high
position accuracy and updating rate requirements. Many vehicle control applications
require very low latency (i.e., net delay through the communication system). These needs
have traditionally been addressed through the development of many different digital data
communication systems, each optimized to its own particular application, with few
common components, little opportunity for sharing of equipment among ranges, and unique
electromagnetic compatibility (EMC) and frequency management problems at every
installation. Development of common datalink equipment to serve these needs could
facilitate increased compatibility and interoperability among assets from different ranges,
and provide large cost savings through economies of scale in development, production and
life-cycle support.
The FIRST program was started in 1992 at Naval Air Warfare Center, Weapons Division
(NAWCWPNS), Pt. Mugu, CA, with Central T&E Investment Program (CTEIP) funding,
to investigate the concept of a general-purpose test and training range data link. A survey
of range data link needs led to the conclusion that the major portion of range digital radio
communications needs were amenable to a common system electrical design solution. A
modular electrical and mechanical design approach also appears to enhance the
tailorability of such a system, allowing individual ranges to selectively procure only those
system elements required for their own applications, and therefore not pay for capabilities
they do not require.
Initial system definition studies resulted in the following goals, on which the FIRST
program remains focused:
• The primary goal is affordability. A common datalink must be so affordable for the
various users that it will be difficult to justify not using it.
• To be affordable, a common datalink must be produced in volume. This will occur only
if it can be applied to a wide variety of range communications applications.
• To remain affordable, a common datalink must be tailorable so it need not meet all
requirements simultaneously. Its design must allow the various capabilities to be traded
off against one another in different ways in different installations.
A laboratory test bed transceiver was developed at NAWCWPNS, which verified the
premise that a modular design approach was practical, and demonstrated the capability to
emulate several types of data link transceivers with affordable, off-the-shelf components.
REQUIREMENTS
The recent Requirements Survey led to the following set of capabilities and features
generally needed for many applications by most ranges:
Configuration - Most applications involve multiple mobile users, all transmitting data to a
centralized ground-based control facility, which may employ several ground transceiver
sites. Ground-to-user data transmission is also required, though generally of lower volume,
depending on the application. There are also some requirements for direct user-to-user data
transmission, particularly when users are beyond range of ground facilities. In many
applications, it is necessary to initially configure and occasionally reconfigure individual
transceivers automatically over the datalink during the mission.
Capacity - In terms of net information transmission rate through the system (throughput),
most ranges require between one and four hundred kbps, on the assumption that several
missions are likely to be in progress simultaneously. The maximum required throughput is
around 1 Mbps, based on several simultaneous large-scale operations including drone
control.
Coverage - Ranges generally require coverage of all airspace within their boundaries and
a significant fraction of their surface areas. Coverage of airspace beyond range boundaries
is usually also required to line-of-sight limits (typically 150 nm), extendible to 450-500 nm
through airborne relaying.
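The line-of-sight limits quoted above follow directly from the standard 4/3-earth radio-horizon approximation, d [nmi] ≈ 1.23 √h [ft] per terminal, with a relay link spanning the sum of the two horizons. A quick check of the quoted figures:

```python
# The 4/3-earth radio-horizon approximation: d [nmi] = 1.23 * sqrt(h [ft]).
# A relayed link spans the sum of the two terminals' horizons, which is
# consistent with the 450-500 nm airborne-relay figure quoted in the text.

import math

def radio_horizon_nmi(alt_ft):
    return 1.23 * math.sqrt(alt_ft)

def max_link_nmi(alt1_ft, alt2_ft):
    return radio_horizon_nmi(alt1_ft) + radio_horizon_nmi(alt2_ft)
```

Two aircraft at 40,000 ft, for example, give roughly 246 nmi of horizon each, or about 492 nmi relayed, squarely within the 450-500 nm figure above.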
Quality - Since multiple-access data transmission systems transfer data in block messages,
transmission quality is usually expressed in terms of message acceptance rate (MAR), the
probability that a block data message is properly received on the first transmission attempt,
and undetected error rate, the probability that a message presented at the system output
contains one or more errors introduced by the system. MAR is a function of link power,
antenna gains, propagation losses, and error-correction coding effectiveness, and generally
must be 0.95 or better. In some applications, MAR in excess of 0.99 is necessary, and in
situations where retransmission (acknowledgment) procedures are appropriate, probability
of eventual successful message delivery must be still higher. Undetected error rate depends
on the error-detection process employed, and must be lower than 10⁻⁷ in some applications.
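The MAR figure defined above can be computed for a simple channel model: with independent bit errors at probability p over an n-bit message, and a code correcting up to t bit errors, MAR is a binomial tail sum. The parameters below are hypothetical, chosen only to show how coding effectiveness enters the requirement.

```python
# Illustrative MAR calculation: probability that an n-bit message with
# independent bit error probability p is accepted on the first attempt,
# when up to t bit errors are correctable. Parameters are hypothetical.

from math import comb

def mar(n_bits, p_bit, t_correctable=0):
    return sum(comb(n_bits, i) * p_bit**i * (1 - p_bit)**(n_bits - i)
               for i in range(t_correctable + 1))
```

For a 1000-bit message at a bit error rate of 10⁻⁴, an uncoded link falls short of the 0.95 goal, while correcting just two bit errors per message comfortably exceeds 0.99, which is the sense in which the text ties MAR to error-correction coding effectiveness.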
Latency - Delay through the system must be minimized, particularly in vehicle control
applications. The most demanding of these constrains round-trip delay to no more than 50
milliseconds.
Security - The need for COMSEC protection of data transmissions on both uplinks and
downlinks was widely expressed, along with the need to support over-the-air re-keying of
the cryptographical devices providing it. Some applications do not require it, and have
traditionally not employed it due to cost.
Size - Size constraints on transceivers for use in certain vehicles and manpacks were
noted. In many cases, these correlate with less severe coverage requirements, allowing
lower transmitter power.
Prime Power - A wide variety of prime power sources must be accommodated, from 400
Hz aircraft power to battery packs. (Applications requiring batteries can generally tolerate
low RF output power.)
Affordability - Low-cost participant packages: production cost goals are $2.5K for
manpacks and ground vehicles, and $25K for flight-qualified units.
TECHNICAL DESIGN
Initial system design analysis based on the above requirements has identified several key
system characteristics. Since omni-directional antennas must be employed on vehicles for
both cost and coverage reasons, the operating frequency must be as low as possible, to
maximize range. With many government frequency bands presently on the auction block,
and increasing demand for all frequencies low enough to support mobile operations, the
1350-1400 (now 1390) MHz region of the L-band has been identified as the lowest
frequency range in which a common datalink has a high likelihood of receiving an
allocation for long-range airborne operation. This frequency range is allocated specifically
to data transmission systems used in military system test and evaluation. Since it is near
the band in which many range telemetry systems operate, and since a common datalink
would be capable of limited telemetry data transmission, consideration is being given to
extending the transceiver tuning range into the telemetry band (1435 to 1525 MHz) to
provide maximum flexibility in accommodating the system within the crowded spectrum.
This broader tuning range also enables more effective frequency diversity operation, where
required. In some operations where maximum range over the ocean is required,
consideration is being given to the development of a separate transceiver RF module that
would operate at VHF, in the 141 MHz band where some naval data links now operate.
Use of such units would be highly restricted, limited to ocean areas far from any ground-
based systems operating in that frequency range. Additional VHF and UHF bands will also
probably be necessary to meet the constraints of Army ground training applications. A
study is about to be initiated at the Joint Spectrum Center to determine feasible frequency
allocations.
Since exercise geometry must not be constrained by the performance of the data link, and
it must be possible for vehicles to operate in close proximity to one another, time-division
appears to be an appropriate form of multiple-access for a common datalink. (Multinet
operation could be considered a form of hybrid FDMA/TDMA.) Consideration is also
being given to FDMA, CDMA, and other hybrid schemes. Frequency-division currently
does not look promising, since ground stations would need to copy transmissions from
all users, and this would necessitate large numbers of parallel receivers. Additionally,
code-division does not look promising due to the difficulty in allocation and the
requirement for extensive power-control. Power-control techniques necessitate full-duplex
operation and only work with single ground sites. Adjusting vehicle power to maintain
received signal level to a narrow range of values at multiple ground sites is obviously not
possible.
Since many test and training scenarios involve continually-changing demands on data link
system resources throughout the operation, a common datalink should employ a highly-
automated control system, capable of automatic reallocation of system resources such as
timeslots, operating frequencies, and relay configurations at all times, with minimal
complication. Although the exception rather than the rule, the system must accommodate
unanticipated demands for net entry and exit, sometimes from vehicles not included in the
original plan.
Since TCPs will accommodate multiple RF Heads and Interface Modules, a large variety
of transceiver configurations can be assembled, to accommodate a variety of applications.
System software will similarly be comprised of individual modules, use of which will
depend on hardware configuration. For example, a special software module that
implements diversity processing will be used when two of the same type of RF head are
employed to achieve high reliability with antenna diversity. Dual-band operation, or
simultaneous operation on multiple nets would similarly require additional software.
Since the system presently conceived will be heavily software-based, many of its operating
parameters would be changeable by software-driven command, and it would be possible to
optimize individual transceivers to particular applications remotely through use of special
system control messages that cause them to reconfigure appropriately. For example,
network control and management procedures in installations where drone control is the
principal application might be tailored to produce minimum latency. This would lead to a
system resource allocation process to which mission and scenario information is input,
from which in turn relay functions and timeslots are assigned to minimize delay, rather
than to maximize capacity. Having determined the appropriate system configuration and
timeslot assignments for a particular mission, and presented them to the system control
operator for approval, the control and management portion of the system then
automatically downloads specific assignments to individual transceivers. During the course
of the mission, as new demands arise, the control and management function reconfigures
the system to adapt to them, by sending additional control messages to individual
transceivers.
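The software-driven reconfiguration described above can be sketched very simply: a control message carries only the fields to change in an individual transceiver's configuration. All names and fields below are illustrative assumptions, not taken from the program definition.

```python
# Minimal sketch of remote transceiver reconfiguration via control
# messages. Field names (freq_mhz, timeslots, relay_enabled) are
# hypothetical placeholders, not actual system parameters.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TransceiverConfig:
    freq_mhz: float = 1390.0        # operating frequency (illustrative)
    timeslots: tuple = ()           # assigned TDMA timeslots
    relay_enabled: bool = False     # whether this unit relays for others

def apply_control_message(cfg, updates):
    """Return a new configuration with only the commanded fields changed."""
    return replace(cfg, **updates)

cfg = TransceiverConfig()
cfg = apply_control_message(cfg, {"timeslots": (4, 36), "relay_enabled": True})
print(cfg.timeslots, cfg.relay_enabled)  # (4, 36) True
```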
EXAMPLE APPLICATIONS
The advantages of a defined standard datalink capability are best illustrated by considering
two examples of how it would operate in different applications. These two examples are
presented below, each requiring a different set of system performance features. The first
example is reporting of vehicle status and TSPI from aircraft during a T&E operation. A
flight qualified transceiver with relatively high reporting rates and large coverage area is
required. The second example is of a large scale training exercise involving a large number
of users each with relatively lower throughput and latency needs. These examples of
diverse applications, while they are quite different in some parameters, such as the
numbers of players and the data rates that must be provided for each player, are
surprisingly comparable in many others, such as total capacity. Those applications with
many players generally don't impose high update rate requirements, while those with high
reporting rate needs generally don't involve many players.
In a Test and Evaluation exercise involving aerial vehicle status and TSPI, information
such as fuel and ammunition quantities, orientation, and various discrete events generally
originates from the vehicle instrumentation and is often available on internal data busses.
TSPI is generally derived aboard the vehicle, usually from a GPS receiver. Aerial
applications are often characterized by the need for higher-power transmitters, occasional
relay capability, generally low data latency, and link reliability of 95% or greater. These
applications generally allow long-range line of sight propagation and can operate at L-
Band or higher frequency. In general, only a small number of participants (1 to 10) must be
supported at any one time but require high message update rates, one per second or
greater.
Coverage requirements identified by the ranges varied significantly, since land and sea test
sites of many types were included. Mission coverage areas identified ranged from as small
as a 14x16 nmi rectangle to as large as a circle with a 350 nmi radius. In the vertical
dimension, ranges required coverage from sea level to altitudes as high as 100,000 ft. It is
recognized that line-of-sight propagation limitations restrict achievable single-hop
distances, and relaying is necessary to achieve distances beyond about 150 nmi.
Estimated datalink capacity requirements are derived from scenario details (numbers and
types of participants, message lengths and rates, anticipated need for relays) furnished by
the ranges. When TSPI message lengths were not stated by the ranges, a 480-bit length
(typical of many similar applications) was assumed. Related status messages were
included in the estimates when the range indicated intent to collect status data.
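A capacity estimate of the kind described above can be sketched as simple arithmetic. Only the 480-bit TSPI message length comes from the text; the participant count, report rate, and relay margin below are illustrative assumptions.

```python
# Rough datalink capacity estimate from scenario details. The 480-bit
# TSPI length is from the text; other numbers are illustrative.

def required_capacity_bps(participants, tspi_bits=480, reports_per_sec=1.0,
                          status_bits=0, relay_overhead=0.2):
    """Aggregate over-the-air bit rate, inflated by a relay margin."""
    per_player = (tspi_bits + status_bits) * reports_per_sec
    return participants * per_player * (1 + relay_overhead)

# Example: 10 aerial players reporting 480-bit TSPI once per second,
# with a 20% allowance for relayed traffic.
print(required_capacity_bps(10))  # 5760.0 bits/s
```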
In a Large-Scale Training Exercise, maximizing total system capacity to accommodate the
maximum number of players, at maximum reporting rates, is generally the top priority. The
FIRST defined standard capability would allow the mission planner complete freedom in
assigning resources to players, to accomplish this objective:
• Message structures will allow each player to convey all necessary status information in
a single message. Messages each contain about 700 bits (not including addressing and
control), that can be structured in any way desired.
• Different reporting procedures can be used when required. For example, some timeslots
could be reserved for closed-loop reporting, where proper receipt of the information
must be confirmed. When such information is being sent to individual vehicles, one at a
time at low average rate, two timeslots per second might be assigned to this function,
one in which the control site sends a receipt-requested message to a different player
each second, the other of which contains the player's acknowledgment of receipt.
Failure to receipt prompts retransmission.
• Some timeslots could also be reserved for relaying. When a player's transmissions are
not received by the ground system, another player could intercept them and relay them
to the ground system on the reserved timeslots. Meanwhile, the ground system would
continue to monitor the out-of-contact player's assigned timeslots, and when messages
are once again received directly, it could cancel the relay function.
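The closed-loop reporting procedure above amounts to a round-robin poll-and-acknowledge loop: the control site addresses one player per cycle and re-queues any player whose acknowledgment is missing. The sketch below is a hedged illustration; player names and the acknowledgment callback are invented.

```python
# Sketch of closed-loop (receipt-requested) reporting: poll each player
# once, and queue unacknowledged players for retransmission on a later
# cycle. This models the two reserved timeslots per second described
# above at the level of bookkeeping only.

from collections import deque

def poll_cycle(players, ack_received):
    """Round-robin poll; unacknowledged players are re-queued for retry."""
    queue = deque(players)
    log = []
    for _ in range(len(players)):
        player = queue.popleft()
        log.append(("poll", player))
        if ack_received(player):
            log.append(("ack", player))
        else:
            log.append(("retry-queued", player))
            queue.append(player)   # retransmit on a later cycle
    return log, list(queue)

log, retries = poll_cycle(["P1", "P2", "P3"], lambda p: p != "P2")
print(retries)  # ['P2']
```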
SUMMARY
The history of efforts to develop tri-service general solutions to the diverse problems of the
individual services reveals many "do-everything" systems that do not do anything well,
usually at excessively high cost. We intend to avoid those pitfalls in the definition of a
standard datalink capability by taking full advantage of the flexibility and adaptability that
result from modern software-intensive communications processing. Just as the personal
computer has become ubiquitous because it can readily and inexpensively be adapted to a
wide range of applications through modular addition of hardware and software, likewise, a
standard datalink capability can become the desired solution to most range data
communications problems in the twenty-first century.
Doppler Video Signal Conditioning,
Theory of Operation
Tony Cirineo
Weapons Instrumentation Division, 543E
NAWC-WD, Pt. Mugu
Abstract
This paper describes some of the signal conditioning and processing circuits that were
developed to reconstruct the doppler video signal from a radar receiver under test. The
reconstructed doppler video signal is then digitized and put into a telemetry frame for
transmission to a ground receiving station.
Keywords
Introduction
The system described here consists of signal conditioning and processing circuits that
reconstruct the doppler video signal from a radar receiver under test. The reconstructed
doppler video signal is then digitized and put into a telemetry frame for transmission by a
telemetry transmitter to a ground receiving station.
Figure 1 shows a block diagram for the telemetry system. Since the radar receiver under
test has no buffered test points suitable for monitoring, buffering amplifiers were inserted
into the radar receiver or near the points to be monitored. The doppler processing circuits
take the Voltage Controlled Oscillator (VCO), 2nd Intermediate Frequency (IF) and 2nd
Local Oscillator (LO) signals from the radar receiver and reconstruct the doppler signal
along with a marker which is 20 kHz above the receiver tracking frequency. The outputs of
the doppler processing circuits are the reconstructed video with marker and a buffered tape
drive signal. The Analog to Digital Converter (ADC) and frame controller circuits digitize
the reconstructed doppler signal and put the digital data into a telemetry frame. The output
of the frame controller is a Non-Return to Zero Level (NRZ-L) Pulse Code Modulation
(PCM) signal. The PCM signal then goes to an encryption device, an isolation buffer, a
pre-mod filter and finally the transmitter. There is also a 75 ohm buffered output for
driving a wideband magnetic tape recorder for recording the analog reconstructed video
signal. The front receiver amplitude detector output is a signal from the receiver under test
which is buffered and made available to another section of the telemetry system.
Figure 1: Telemetry system block diagram (LO input; doppler video and tape recorder outputs).
Figure 2 shows a block diagram for the doppler processing circuits. The VCO, 2nd IF and
2nd LO signals are mixed together as shown to yield the reconstructed video and marker
signals. The product of the 2nd IF and the LO yields a signal at the sum and difference
frequencies. It is the difference frequency that composes the reconstructed doppler signal.
A band pass filter (BPF) of 3 kHz to 250 kHz limits the doppler spectrum to this range.
The low pass filter (LPF) removes the unwanted mixer products before the AGC.
The 20 kHz marker is generated from the difference of the VCO and the marker oscillator.
The low pass filter removes the unwanted mixer components. The marker signal is then
summed with the reconstructed doppler signal.
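The mixing arithmetic above can be checked numerically: the product of two sinusoids contains components at the sum and difference frequencies, and it is the difference term that carries the reconstructed doppler. The IF and LO frequencies below are illustrative, not the actual receiver values.

```python
# Numerical check that mixing (multiplying) two tones yields sum and
# difference frequency components, each at half the product amplitude.

import cmath, math

def tone_power(samples, f, fs):
    """Magnitude of the single-frequency DFT correlation at frequency f."""
    n = len(samples)
    return abs(sum(s * cmath.exp(-2j * math.pi * f * k / fs)
                   for k, s in enumerate(samples))) / n

fs = 5_000_000                       # 5 MHz system clock as sample rate
f_if, f_lo = 500_000, 450_000        # hypothetical 2nd IF and LO tones
t = [k / fs for k in range(5000)]
mixed = [math.sin(2*math.pi*f_if*ti) * math.sin(2*math.pi*f_lo*ti) for ti in t]

print(round(tone_power(mixed, f_if - f_lo, fs), 3))  # difference term: 0.25
print(round(tone_power(mixed, f_if + f_lo, fs), 3))  # sum term: 0.25
```

A band pass filter such as the 3 kHz to 250 kHz BPF described above would then keep only the difference component.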
As shown in figure 3, the reconstructed video and marker are digitized by an eight bit flash
analog to digital converter. The sample rate is 625 ksps, based on the system clock of 5
MHz. With the 250 kHz upper band edge of the filter shown in figure 2, the sample rate is
2.5 times the highest frequency. This allows sufficient oversampling to accurately
reproduce the doppler spectrum in the range covered by the band pass filter. The Altera
EPM5032 is programmed with the frame controller firmware to allow the data to be
placed in the frame along with the synchronization words. The 74HC166 is a parallel to
serial shift register; there was not enough room in the Altera EPM5032 for this function, so
it is placed externally. The output of the 74HC166 is the PCM data stream.
Figure 4 shows the format of the telemetry frame. There are 1024 eight bit words in the
frame. The first two words contain the frame synchronization pattern, which is 1110 1011
1001 0000. Data is sent most significant bit first.
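A minimal sketch of this frame format follows. The 1024-word length, the sync pattern 1110 1011 1001 0000 (0xEB 0x90), and MSB-first ordering come from the text; the fill data is arbitrary.

```python
# Build one telemetry frame (1024 eight-bit words, two-word sync) and
# serialize it MSB first, as the frame controller and 74HC166 would.

SYNC = (0xEB, 0x90)          # 1110 1011 1001 0000
FRAME_WORDS = 1024

def build_frame(samples):
    """Pack 8-bit samples into one frame behind the sync words."""
    payload = list(samples)[:FRAME_WORDS - len(SYNC)]
    payload += [0] * (FRAME_WORDS - len(SYNC) - len(payload))  # zero fill
    return list(SYNC) + payload

def serialize_msb_first(frame):
    """Flatten the frame to the NRZ-L bit sequence, MSB of each word first."""
    return [(word >> bit) & 1 for word in frame for bit in range(7, -1, -1)]

bits = serialize_msb_first(build_frame([0x12, 0x34]))
print(len(bits))   # 8192 bits per frame
print(bits[:16])   # sync bits: 1110 1011 1001 0000
```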
Test Results
The overall system was tested to measure the sensitivity, dynamic range and
intermodulation performance. The test setup of the prototype circuits is shown in figure 5.
A number of prototype circuit boards, power supplies and RF signal generators are visible
in the photo. Not shown are the Marconi spectrum analyzer on which the doppler spectra
were observed and the network analyzer on which the individual circuits were
characterized.
The test results are summarized in figure 6. The measured MDS is about -120 dBm. The
largest input signal the system can handle without being driven into compression is about -
40 dBm. The dynamic range of the system is about 60 dB. The AGC limits the
fundamental components of the reconstructed doppler to a maximum level of 0 dBm. As
the AGC attenuates the signal the noise floor is suppressed. The third order
intermodulation produced are graphed as a straight line with a slope of three. In the
presence of a large signal the smallest signal observable is a -103 dBm input signal. Other
mixer spurious products appear above the noise floor at an input level of about -60 dBm.
The SFDR is shown in figure 6 as the difference between the fundamental components and
the noise floor or the third order intermodulation products or other spurious products.
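Because the third-order products rise with a slope of three, an input-referred intercept point and a spurious-free dynamic range can be estimated from a single measured IM3 point. The sketch below uses standard two-tone relations with illustrative numbers, not values read from figure 6.

```python
# Third-order intercept and SFDR estimate from one measured point.
# im3_rel_dbc is how far the IM3 product sits below the fundamental at
# input power p_in; IM3 closes on the fundamental 2 dB per 1 dB of drive.

def input_intercept_dbm(p_in, im3_rel_dbc):
    """IIP3 from one two-tone measurement point."""
    return p_in + im3_rel_dbc / 2.0

def sfdr_db(iip3_dbm, noise_floor_dbm):
    """Input-referred spurious-free dynamic range for slope-3 spurs."""
    return (2.0 / 3.0) * (iip3_dbm - noise_floor_dbm)

iip3 = input_intercept_dbm(p_in=-40.0, im3_rel_dbc=60.0)   # -10 dBm
print(round(sfdr_db(iip3, noise_floor_dbm=-120.0), 1))     # 73.3 dB
```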
Figure 5: Doppler video prototype circuits and test setup.
Figure 6: Output power versus input power (dBm), showing the fundamental components (slope 1), the third order IM products (slope 3), and the noise floor.
Figures 7 through 12 show the reconstructed doppler signal monitored at the video buffer.
Figure 7 is a plot from a spectrum analyzer with an input signal of -110 dBm. This input
level roughly corresponds to the radar receiver MDS+10 dB. As seen in figure 7, the target
doppler signal is at 50 kHz with the marker signal 20 kHz above at 70 kHz. The amplitude
of the marker is always constant and large. This way it is always the most visible signal in
the doppler spectrum. The target signal can always be located relative to the marker even
when the doppler spectrum is filled with clutter, ECM or other extraneous target signals.
[Spectrum analyzer display: Mkr1 -42.37 dBm at 49.6 kHz; ref 100.0 kHz, 20.0 kHz/div, res bw 1 kHz, vid bw 700 Hz]
Figure 7: Analog output 50 kHz target at receiver MDS+10 dB input amplitude, with 20
kHz marker signal at 70 kHz
Figure 8 shows a target signal at an input level of -80 dBm, approximately MDS+40 dB.
At this level the AGC amplifier is just about to start reducing its gain.
[Spectrum analyzer display: Mkr1 -12.95 dBm at 49.6 kHz; ref 100.0 kHz, 20.0 kHz/div, res bw 1 kHz, vid bw 700 Hz]
Figure 8: Analog output 50 kHz target at receiver MDS+40 dB input amplitude, with 20
kHz marker signal at 70 kHz
Figure 9 shows a target signal at an input level of -60 dBm, approximately MDS+60 dB.
At this level the overall system is just starting into compression. The AGC amplifier is at
minimum gain. Spurious signals are just visible above the noise floor.
[Spectrum analyzer display: Mkr1 -0.90 dBm at 49.6 kHz; ref 100.0 kHz, 20.0 kHz/div, res bw 1 kHz, vid bw 700 Hz]
Figure 9: Analog output 50 kHz target at receiver MDS+60 dB input amplitude, with 20
kHz marker signal at 70 kHz
Figure 10 shows a target signal at an input level of -40 dBm, approximately MDS+80 dB.
At this level the overall system is well into compression. Spurious signals are clearly
visible in the doppler spectrum.
[Spectrum analyzer display: Mkr1 -0.37 dBm at 49.6 kHz; ref 100.0 kHz, 20.0 kHz/div, res bw 1 kHz, vid bw 700 Hz]
Figure 10: Analog output 50 kHz target at receiver MDS+80 dB input amplitude, with 20
kHz marker signal at 70 kHz
Figure 11 shows a target signal at an input level of -35 dBm, approximately MDS+85 dB.
At this level the overall system is clearly being overdriven. Large spurious signals are
clearly visible in the doppler spectrum.
[Spectrum analyzer display: Mkr1 -0.52 dBm at 49.6 kHz; ref 100.0 kHz, 20.0 kHz/div, res bw 1 kHz, vid bw 700 Hz]
Figure 11: Analog output 50 kHz target at receiver MDS+85 dB input amplitude, with 20
kHz marker signal at 70 kHz
Figure 12 shows a target signal at an input level of -20 dBm, approximately MDS+100 dB.
At this level the overall system is being overdriven very hard. Very large spurious signals
are visible in the doppler spectrum.
[Spectrum analyzer display: Mkr1 -0.40 dBm at 49.6 kHz; ref 100.0 kHz, 20.0 kHz/div, res bw 1 kHz, vid bw 700 Hz]
Figure 12: Analog output 50 kHz target at receiver MDS+100 dB input amplitude, with 20
kHz marker signal at 70 kHz
Figures 13 through 16 show the reconstructed doppler spectrum after the signal has been
digitized and reconverted back to an analog signal for display on a spectrum analyzer. The
decoder box used to decode the PCM signal consists of a digital to analog converter and
some digital logic circuits. The frame synchronization words are also treated as data,
which results in a doppler spectrum that is much noisier than it should be. The decoder box
was not designed as a piece of test equipment; as such, its analog output does not have
much filtering to remove the artifacts of the conversion process. The spectrum analyzer
plots are shown only to illustrate that the ADC and the frame controller circuits are
working. This verifies that data is being sampled, the frame synchronization words are
appearing in the frame, and the PCM data is formatted correctly. The telemetry receiving
site would be expected to have a PCM doppler decoder that could properly decode and
process the data to minimize the frame synchronization and other artifacts, or, optimally,
process the data in the digital domain and display the doppler spectrum directly by Fast
Fourier Transform, without using a digital to analog converter.
Figure 13 shows a target signal at an input level of -100 dBm, approximately MDS+20 dB.
Notice the increased noise floor.
[Spectrum analyzer display: Mkr1 -42.82 dBm at 49.6 kHz; ref 100.0 kHz, 20.0 kHz/div, res bw 1 kHz, vid bw 700 Hz]
Figure 13: PCM output 50 kHz target at receiver MDS+20 dB input amplitude, with 20
kHz marker signal at 70 kHz
Figure 14 shows a target signal at an input level of -90 dBm, approximately MDS+30 dB.
[Spectrum analyzer display: Mkr1 -35.20 dBm at 49.6 kHz; ref 100.0 kHz, 20.0 kHz/div, res bw 1 kHz, vid bw 700 Hz]
Figure 14: PCM output 50 kHz target at receiver MDS+30 dB input amplitude, with 20
kHz marker signal at 70 kHz
Figure 15 shows a target signal at an input level of -40 dBm, approximately MDS+80 dB.
At this level the overall system is well into compression. Spurious signals are not visible
above the noise floor.
[Spectrum analyzer display: Mkr1 -11.70 dBm at 49.6 kHz; ref 100.0 kHz, 20.0 kHz/div, res bw 1 kHz, vid bw 700 Hz]
Figure 15: PCM output 50 kHz target at receiver MDS+80 dB input amplitude, with 20
kHz marker signal at 70 kHz
Figure 16 shows a target signal at an input level of -30 dBm, approximately MDS+90 dB.
At this level the overall system is well into compression. Spurious signals are clearly
visible in the doppler spectrum.
[Spectrum analyzer display: Mkr1 -12.62 dBm at 49.6 kHz; ref 100.0 kHz, 20.0 kHz/div, res bw 1 kHz, vid bw 700 Hz]
Figure 16: PCM output 50 kHz target at receiver MDS+90 dB input amplitude, with 20
kHz marker signal at 70 kHz
Summary
A general description of the doppler signal processing circuits was given. The description
proceeded from a block diagram level to some test results. The system has a dynamic
range of greater than 60 dB and a sensitivity of about -120 dBm.
RANGE ERROR IN TRANSMISSION
CHANNEL OF TT&C
Liu Jiaxing
The Southwest Institute of Electronics Technology
48 East street, Chadianzi, Chengdu 610036, P. R. China
FAX (028)7768378
ABSTRACT
This paper summarizes the range error caused by instability of the transmission
characteristics and by changes of signal frequency and amplitude. On the basis of the
transmission system's combined modulation-demodulation characteristic, the even
symmetry of the amplitude-frequency characteristic, the odd symmetry of the
phase-frequency characteristic, and the phase orthogonality of the demodulator, the author
analyses the influence of these factors on range accuracy, and formulas for the phase of
the ranging tone are derived. Using these formulas, the many factors having influence on
drift range error may be calculated, and range accuracy can be improved. The above
conclusions have been verified and applied in TT&C systems for years.
KEY WORDS
INTRODUCTION
First of all, we analyse the influence of the phase modulator and the product demodulator;
the model is shown in figure 2.
From the system designer's point of view, it is the end output signal Bo(t) that the system
requires; the phase modulator output s(t) is only a midway signal between Bi(t) and Bo(t).
This paper therefore focuses on the relation between Bo(t) and Bi(t), i.e., the combined
modulation-demodulation characteristic.
For figure 2,
Bi(t) = Am sin ωmt
s(t) = cos[ωct + m sin ωmt]   (3)
where, ωc - carrier frequency
ωm - ranging tone frequency
m = K·Am - depth of phase modulation
K - phase constant
Am - amplitude of ranging tone
Using the Bessel function of the first kind, equ.3 becomes
s(t) = Σ(n = -∞ to +∞) Jn(m) cos[(ωc + nωm)t]   (4)
where Jn(m) is a Bessel function of the first kind of order n. The product demodulation is
(5)
(6)
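The Bessel expansion of equ.4 can be verified numerically. The sketch below computes Jn(m) from its integral representation rather than a library call; the test values of ωc, ωm, m and t are arbitrary.

```python
# Numerical check that cos(wc*t + m*sin(wm*t)) equals the Bessel series
# sum over n of Jn(m)*cos((wc + n*wm)*t), truncated to |n| <= 12.

import math

def bessel_j(n, m, steps=2000):
    """Jn(m) = (1/pi) * integral_0^pi cos(n*theta - m*sin(theta)) dtheta."""
    h = math.pi / steps
    total = sum(math.cos(n * (k + 0.5) * h - m * math.sin((k + 0.5) * h))
                for k in range(steps))
    return total * h / math.pi

wc, wm, m, t = 100.0, 7.0, 1.2, 0.3          # arbitrary test values
exact = math.cos(wc * t + m * math.sin(wm * t))
series = sum(bessel_j(n, m) * math.cos((wc + n * wm) * t)
             for n in range(-12, 13))
print(abs(exact - series) < 1e-8)            # True
```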
In figure 3,
H(ω) - amplitude-frequency characteristic of the RF/IF transmission network.
φ(ω) - phase-frequency characteristic of the RF/IF transmission network.
(7)
For the product demodulator, when a non-orthogonal phase difference exists,
c(t) = cos[ωct + φo + π/2 + Δφ]
where, φo = φ(ωc) - carrier phase shift
Δφ - non-orthogonal phase difference of the product demodulator.
Aop(t)×c(t) is realized in the product demodulator. Filtering off the 2ωm component from it,
Bo(t) is obtained as follows:
Bo(t) = (1/2)J1(m)H+1 sin[ωmt + (φ+1 - φo) - Δφ] +
(1/2)J1(m)H-1 sin[ωmt + (φo - φ-1) + Δφ] + …   (8)
where, H+1 = H(ωc+ωm), H-1 = H(ωc-ωm), Ho = H(ωc)
φ+1 = φ(ωc+ωm), φ-1 = φ(ωc-ωm), φo = φ(ωc)
The vector diagram of equ.8 is shown in figure 4.
Figure 4:
Vector diagram of product demodulation
(9)
(10)
Equ.9 and equ.10 show that changes of φ1 and B1 are caused by instability of H+1, H-1,
(φ+1 - φo), (φo - φ-1) and Δφ; range error is caused by the change of φ1, and at the same
time by the amplitude-phase conversion which is caused by the change of B1.
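The vector addition behind equ.8 through equ.10 can be sketched as a complex phasor sum: the two sideband terms are added as phasors to give the ranging tone amplitude B1 and phase φ1. The characteristic values below are illustrative.

```python
# Phasor sum of the two sideband terms of equ.8, giving the demodulated
# ranging tone amplitude B1 and phase phi1.

import cmath

def ranging_tone(j1m, h_p1, h_m1, phi_p1, phi_m1, phi_o, d_phi):
    """Return (B1, phi1) from the two sideband phasors of equ.8."""
    a = 0.5 * j1m * h_p1 * cmath.exp(1j * ((phi_p1 - phi_o) - d_phi))
    b = 0.5 * j1m * h_m1 * cmath.exp(1j * ((phi_o - phi_m1) + d_phi))
    s = a + b
    return abs(s), cmath.phase(s)

# Symmetric case: H+1 = H-1, odd-symmetric phase, orthogonal demodulator.
b1, phi1 = ranging_tone(0.44, 1.0, 1.0, 0.30, 0.10, 0.20, 0.0)
print(round(phi1, 3))  # 0.1, i.e. (phi+1 - phi-1)/2, matching condition (2)
```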
(1) If the amplitude-frequency characteristic is even-symmetric, i.e. H+1 = H-1, then
φ1 = (φ+1 - φ-1)/2
(2) If the phase-frequency characteristic is odd-symmetric, and the demodulator phase is
orthogonal, i.e. (φ+1 - φo) = (φo - φ-1), Δφ = 0, then
φ1 = (φ+1 - φ-1)/2, τo = (φ+1 - φ-1)/(2ωm), but τ ≠ τo
where τ is the envelope delay and τo is the delay time of the ranging tone.
(3) If the phase-frequency characteristic is linear, then
τo = (φ+1 - φ-1)/(2ωm) = dφ/dω, and τ = τo.
Only in this special condition, φ1 = ωm·τ, τ = τo.
As seen from the above, when amplitude-frequency characteristic non-even symmetry
exists, drift range error is caused by the change of the non-orthogonal phase difference of
the demodulator. When phase-frequency characteristic non-odd symmetry exists, drift
range error is caused by the change of the amplitude-frequency characteristic. The causes
which make changes of H+1, H-1, (φ+1 - φo), (φo - φ-1) and Δφ are given below:
(1) H(ω)e^jφ(ω) change: such as instability of the circuit, AGC, standing wave, etc.
(2) s(t) spectrum change: such as nonlinearity, spurious amplitude modulation, etc.
(3) (ωc+ωm), ωc and (ωc-ωm) change caused by signal Doppler frequency shift.
NON-LINEARITY OF PHASE MODULATING CHARACTERISTICS
(11)
Where,
Equ.12 shows that, because of the influence of the non-linearity, the phase modulation
converts from single-frequency PM to multi-frequency PM. On the basis of equ.12, we
obtain the equivalent model of a phase modulator with non-linear characteristics:
In figure 5, the non-linear network produces ωm1 and its harmonics at the output, whereby
multi-frequency PM is done in the ideal modulator. When the baseband amplifier has
non-linearity, the same model as shown in figure 5 is obtained, and therefore the analysis
method is the same.
(2) When a 3ωm distortion component exists, the (ωc + ωm1) and (ωc - ωm1) spectrum
components may also be obtained; they are:
A[J1(m1)+J2(m1)J1(ma) + J4(m1)J1(ma)]cos(ωc + ωm1)t and
-A[J1(m1)+J2(m1)J1(ma) + J4(m1)J1(ma)]cos(ωc - ωm1)t
The spectrum is symmetric, and
H+1 = A[J1(m1)+J2(m1)J1(ma) + J4(m1)J1(ma)]   (17)
H-1 = A[J1(m1)+J2(m1)J1(ma) + J4(m1)J1(ma)]   (18)
Therefore, φ1 and B1 can also be calculated by equ.9.
(1) When spurious AM is caused by the fundamental harmonic ωm1, the PM wave with
spurious AM is written as follows:
s(t) = A[1 + M sin ωmt]cos[ωct + m sin ωmt]
= A cos[ωct + m sin ωmt] + A·M sin ωmt cos[ωct + m sin ωmt] = so(t) + sam(t)
where, M - spurious amplitude modulation coefficient
so(t) - ideal PM wave
sam(t) - spurious component caused by the spurious amplitude modulation
Using the Bessel function of the first kind, we have
s(t) = A cos[ωct + m sin ωmt] + (AM/2)[sin((ωc + ωm)t + m sin ωmt)
- sin((ωc - ωm)t + m sin ωmt)]
(20)
(27)
(31)(32)
If r « 1, then
H(ω) ≈ 1 + r cos ωτo,  φ(ω) ≈ -r sin ωτo   (33)(34)
Equ.31 and equ.32 show the action of the reflection as the incident signal passes through a
transmission system with amplitude-frequency characteristic H(ω) and phase-frequency
characteristic φ(ω); therefore, φ1 and B1 may be calculated using equ.9, where
(35)(36)
COMBINATION INTERFERENCE
(2) The demodulation output is caused by the sideband spectrum of opposite original phase.
(38)
(39)
(40)
(41)
Equ.39 shows that when the phase detector characteristic deviates from the ideal sinusoid,
the output signal of the phase detector still contains only odd harmonics, but their
amplitudes are changed, and range error may result from the amplitude-phase conversion
which is caused by the change of Bo1(t).
CONCLUSION
As stated above, the drift range error is caused by changes of H±n(A±n), φ±n and Δφ at the
product demodulator input, and it may be calculated by equ.9. The causes which make
changes of H±n(A±n), φ±n and Δφ are given below:
(1) amplitude/phase-frequency characteristic change of the linear transmission network.
(2) non-orthogonal phase difference of the product demodulator.
(3) non-linearity of the phase modulator and baseband amplifier.
(4) spurious amplitude modulation.
(5) reflection of the PM wave.
(6) combination interference of multi-subcarrier PM.
(7) non-ideal phase detector.
In addition, the following factors affect the drift range error:
(1) phase-frequency characteristic and delay of the baseband filter and amplifier.
(2) PLL phase error of the ranging tone.
(3) amplitude-phase conversion of the baseband amplifier.
These factors are not discussed in this paper, because the channel rather than the terminal
is presented here.
The analysis methods and formulas of this paper also apply to the other subcarrier signals
in a TT&C system with multi-subcarrier PM, so they have some universal significance for
the transmission characteristics analysis of TT&C.
REFERENCES
ABSTRACT
It is often desirable to transmit several data signals simultaneously. This paper discusses
how one transmitter can transmit several data signals to several receivers at the same time
in a point-to-multipoint communication system. Two novel schemes are proposed: one is
communication with Multiple Phase Shift Keying (MPSK, e.g. 8PSK); the other is
communication with Direct-Sequence Spread-Spectrum Multiple-Access (DS/SSMA).
Their models are presented and their operations are illustrated. It is proved theoretically
that the communication properties of DS/SSMA are better than those of MPSK.
KEY WORDS
INTRODUCTION
The demand for communication in the modern information society is getting higher and
higher. According to practical needs, various communication systems have been produced,
but how to improve the effectiveness and reliability of communication is always a basic
problem of communication.
A block diagram of the proposed point-to-multipoint system model with MPSK is shown
in Figure 1.
In this system, the base station transmits data to n subscribers at the same time. The
symbols s1, s2, ..., sn respectively represent the data sequences transmitted to the n
subscribers, which are information bit strings represented by logical levels of 1 and 0.
With n input bits, there are 2^n = M combinations and M symbols will be produced. The
multiple-PSK modulator maps the M symbols into the M phase states of the carrier. The
carrier signal with some phase state is transmitted to the multiple subscribers through
some RF frequency (RF channel).
But some communication performance aspects of this scheme are not ideal in certain situations.
B. Performance analysis
At the receiver end, when the phases detected from the received waveform differ from the
pre-determined phase bound, errors will be produced; the bound is ±π/M, as shown in
Figure 2. It can be shown that the error probability of bit groups (named the symbol error
probability Ps) is [1]
Ps = (M-1)/M - (1/2)·erf[(sin π/M)·√(Es/N0)]
- (1/√π)·∫(0 to (sin π/M)·√(Es/N0)) e^(-y²)·erf(y·cot(π/M)) dy   (1)
Define the Q function as
Q(x) = (1/√(2π))·∫(x to ∞) e^(-y²/2) dy
For large Es/N0,
Ps ≈ erfc[(sin π/M)·√(Es/N0)]   (2)
and the approximate bit error probability is
Pb ≈ Ps / log2M
The output signal-to-noise ratio is
(C/N)o ≈ (log2M)·(2Eb/N0)·sin²(π/M)   (3)
Figure 3: Symbol error probability Ps (1 down to 10^-4) for M = 2, 4, 8 and 16.
From formulas (2) and (3) and Figure 3, it is shown that as the number of phases increases,
the performance decreases, so the multiple-access scheme with MPSK is suitable for
situations where the subscribers (addresses) are relatively few.
Similarly, according to the same principle, MQAM rather than MPSK can also be applied
to carry on point-to-multipoint communication.
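The trend drawn from formula (2) can be checked numerically: at a fixed Es/N0, the approximate symbol error probability grows as the number of phases M increases. The Es/N0 value below is illustrative.

```python
# Approximate MPSK symbol error probability, formula (2):
# Ps ~ erfc(sin(pi/M) * sqrt(Es/N0)).

import math

def ps_mpsk(es_n0_db, M):
    es_n0 = 10 ** (es_n0_db / 10.0)
    return math.erfc(math.sin(math.pi / M) * math.sqrt(es_n0))

for M in (2, 4, 8, 16):
    print(M, f"{ps_mpsk(14.0, M):.2e}")
# Ps increases monotonically with M at the same Es/N0.
```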
POINT TO MULTIPOINT COMMUNICATION WITH DS/SSMA
Then these wideband signals are respectively modulated by the same carrier cos(ωt + φ)
(an IF signal, such as 10.7 MHz), and the n modulated signals are added (an adder circuit
made up of an integrated operational amplifier can accomplish this) to produce one signal
S(t) = Σ(i=1 to n) di(t)·PNi(t)·cos(ωt + φ)   (4)
which is transmitted by the RF transmitter through a certain carrier frequency (channel) to
multiple subscribers.
Here d1(t) is the information data spread by PN1(t), and it can be demodulated correctly. By the same principle, each of the other subscribers' data can also be demodulated. In short, at every subscriber's receiving end an ordinary spread-spectrum receiver, without any modification, demodulates the corresponding data.
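The spread-sum-despread chain of equation (4) can be sketched at baseband (carrier omitted) as follows. The random ±1 chip sequences stand in for true PN codes, and the three-subscriber setup is a hypothetical example.

```python
import random

def pn_seq(length, seed):
    """Pseudo-random +/-1 chip sequence (stand-in for a true PN code)."""
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(length)]

def spread(bits, pn):
    """Spread each +/-1 data bit over one full PN period."""
    return [b * c for b in bits for c in pn]

def despread(signal, pn, n_bits):
    """Correlate the composite signal with one user's PN code."""
    N = len(pn)
    out = []
    for k in range(n_bits):
        corr = sum(signal[k * N + i] * pn[i] for i in range(N))
        out.append(1 if corr > 0 else -1)
    return out

# Three subscribers share the channel; the adder of equation (4) is the sum.
codes = [pn_seq(127, seed) for seed in (1, 2, 3)]
data = [[1, -1, 1], [-1, -1, 1], [1, 1, -1]]
composite = [sum(vals) for vals in
             zip(*(spread(d, c) for d, c in zip(data, codes)))]
recovered = despread(composite, codes[0], 3)   # subscriber 1's receiver
```

Despreading with a different code recovers that subscriber's data instead, which is the essence of the multiple-access property.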
B. Performance Analysis
(C/N)0 = 2Eb/N0,  Pb = Q(√(2Eb/N0))        (7)
It can be proved that the signal-to-noise ratio and bit error rate of a DS/SSMA system with n subscribers are

(C/N)0 ≈ [(n−1)/(3N) + N0/(2Eb)]^(−1),  Pb ≈ Q([(n−1)/(3N) + N0/(2Eb)]^(−1/2))        (8)

where N is the length of the PN code [2].
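A minimal sketch evaluating formula (8), using the Q function defined earlier. The parameter values (N = 127, Eb/N0 = 10 dB) match the figures but are otherwise illustrative.

```python
import math

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pb_dsssma(n, N, ebn0_db):
    """Bit error rate from formula (8) for n subscribers, PN length N."""
    ebn0 = 10 ** (ebn0_db / 10)
    snr = 1.0 / ((n - 1) / (3 * N) + 1.0 / (2 * ebn0))
    return Q(math.sqrt(snr))

single = pb_dsssma(1, 127, 10.0)   # reduces to Q(sqrt(2*Eb/N0)), cf. (7)
multi = pb_dsssma(3, 127, 10.0)    # multiple-access interference raises Pb
```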
The error rates of the different communication schemes are given in Figures 5 and 6. Figure 5 shows the symbol error rates of the schemes at different transmission powers, and the corresponding bit error rates are shown in Figure 6. The figures show that with few subscribers, i.e., a small number of multipoints, the performance of the two methods is similar, but as the number of subscribers increases the performance of the MPSK scheme drops markedly. Because it demands a higher signal-to-noise ratio, the MPSK scheme is not applicable to radio and mobile communication systems subject to strong noise.
Figure 5 and Figure 6. Symbol and bit error rates versus Eb/N0 (dB) for QPSK, 8PSK, and DS/SS with (n = 1, N = 127) and (n = 3, N = 127)
CONCLUSION
This paper proposes two special communication schemes used in point to multipoint
communication system. Both solve the problem of distributing data to multiple subscribers
with one transmitter at the same time. The two schemes are both suitable for the case of
fewer subscribers and easy to be realized. Especially, the scheme using DS/SSMA can be
used more widespreadly and is attractive, for it has characteristic of powerful anti-
interference and well secrecy.
REFERENCES
ABSTRACT
Eutectic (63% tin-37% lead) or near-eutectic (40% tin-60% lead) tin-lead solder is widely used
for creating electrical interconnections between the printed wiring board (PWB) and the
components mounted on the board surface. For components mounted directly on the PWB
mounting pads, that is, surface mounted components, the tin-lead solder also constitutes the
mechanical interconnection. Eutectic solder has a melting point of 183°C (361°F). Its homologous temperature is defined as T/Tm: the operating temperature T in kelvins divided by the melting point temperature Tm, also in kelvins. At room temperature (25°C = 298 K), eutectic solder's homologous temperature is 0.65. It is widely acknowledged that materials having a homologous temperature ≥ 0.5 are readily
subject to creep, and the solder joints of printed wiring assemblies are routinely exposed to
temperatures above room temperature. Hence, solder joints tend to be subject to both thermal
fatigue and creep. This can lead to premature failures during service conditions. The geometry,
that is, the lead configuration, of the joints can also affect failure. Various geometries are better
suited to withstand failure than others. The purpose of this paper is to explore solder joint
failures of dual in-line (DIP) integrated circuit components, leadless ceramic chip carriers
(LCCCs), and gull wing and J-lead surface mount components mounted on PWBs.
KEY WORDS
Eutectic tin-lead solder, solder composition, surface mounted component (SMC), dual in-line
(DIP) package, leadless ceramic chip carrier (LCCC), gull wing leaded quad flatpack (QFP),
J-lead leaded chip carrier, solder joint, solder joint lead compliance, printed wiring board
(PWB), FR-4 epoxy/fiberglass PWB, printed wiring assembly (PWA), solder joint failure,
solder joint reliability, thermal fatigue failure, creep failure, gull wing lead configuration, butt
mount lead configuration, thermal cycling, coefficient of thermal expansion (CTE), difference
in CTE, dwell time.
INTRODUCTION
It is well recognized that soft solders (Tm < 450°C) are subject to both thermal fatigue failure and creep failure. Thermal fatigue failure is the failure a solder joint experiences due to changes in temperature; it is driven by differences in the coefficient of thermal expansion (CTE) between the electronic component, e.g., a DIP, and the printed wiring board (PWB). Creep failure, on the other hand, refers to failure at constant load over a period of time. Thermal fatigue failure, then, refers to varying loads due to CTE mismatch, whereas creep refers to time-dependent failure because the homologous temperature is ≥ 0.5. Soft solders such as eutectic tin-lead solder are notoriously subject to both types of failure.
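The homologous-temperature arithmetic behind this argument is simple enough to verify directly; a small sketch (temperatures converted to kelvins, the eutectic melting point of 183°C assumed as the default):

```python
def homologous_temperature(t_celsius, tm_celsius=183.0):
    """Operating temperature over melting point, both in kelvins.

    The 183 C default is the eutectic Sn-Pb melting point given above.
    """
    return (t_celsius + 273.15) / (tm_celsius + 273.15)

room = homologous_temperature(25.0)   # ~0.65, above the ~0.5 creep threshold
```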
In modern electronic printed wiring assemblies (PWAs), hundreds of solder joints are routinely
created during the process of assembly. The specific purpose of solder is to form a
metallurgical interconnection, or bond, between the land pads on the PWB surface and the
components’ terminations, or leads. The process of producing a metallurgical bond between
the component and the PWB constitutes the most important interconnect method in PWA
manufacturing. It is important that these solder joints (SJs) function reliably under the expected
service conditions for an expected period of time. Failure of even one SJ can compromise a
mission, cause it to be aborted, or cause total failure, with the possible loss of life. It is
important, then, to understand the different factors in SJ reliability so that they can be
controlled.
There are a number of factors to be considered in solder joint reliability. Among these are,
rated in importance from top to bottom:
(1) CTE differences between the component and substrate (the global CTE mismatch).
(2) Component size (the larger the worse)
(3) Solder joint geometry (lead configuration), e.g., gull wing, J-lead, butt, etc.
(4) Solder joint height (h) and joint compliance
(5) Temperature excursions, both environmental and component internal power
dissipation.
(6) Thermal dwell times.
(7) Solder composition and solder microstructure.
(8) Soldering process used, including solder cool down rate.
(9) CTE differences between solder and lead material and between solder and pad material (the local CTE mismatch).
(10) Manufacturing defects, voids in solder, component misregistration, etc.
These reliability factors can be investigated in several ways:
(1) Build test PWAs with various types of components and various types of PWB
substrate material, subject them to thermal cycling, and periodically examine them for
failure. A suitable failure criterion must be used and consistently applied.
(2) Model the behavior of SJs using a suitable theoretical approach, such as Coffin-
Manson, etc.
(3) Model the behavior of SJs using computer-assisted techniques such as finite element
analysis (FEA), or finite element modeling (FEM), which is an extension of (2).
Although modeling is a useful guide to SJ failure analysis, there is yet a great deal of work to
be done in this area. Manufacturing test PWAs with suitable components and thermal cycling
these is a very viable approach.
The approach used to investigate SJ failures was to use test PWAs with several types of
component mounted on the PWB surface. Since it was not practical to vary all of the
parameters affecting SJ reliability, a number were held constant. For example, the only type of
PWB substrate material used was FR-4 epoxy/fiberglass. Only eutectic tin-lead solder was
used to create the interconnections. Several types of components, however, were employed.
Figure 1 below shows the different solder joint configurations. Component size, solder joint
geometry, solder height, and joint compliance were all factors. The types of components used
were:
(1) DIPs having leads configured in a gull wing shape, 14 leads at 0.100 inch pitch.
(2) DIPs having leads extending straight down, that is, butt leads, 14 leads at 0.100 inch pitch.
(3) LCCCs with 20, 28, and 68 terminations, all at 0.050 inch pitch.
(4) Gull wing leaded ceramic quad flatpacks (QFPs), 68 leads at 0.050 inch pitch.
(5) J-lead ceramic chip carriers, 68 leads at 0.050 inch pitch.
The other factor varied was the thermal cycles used. The DIPs were subjected to two different
thermal cycles:
(1) Cycle 1: started at 20°C with a decrease of 2°C per minute until a temperature of −25°C was reached. The dwell at −25°C lasted for 14 minutes, followed by an increase in temperature to +100°C at a rate of change of +2°C per minute. The high temperature dwell was 32 minutes, followed by a decrease in temperature to 20°C at a rate of 2°C per minute. The duration of one cycle was 171 minutes. ΔT1 = 125°C.
(2) Cycle 2: started at 37.5°C with a dwell of 36.5 minutes and then a ramp up in temperature to +100°C at a rate of change of +1°C per minute. The high temperature dwell was also 32 minutes, followed by a decrease in temperature to 37.5°C at a rate of 1.6°C per minute. The duration of one cycle was also 171 minutes. ΔT2 = 62.5°C. That is, ΔT2 = ΔT1/2. The cycle time was the same for both cycles.
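The stated 171-minute duration of Cycle 1 can be checked by summing its ramp and dwell segments; a small sketch (the segment encoding is my own, the values are from the text):

```python
def cycle_minutes(segments):
    """Total thermal-cycle duration from ramp and dwell segments.

    Each segment is ('dwell', minutes) or
    ('ramp', delta_T_degC, rate_degC_per_min).
    """
    total = 0.0
    for seg in segments:
        total += seg[1] if seg[0] == "dwell" else abs(seg[1]) / seg[2]
    return total

# Cycle 1: 20 -> -25 at 2 deg/min, 14 min dwell, -25 -> +100 at 2 deg/min,
# 32 min dwell, +100 -> 20 at 2 deg/min.
cycle1 = [("ramp", 45, 2), ("dwell", 14), ("ramp", 125, 2),
          ("dwell", 32), ("ramp", 80, 2)]
duration = cycle_minutes(cycle1)   # 171.0 minutes, matching the text
```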
All of the LCCCs, gull wing lead ceramic QFPs, and J-lead ceramic chip carriers were
subjected to the standard NASA thermal cycle. This cycle begins at 25°C with a decrease of 2°C per minute until a temperature of −55°C is reached. The dwell at −55°C lasts for 45 minutes, followed by an increase in temperature to +100°C at a rate of change of +2°C per minute. The high temperature dwell is also 45 minutes, followed by a decrease in temperature to 25°C at a rate of 2°C per minute. The duration of one cycle is 246 minutes. ΔTNASA = 155°C. The thermal cycling was initiated: for LCCCs in August 1993; for the J-lead ceramic
chip carriers, in January 1994; for the gull wing QFPs, in February 1994.
The failure criterion in all cases was electrical discontinuity using an Anatech® continuity loss
event detection device. All of the components were mounted on FR-4 epoxy/fiberglass boards
and soldered to the boards. In the case of the DIPs, there were five (5) butt-mounted DIPs and
five (5) gull wing mounted DIPs. See Figure 2 below. For all the other components, there was
one component per board. Figure 3 shows a 68 termination LCCC and its corresponding
footprint on the FR-4 epoxy/fiberglass PWB. Figure 4 shows a 68 lead ceramic QFP and its
corresponding footprint on the FR-4 epoxy/fiberglass PWB. Figure 5 shows a 68 lead J-lead
ceramic chip carrier and its corresponding footprint on the FR-4 epoxy/fiberglass PWB.
The leads of each component were daisy-chained together through a set of electrical
connections involving shorting wires between the lead shoulders and between solder land pads
on the PWBs. As a backup to the Anatech, both visual and SEM were used to survey the SJs
periodically while they were being thermal cycled.
EXPERIMENTAL RESULTS
[Table: cycles to failure under the NASA thermal cycle (ΔT = 155°C) for the 20-, 28-, and 68-termination LCCCs, the 68-termination ceramic gull wing QFPs, and the 68-termination J-lead chip carriers]
CONCLUSION
Thermal cycling is an effective way of accelerating solder joint failures. It is evident from
Tables 1 and 2 that the butt lead DIPs failed much faster than the gull wing DIPs. It appears
that the butt mounted DIPs fail chiefly by tensile failure, which is quite rapid. It is also clear
that butt mounted SMCs should not be used at all for space flight missions. In the case of the
gull wing DIPs, as they experience thermal cycling, crack initiation shows first around the heel
of the joint. As thermal cycling continues, the crack propagates along both sides of the lead
foot until the toe area is reached. Then the lead separates from the solder pad. Figures 6-7
show SEM photomicrographs of a crack in a gull wing lead. However, the more compliant the
lead, the more the stress energy is absorbed by the lead material, thus preventing overly rapid
crack growth. Also, doubling the ΔT roughly halves the number of cycles to failure.
Reviewing Table 3, the LCCCs also fail very fast. They fail chiefly through shear strain
fatigue due to the ΔT and the extreme difference in CTE between the ceramic LCCCs and the PWB material. The approximate ΔCTE = 10 × 10⁻⁶ per °C. Note that the larger the component, e.g.,
the 68 termination LCCC, the faster it fails. One can draw the conclusion that it would be
highly risky to utilize LCCCs on space flight hardware. If the CTE mismatch is minimized by
using a suitable material such as nonwoven aramid, perhaps low termination LCCCs (20 or 28
terminations) could be used for short flights (days). But even then this is risky. Both the gull
wing and the J-lead ceramic SMCs have a great deal of compliance in the leads. Hence, they
can endure an increasing number of thermal cycles prior to failure. The principal conclusion to
be drawn from this investigation is that solder joint geometry plays a very significant role on
solder joint life.
POST SCRIPT
JPL, in conjunction with other partners from academia, government, and industry, is currently
engaged in investigating the reliability of ball grid array (BGA) packages. Again, the approach
was to manufacture BGA PWAs and subject them to rigorous thermal cycles in order to
induce electrical opens, which continue to constitute the failure criterion. JPL is now using the
National Instruments LabVIEWTM software, a graphics based operating system, and SCXI
hardware to monitor and control the thermal chambers, thermal chamber temperatures, and
automatic detection of electrical continuity. The data acquisition program, DAQ.VI, was
written using LabVIEW. The data are automatically gathered from interface cards and logged;
a suitable operator interface at a personal computer CRT is also provided. This new
software/hardware system can monitor over 1500 channels for electrical continuity and 32
channels for thermal chamber temperature. The system greatly simplifies the task of monitoring
and tracking failures and the conditions when failures occur of a large number of solder joint
channels through the automatic gathering and recording of the test results onto a personal
computer database.
ACKNOWLEDGEMENTS
Much of the initial thermal cycle work was performed by the late Dr. John W. Winslow. This
work is now being conducted by Ms. Sharon Walton. The authors wish to extend their
sincerest thanks to these two individuals for their excellent work in gathering the data
presented in this paper. The research described in this paper was conducted by the Jet
Propulsion Laboratory, California Institute of Technology, under a contract with the National
Aeronautics and Space Administration.
Figure 4 Ceramic quad flatpack with 68 leads + its footprint on the FR-4 PWB
Figure 5 Ceramic leaded chip carrier with 68 leads + its footprint on the FR-4 PWB
Figure 6 Gull wing: Side view showing crack; top at 30x and bottom at 150x
Figure 7 Gull wing: Front view showing crack along sides and toe; top at 50x and bottom at 150x
SYSCON 2000
AND
THE DESA DATA RELAY SYSTEM
Norman Anderson
DESA/BALL
2251 Wyoming Blvd. SE
Kirtland AFB, NM 87117-5609, USA
[email protected]
ABSTRACT
KEY WORDS
Image Relay, Data Relay, COTS, DESA, LAN, WAN, Video Teleconferencing
INTRODUCTION
In 1992 a DESA test program required that large amounts of digital test data be collected
simultaneously at a number of locations over a very widely dispersed area. Once collected,
this data was to be transferred to a central location in near-real-time for in-progress
analysis. Since these tests were to be conducted continuously over a period of several
weeks, a cost effective solution was needed. In-house projects employing commercial
networking technology offered a ready solution. This first effort met all test data collection
goals: an aggregate 100 kbps data transfer rate, 100% data reliability, and data lag times of
less than 10 seconds. The system also provided automatic data archiving and remote test
site reconfiguration. The system used standard telephone lines, so no special
communications links were needed.
This basic capability was expanded to meet more demanding requirements over the
following two years. Data rates were improved, error correction processes were
automated, and equipment installation and removal procedures were streamlined. As
technology improved, equipment was replaced with easier to use, higher performance gear.
A number of data relay technologies were tried, and refinements made to best improve the
overall system. Portability concerns had been an issue from the start, and as commercial
demands for the “mobile” office grew, the resulting technology was incorporated. Other
projects began to use the basic concepts of this system, and a number of variants were
used to meet DESA’s diverse client needs.
The current iteration of the SYSCON 2000 system is based on a modular design that can
be tailored to meet a wide range of test requirements. Each module is designed around a
collection of off-the-shelf technologies that provide a range of performance and
compatibility that can be chosen to best match the data collection and management
requirements of a particular test program. The basic modules support the processes of data
collection, data transfer, test management, and records archive.
Data collection requirements in the field typically cover a wide range of technical
capabilities. Most data collection is supported by personal computers (PCs) with a number
of interface options. The most common interfaces are to analog signals and digital serial
data. One of the desirable features of using a PC for these interfaces is the large variety of
interface “adapters” available. Multi-channel analog measurements taken at hundreds of
thousands of samples per second are common, and can be made using a wide variety of
vendors' products. These measurements can easily be made “smart” through the use of
COTS control software. Hundreds of channels can be supported by a single PC, and PCs
can be combined on a local area network to provide an almost unlimited number of analog
channel measurements. Data from digital sensors, typically provided in a serial data stream
at rates up to 115kbps, can likewise be collected via PCs and combined using local area
networking technology to provide collection and transfer support for hundreds of channels.
More specialized data collection requirements can be met in a similar fashion. Such was
the case for a recent set of requirements that included one for digitally captured and stored
test control voice communications. A number of industrial-grade imbedded PCs were used
with audio digitizer boards to provide a 6 channel capture and storage system that was
controlled via a local area network. The entire system occupied a 4U high equipment rack
box, and was integrated, tested, and deployed for field use in less than 30 days.
Digital and analog output, and automated closed loop control are also available using PC-
based industrial control modules. Interfaces to test equipment via IEEE-488 or EIA-232D
protocols are common, and also enable remote control of this equipment from throughout
the test network. Although laptop and industrial-grade portable computers supply the bulk
of the interface control, sometimes environmental or large capacity requirements force
other solutions. For these conditions, industrial-grade rack-mount computer and digital
storage systems are used. While more bulky than the typical PC used in field work, these
systems will perform flawlessly under the harshest field conditions.
Once the PCs have collected the data in the field, there are a number of choices for
transporting the data to the centralized test control activity. For single collection points
(those using a single PC), the method of choice is via serial data line over existing media.
Depending on data rate requirements and the acceptability of the estimated cost, this media
could be a cellular phone link, a telephone line, a point-to-point RF data link, or even a
satellite data link. All these methods have been used on recent projects. Where there are
several data collection points essentially co-located, the PCs are typically inter-connected
using a COTS local area network (LAN). This interconnect provides access to a shared
data channel that transports data at rates between 10 and 100 Mbps.
This network can be designed around COTS components using a variety of topologies and
physical media. The exact design chosen is dependent on test requirements and the
collection environment. The number of computers that can be multiplexed on the data link
is driven by the network topology and control protocols, but up to 20 are not uncommon.
Much larger numbers are possible, but more attention must be paid to system throughput
rates for a large system to supply acceptable performance. When near-real-time relay of
the test data is required by the data management scheme, the LAN is connected to a wide
area network (WAN) router. The router acts as a buffer and media adapter that allows the
information on the LAN to be transferred to remote locations (such as the test control center).
A key feature of current WAN router technology is the ability to interface to a number of
transfer media, and to automatically scale the use of these resources on demand. Each of
the media access interfaces (typically 8 or 16 channels on most routers) can be configured
for synchronous or asynchronous data transfer at rates from below 1kbps to 45Mbps. The
router selects the type and number of channels to be used based on a predefined scheme
and the current data transfer demand. Automatic channel management has several
advantages. For metered service, costs can be controlled. Alternate data routes can
automatically be activated if a link fails while in use. One easily overlooked advantage to
the use of router technology is that of multiplexing lower data rate channels to provide a
virtual high-speed data channel where one does not exist. This capability is typically used
at sites where the only available communications media is public switched telephone
network (PSTN) lines. In a typical installation using a WAN router, 16 telephone lines can
be multiplexed at data rates of 28.8kbps (and sometimes higher), providing a theoretical
460.8kbps data rate. Network and multiplexing overhead typically reduce this value by
approximately 50%. The real advantage to this approach is that no special arrangements
(or additional expense) are necessary for installation of a high speed data line to get high
speed data performance.
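The multiplexing arithmetic above is easy to sketch; the 50% overhead figure is the text's estimate, not a property of any particular router:

```python
def effective_rate_kbps(lines, line_rate_kbps, overhead_fraction=0.5):
    """Raw and usable throughput of multiplexed telephone lines.

    The ~50% overhead fraction is the estimate quoted in the text.
    """
    raw = lines * line_rate_kbps
    return raw, raw * (1.0 - overhead_fraction)

raw, usable = effective_rate_kbps(16, 28.8)   # raw ~ 460.8 kbps theoretical
```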
When PCs are used in a data collection system, and the system is joined together using
LAN and WAN technologies, real-time remote test management becomes a real capability.
Desktop video teleconferencing can provide test site control from a centralized test
operations facility. Beyond the face-to-face implementation that is a typical use for this
technology, DESA has demonstrated the ability to relay near-real-time sensor data and
video. These same tools provide remote sensor control. The key feature here is that the
software and hardware necessary to implement this capability is COTS. Therefore the cost
is low, the technology is improving rapidly with industry funding the research efforts, and
the products exist in a competitive environment. The one drawback to using LAN/WAN
technology for test control is that the process is not deterministic, i.e. the time of arrival of
the data will vary slightly with traffic load, and data packets can be lost in the transfer
process. Thus, this technology is not suitable for extremely critical operations (such as
high-speed event control) unless measures are taken to provide redundant data paths and
prevent information bottlenecks.
The final piece to the SYSCON 2000 design is automated data archiving. COTS software
and hardware designed specifically to operate in a LAN/WAN environment provide
redundant data storage at the point of collection as well as at the test control center. With
the current state of the art in digital image compression, even the most unwieldy image and
data collection and storage requirements can be supported with COTS hardware and
software.
FUTURE DEVELOPMENTS
The basic premise of SYSCON 2000 is to employ COTS hardware and software to meet
portable data collection requirements. Future developments of the system are thus driven
by availability and customer need. There are, however, a few trends. Recent commercial
developments in video data compression are under review for inclusion in the SYSCON
2000 “tool kit”. This capability will serve to reduce bandwidth requirements for high
quality video imagery. As computing power increases and dimensions decrease, more and
more systems under test are able to host a built-in PC for data collection and relay. At least
one test platform has been considered for an entire LAN/WAN suite. As commercial low-
bandwidth, worldwide services (such as Motorola’s Iridium system) become available,
these data channels will be employed for the low data rate, highly mobile test articles.
CONCLUSION
The SYSCON 2000 system has proven to be a cost-effective solution for data transfer and
test control in off-range applications. The flexibility of the overall topology and the inherent advantages of COTS hardware and software have allowed DESA to satisfy customer demands for high quality testing at a low cost.
ACKNOWLEDGEMENTS
I would like to acknowledge Mr. Steve Fox (of Southwest Software Associates) for his
insight in describing the original concept that eventually led to the SYSCON 2000 project.
Mr. Jason Vargas (of Ball Systems and Engineering Operations) and his work to optimize
the data transfer configuration on several versions (including the latest) of the SYSCON
2000 wide area networking protocols were instrumental in proving that reasonable data
transfer rates were possible with the system design.
REFERENCES
CISCO Systems Router Products Configuration and Reference, Release 9.1, September
1992
Robert J. Reid
Nancy Callaghan
ABSTRACT
The 21st Century Live Play (21CLP) program is developing a mobile, low cost, wireless
networking system that supports applications to provide a number of services for military
use. 21CLP is a joint Defense Advanced Research Projects Agency (DARPA) and Central
Test and Evaluation Investment Program (CTEIP) project. The Naval Undersea Warfare
Center, Division Newport (NUWCDIVNPT), Code 382 has been assigned as the program
manager for a T&E version of the 21CLP system.
The 21CLP vision is a common instrumentation function that links, in real-time, live land,
air and maritime entities together with a virtual battlespace in any location where forces
are deployed or being trained, weapons systems are being tested and evaluated, and
ultimately where missions are being conducted. This vision will be realized with an
embedded, mobile, distributed, untethered system that requires little or no site preparation.
KEY WORDS
INTRODUCTION
The objective of Twenty First Century Live Play (21CLP) is to demonstrate a technology
development venture that provides real-time mobile networking services end-to-end.
21CLP will develop a networking architecture that allows the interaction of live and virtual
entities in a common battlespace during live Test and Evaluation (T&E), training and
actual war. This network architecture supports maritime, land and air entities. 21CLP
consists of a mobile, tetherless, entity-level common network architecture that serves as an
applications pipeline, providing dynamic bandwidth to the battlefield with a guaranteed
Quality of Service (QoS). The distinguishing feature of this development is that the
architecture focuses on providing connectivity from the entity (i.e., a soldier, vehicle, or
system under test) upward to existing Command, Control, Communication, Computers and
Information (C4I) systems. Figure 1 below depicts the vision for coupling live and virtual
entities for training and testing. The 21CLP network technology will provide connectivity
to and between all entities on the live range.
Figure 1. 21CLP network connectivity: satellite relay, airborne equipment, UAV relay, maritime equipment, handsets, mobile ground network, fixed backbone network, radio relay, and control center
21CLP is a joint project supported by CTEIP and the DARPA Warfighter’s Internet (WI)
program. The CTEIP envisions this coordinated effort as a way of injecting DARPA’s
advanced technology developments into T&E. WI is satisfying the mobile networking
requirements for several application programs including T&E. This program will provide
proof of concept demonstrations that show C4I and networking can be performed at the
combatant level during battle and training with interaction of various entities, within a
common architectural design that provides communication connectivity. The major goal is
to provide real-time networking services to the combatant. The joint program explores and
evaluates the technical alternatives, refines the most capable options, identifies high risk
areas and then develops and demonstrates the resulting capability. The WI/21CLP project
provides the mobile wireless real-time networking infrastructure extension to the Test and
Training Enabling Networking Architecture (TENA) and the Real-Time Information
Transfer Networking (RITN) under DARPA’s Synthetic Theater Of War (STOW)
projects.
Live testing and training are significantly enhanced by the use of Distributed Interactive
Simulation (DIS) to support the interaction of live and virtual entities in a common
battlespace. This requires the need to develop a common architectural approach which
provides the capability for live instrumented entities to interact with virtual entities within
the battlespace, at any time and in any location with little or no site preparation.
With a common wireless and wired real-time Internet infrastructure, one can also provide,
at some level, a sharing of sensor information with the forward and rear forces. This
capability can aid C4I objectives as well as individual Warfighter’s ability to make tactical
decisions.
During the T&E of systems, there is a requirement to analyze the system under test during
real-time. The T&E goal is to generate realistic operational scenarios with which to stress
the system, collect real-time data regarding the state of the system as a function of time,
provide range safety and control during the conduct of the testing, process, archive,
analyze data relative to the performance of the system and provide system performance
assessments supporting the system developers and operators. The WI/21CLP project will
provide the data for this kind of assessment and reduce the time needed for the After
Action Review (AAR) process, allowing modifications to the test or mission plan in real-
time.
NETWORK ARCHITECTURE
The network architecture will strive to have functionality that is supported by a scaleable
(i.e., variable number of users) system architecture developed using object oriented
standards where practicable and in consonance with the High Level Architecture (HLA),
DIS (IEEE 1278.x) and the TENA. The architecture must be sufficiently comprehensive so
as to guide the development of subsystems, identify interface requirements and select
standards which need to be demonstrated. Additionally, efficient Internet protocols for this
system shall be considered.
The network architecture for WI/21CLP will consider compliance with guidance provided
in the Department of Defense (DoD) Technical Architecture Framework for Information
Management (TAFIM). The architecture will also be developed to incorporate widely
accepted commercial standards for information processing and information transport as
needed by the program. The philosophy of design will be to adopt rather than develop,
except where technical needs dictate otherwise.
Figure 2 below depicts the vision for coupling live and virtual entities for training and
testing. In order to support human-to-human and human-to-simulation interactions, the
communications network must be capable of transmitting messages between entities within
a maximum of 0.3 seconds, which is the empirical time delay beyond which a human loses the
sense of realism. This places a considerable constraint on the design of the system. The
network must also support multimedia capability, which will require a dynamic bandwidth
allocation scheme to allow large throughput and full use of network bandwidth. An
instance of the use of this capability may be live imagery sent from forward entities to rear
entities to enhance the rear entities' battlefield assessment.
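As a concrete illustration of the kind of entity-state messaging these latency and bandwidth constraints govern, the sketch below packs a hypothetical, simplified entity-state message (this is not the actual DIS PDU layout; the field choices are assumptions for illustration only) and checks a delivery time against the 0.3-second realism budget:

```python
import struct

# Hypothetical, simplified entity-state message -- NOT the real DIS PDU
# layout. Network byte order: uint32 entity id, double timestamp (s),
# three doubles for position (m). Total: 36 bytes.
FMT = "!Id3d"

def pack_state(entity_id, timestamp, x, y, z):
    return struct.pack(FMT, entity_id, timestamp, x, y, z)

def unpack_state(buf):
    entity_id, ts, x, y, z = struct.unpack(FMT, buf)
    return entity_id, ts, (x, y, z)

def within_latency_budget(sent_ts, recv_ts, budget=0.3):
    # 0.3 s is the empirical human-realism bound cited above.
    return (recv_ts - sent_ts) <= budget
```

A small fixed message of this sort could be multicast to the interacting entities; the real system would of course use the DIS/HLA-defined formats.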
The architecture will need to be designed with a possible multilevel security scheme to
allow protection of data via encryption at the appropriate level. For combat, the
architecture will need to allow for a stringent authentication service so that information on
the network will not fall into enemy hands. Although the WI/21CLP architectural design
will consider security, early prototypes will not include a multilevel security system.
The trend in T&E ranges is towards the linking of the various distributed facilities and
assets for interoperability and remote asset use. This can be achieved by networking the
ranges together and utilizing a common architecture which uses standard protocols,
including DIS, and compatible, interoperable instrumentation and interface packages.
WI/21CLP extends this concept so that entities or aggregates of entities can be tested on
or off any dedicated range. This mobile capability is supported by a hybrid layered
hierarchical solution that combines entity LAN, cellular telephone, packet radio (or UAV
relay) and ATM gateway technology to support multimedia services that are scaleable
without site preparation, maintaining end-to-end QoS in dynamically changing
environments. The tetherless support concept is also a key issue to using the same
communications architecture in actual war situations and to extending training to the pre-
staging environment.
Figure 2. Live-range LANs (NMC, ECC, SCC, MRC, RCC) connected through the DSI
backbone to the virtual battlespace.
The concept for the WI/21CLP is to integrate the appropriate
DARPA, Naval Research Laboratory (NRL) and U.S. Army Communications-Electronics
Command (CECOM) technologies, as well as to develop new technologies as required.
The DARPA Global Mobile InfoSystem (GLOMO) program is developing technologies to
address mobile networking problems. The WI/21CLP
project will capitalize on these technology investments. Additionally, CECOM is
supporting the Battlefield Information Transmission System (BITS) program. BITS is the
Army’s plan for communications on the future digital battlefield. Sea Dragon is the
Marine’s thrust to support their Naval expeditionary role which requires Command and
Control (C2) and networking. These technologies will also be leveraged in the
development of the WI/21CLP. Finally, NRL is developing the Real-Time Information
Transfer Network (RITN) for STOW as well as the NRL Data/Voice Integration Advance
Technology Demonstration (ATD). This ATD supports the mobile fleet with multimedia
services. These and other applicable DARPA/DoD emerging technologies will be
employed in the development of the WI/21CLP.
Several emerging commercial technologies provide the means for implementing proof-of-
concept engineering demonstrations for the WI/21CLP architecture. Commercial
technologies, along with an Open Systems approach, make this development feasible.
Some major emerging technologies and standards being considered in this
architecture are:
Asynchronous Transfer Mode (ATM) and Broadband Integrated Services Digital Network
(B-ISDN) are emerging commercial standards for efficiently and globally networking
multimedia applications at high speeds. The technologies to implement ATM standards are
evolving rapidly.
New Protocol Data Units (PDU) are being developed, under the aegis of the DIS
Standards process, to specifically address unique constraints of radio frequency
connections in simulated environments.
REQUIRED CAPABILITIES
Diverse Environments The network architecture must operate worldwide in diverse and
harsh environments (urban, jungle, mountains, plains, etc.) and in all types of weather
conditions. Furthermore, the operating environment will not require a pre-established
terrestrial infrastructure.
Multimedia Services The network architecture must support a variety of network services
including voice, data, graphics, and video. Dynamic bandwidth allocations are required
and may need the use of adaptive modulation, link control and coding. Data error rates
must be low and a latency time of 0.3 seconds between entities is required for critical
events.
Security The network must be secure end-to-end and include multi-level encryption.
Authentication and digital signatures should also be addressed as well as the protection
scheme from enemy “spoofing” in actual warfare.
Scalability The network architecture must enable the exchange of C4I messages at the
entity level, which requires the conceptual development of cellular peer-to-peer
communication protocols. The number of entities could potentially be large, on the order
of 100,000, with a ratio of one live participant to every ten virtual and constructive
participants. Other network attributes include dynamic multicast groups, connection and
connectionless services, and distributed control.
Standard Sensor Interface The network architecture must be able to provide standard
protocols to planned sensors such as Global Positioning System (GPS), combat status
monitors, special sensors such as a real-time casualty assessment system (RCAS) and
emerging C4I systems such as the Personal Status Monitors (PSM).
Seamless The network architecture must achieve seamless interaction between live and
virtual entities to support complex T&E and training exercises.
Mobile The network architecture must support mobile, untethered communications at the
entity level for T&E exercises, training exercises and actual war, with no investment in
range infrastructure required.
The WI/21CLP project will provide the seamless network interfaces between sensors,
radios and individual combatants and the operational/strategic networks (e.g., the Defense
Simulation Internet (DSI) and the Defense Information Systems Network (DISN)) to
provide end-to-end networking services. The architecture will be developed in such a way
as to provide a generic interface for sensors while providing the networking services
required. The following are a number of applications that require these networking
services.
T&E Test and Evaluation requires the generation of realistic operational scenarios with
which to stress the system under test, collect real-time data regarding the state of the
system as a function of time, provide range safety and control during the conduct of the
testing, process, archive, analyze data relative to the performance of the system and
provide system performance assessments supporting the system developers and operators.
The WI/21CLP will provide the data for this kind of assessment and reduce the time
needed for the after action review process, allowing modifications to the test or mission
plan in real-time.
Training Training requires monitoring live player state, scoring combat engagements,
communications, control of the operation, logging of data and communications, and the
coupling of live, virtual and constructive entities in support of simulations.
GPS The Global Positioning System will interface with the architecture during the
demonstration to show the capability of locating an entity without using range equipment
(the tetherless concept). The GPS system has the potential to work at any location on earth,
with coverage 24 hours a day, 365 days a year.
PSM The Personal Status Monitor being developed for DARPA is another possible
candidate for an application to use the network architecture. The PSM provides entity
location and health status of each human entity on the battlefield. This technology can help
to reduce battlefield casualties by providing a locating and pre-triage capability for the
medical corps.
BADD The Battlefield Awareness and Data Dissemination program, being developed by
DARPA, is in search of technologies to deliver a consistent picture of the battlefield to the
warfighter. BADD will be providing tools to support the changing battlefield situations.
DIS The next generation DIS standard, 3.X, will define a more flexible set of protocols in
order to streamline communications for the growing number of participants in simulation
exercises. Use of the DIS protocol and the Defense Simulation Internet (DSI) can provide
simulated battleforces to play against the live entities. This capability will need to be
incorporated at the entity level.
21CLP is a joint project supported by CTEIP and the DARPA Warfighter’s Internet
program. The CTEIP envisions this coordinated effort as a way of injecting DARPA’s
advanced technology developments into T&E. This joint venture is structured as a four
year Concept Development program to develop advanced technologies to support the
21CLP vision. At the end of the concept development program, the CTEIP will initiate a
new project to instantiate 21CLP technologies at a single range and DARPA will transition
the technologies to the Military Services. Because the first half of the program will focus
on Research and Development (R&D), DARPA funding will be greater than CTEIP
funding. The last half of the program will focus on the integration of resultant R&D
technologies and consequently, the CTEIP funding will increase once the technology is in
place. The WI has been proposed as a DARPA new start program to be funded by
DARPA in 1998, at which time the program will be executed as described here.
Lossless Compression of Telemetry Data
Abstract
1. Introduction
2. Schemes Examined
Compression algorithms can be classified into the following four classes, which will be
described in some detail in subsequent sections. We list these classes in order of
increasing complexity:
- ad-hoc techniques
- dictionary techniques
- statistical techniques
- prediction techniques
The ad-hoc techniques deserve some attention, for this class includes the well-known run-length
encoding algorithm. This algorithm is applicable to data containing long runs of like
symbols (or bits). A run-length encoder achieves compression by replacing a run of like
symbols with a codeword indicating the length of the run. The ad-hoc methods
covered here are two types of run-length coding (one for runs of 0's and one for runs of
1's) as well as a two-mode run-length coding approach.
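The zero-run-length idea can be sketched as follows (an illustrative codec, not the exact ZRL implementation evaluated here):

```python
def zero_run_length_encode(bits):
    # Encode a bit string as the lengths of its zero runs, in the spirit
    # of the ZRL scheme described above (illustrative sketch only).
    runs, count = [], 0
    for b in bits:
        if b == '0':
            count += 1
        else:
            runs.append(count)  # run of zeros that preceded this '1'
            count = 0
    runs.append(count)          # trailing zero run
    return runs

def zero_run_length_decode(runs):
    # Rebuild the bit string: a '1' separates consecutive zero runs.
    return '1'.join('0' * r for r in runs)
```

Compression is achieved when the run-length list, suitably coded, is shorter than the raw bit string, which is why the scheme only pays off on data with long runs.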
The dictionary schemes are probably the best known. We are interested, in
particular, in the widely used Lempel-Ziv algorithms, which reside in this class. These
schemes achieve compression by replacing commonly occurring strings (listed in a
dictionary shared by the encoder and decoder) with indices or pointers to the strings. LZ77,
LZ78, LZW (Lempel-Ziv-Welch) [2], UNIX compress, and PKZIP are some
examples of this category.
The statistical schemes, which include the popular Huffman codes and arithmetic
codes, are based on a statistical model of the source data. The design of a robust
compression system therefore entails designing an adaptive modeler for the data source.
This adds to the complexity of the compression code, but that is the price to be paid for
optimality. NASA's Rice algorithm is another technique in this class; it assumes
several statistical models, utilizes several Huffman encoders simultaneously and
chooses the best performer on the fly.
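As a sketch of the statistical approach, the following illustrative Huffman construction (not the adaptive modeler or the Rice chip logic) derives optimal code lengths from symbol frequencies:

```python
import heapq
from collections import Counter

def huffman_code_lengths(data):
    # Build a Huffman tree over symbol frequencies and return the code
    # length assigned to each symbol (frequent symbols get short codes).
    freq = Counter(data)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tie-breaker, {symbol: depth-so-far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)   # two least-frequent subtrees
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in a.items()}
        merged.update({s: d + 1 for s, d in b.items()})
        heapq.heappush(heap, (fa + fb, tick, merged))
        tick += 1
    return heap[0][2]
```

For the toy input `b"aaabbc"` the frequent symbol `a` gets a 1-bit code while `b` and `c` get 2 bits, matching the intuition that statistical coders spend fewer bits on likely symbols.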
The prediction schemes are based on well-known voice coding schemes. The idea is to
exploit the correlation in consecutive waveform samples to minimize the number of bits per
sample used in the digital representation of the waveform. This is done by predicting the
value of the next sample (based on previous samples) and digitizing the difference between
the predicted and actual values. Because of this correlation, the difference can be
expected to be small, so fewer bits are required for its representation. This difference
encoding scheme is then followed by one of the lossless schemes mentioned above.
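The difference-coding idea can be sketched with the simplest possible predictor, the previous sample (the schemes evaluated later in this paper use more elaborate predictors):

```python
def residues(samples):
    # Previous-sample predictor: predict s[k] as s[k-1] and keep only the
    # (typically small) differences. Lossless: the original sequence is
    # fully recoverable from the residues.
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def reconstruct(res):
    # Invert the difference coding by running sums.
    prev, out = 0, []
    for r in res:
        prev += r
        out.append(prev)
    return out
```

On slowly varying telemetry the residues cluster near zero, so the follow-on lossless coder sees a far more compressible sequence than the raw samples.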
H = - Σi Pi log2 Pi (bits/symbol) (1)
where Pi is the probability of the ith character in the source data and n is the source
alphabet size. The entropy can be shown to be a lower bound on the average codeword
length (in units of code bits per source symbol).
Equation (1) is useful only for measuring the information content in a memoryless
source because the probabilities that are used in this equation do not reflect any correlation
among the characters. The correlation between symbols tends to go to zero as the
displacement between the symbols increases. The definition of the entropy can be
modified as follows to incorporate the correlation among the characters (or memory in the
source) [3]:
Hp = - Σ P(s) Σ P(c|s) log2 P(c|s), s ∈ Sp, c ∈ S (2)
where p is the order of the entropy and the memory order in the source, S is the set of
characters produced by the source, and Sp is the pth-order Cartesian product of S:
Sp = S × S × ... × S (p times) (3)
If H is converted to units of code bits per source bit, then it is a lower bound
on the compression ratio1 for any lossless compression method. Thus, any compression
scheme yielding a compression ratio less than the entropy necessarily loses information
(such lossy compression is acceptable in image and voice coding applications).
Table 2 shows the zeroth- and first-order entropies for the various files. In each case a
logarithm base of 256 was employed, yielding units of information bytes per data byte.
(Recall that log2 yields units of information bits per data byte.)
The Table 2 values show that one can achieve compression to only about half of
the original size of the file. In other words, the redundancy in the telemetry files is about
half of the original size of the files.
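These entropy estimates are straightforward to compute empirically. The sketch below uses a base-256 logarithm, the convention used for Table 2; the conditional form of `entropy_orderp` is our reading of Equation (2):

```python
import math
from collections import Counter, defaultdict

def entropy_order0(data, base=256):
    # Equation (1): H = -sum Pi * log(Pi). base=256 gives information
    # bytes per data byte, the units used in Table 2.
    n = len(data)
    return -sum((c / n) * math.log(c / n, base)
                for c in Counter(data).values())

def entropy_orderp(data, p, base=256):
    # Entropy of the next symbol conditioned on the previous p symbols
    # (contexts drawn from S^p), estimated from empirical frequencies.
    contexts = defaultdict(Counter)
    for i in range(p, len(data)):
        contexts[data[i - p:i]][data[i]] += 1
    n = len(data) - p
    h = 0.0
    for ctx_counts in contexts.values():
        total = sum(ctx_counts.values())
        for c in ctx_counts.values():
            h -= (total / n) * (c / total) * math.log(c / total, base)
    return h
```

A file of uniformly random bytes yields an order-0 entropy of 1.0 byte per byte (incompressible), while a strictly alternating file has positive order-0 entropy but zero first-order entropy, illustrating how memory in the source lowers the bound.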
In this section, the compression results of all previously described compression routines
are given. Some of the results are parameter dependent; for these cases, only the best
results are presented. In each category, we have highlighted the routine or routines giving
the best compression efficiency with the symbol i.
All of the routines read 8-bit words unless otherwise explicitly noted. For
comparison purposes, first-order entropies are given in each table in parentheses under the
file name.
The ad-hoc (run-length) routines listed in Table 3 were simulated. The programs were
written in the C programming language using the Borland C v3.1 compiler and run under
MS-DOS. The compression ratios and first-order entropies are summarized in
the table.
1 The compression ratio is defined to be the compressed file size divided by the original
file size.
- ZRL : Zero Run-Length Coding [6]
- STMRL(s0,s1) : Shifted Two-Mode Run-Length Coding [6]. (All samples are shifted
to the left to make the most probable sample value equal to zero.)
The shift values for files outch1 through outall are 133, 123, 121, 0 and 121
respectively.
We observe that only the shifted two-mode schemes provide any real compression, but
only for the original, uninterleaved files.
The statistical routines listed in Table 4 were simulated. The Rice algorithm compressor is
the simulation program for Advanced Hardware Architecture’s AHA3370 Rice Chip
(obtained from AHA). The notation in the table is defined as follows:
We observe from the table that 3rd-order arithmetic coding provides the greatest
compression, with Rice(512) coming in second.
The dictionary routines simulated are listed in Table 5 together with their compression
results. The notation in the table is defined as follows:
We observe immediately that the DOS program lz15v and the UNIX program
compress utilize roughly equivalent algorithms and yield the best performance.
Type Of Compression  Outch1   Outch2   Outch3   Outch4   Outall
                     (0.351)  (0.339)  (0.323)  (0.136)  (0.308)
LZSS                 0.513    0.508    0.498    0.268    0.499
LZ12                 0.414    0.406    0.397    0.951    0.626
iLZ15v               0.405    0.401    0.387    0.266    0.412
iUNIX COMP           0.405    0.401    0.387    0.266    0.406
PK2.04g              0.423    0.423    0.388    0.235    0.407
ZOO210               0.424    0.412    0.398    0.269    0.427
iHA098               0.406    0.412    0.385    0.225    0.398
The MMSE (Minimum Mean Squared Error) technique can be divided into two parts. The
first part calculates the coefficients applied to previous samples to predict the current value,
by reading the source file entirely or in part. The second part reads these coefficients,
calculates the residue sequence and applies the bi-level coding scheme rules to compress
the residue sequence. The interleaved file outall4.dat was the only file for which the MMSE
prediction algorithm gave residue values outside the -127 to 128 range (i.e., requiring more
than 8 bits). The other four data files produced residue sequences which required only 7 bits
per residue to represent.
The MMSE predictor/residue algorithm was combined not only with Stearns’ bi-level
coding scheme [5], but also with a Lempel-Ziv scheme (lz15v) and a 3rd-order arithmetic
scheme (arith-n). Table 6 lists the compression results for these combinations.
is used, where pk is the estimate for the current value, sk-1 is the sample preceding the
current (k-th) sample, and Wk = (M1,k + M0,k)/2, where M0,k = M1,k - 15 and
(4)
The idea behind this modified "mean" Wk is to "hard limit" abrupt changes in the data.
The arithmetic compression routines do not depend on the size of the input word, and
hence the 6- and 7-bit residues are not problematic for this algorithm. The Huffman
algorithm shares this characteristic, but the Lempel-Ziv and Rice algorithms must be
modified to accommodate these shorter input word sizes. The AHA version of the Rice
algorithm was designed to allow input word sizes ranging from 4 to 14 bits. For Lempel-
Ziv coding, the program lz15v was modified to accept 6- and 7-bit inputs (the argument in
parentheses gives the input word size in bits). The compression results are given in Table 7.
Type Of Compression  Outch1   Outch2   Outch3   Outch4
                     (0.351)  (0.339)  (0.323)  (0.136)
LZ15v(6)             NA       0.512    NA       NA
LZ15v(7)             0.457    0.514    0.467    0.328
LZ15v                0.460    0.517    0.469    0.329
RICE                 0.865    0.875    0.865    0.869
i3RD ARIT            0.397    0.457    0.412    0.243
AD-HUFF              0.483    0.494    0.531    0.458
Despite good results, this simple predictor is still inferior to the Stearns technique, as
expected, which in turn is inferior to the lz15v algorithm by itself, as seen in Table 8.
Because it was not convenient to run the algorithm on the microcomputer board for which
it is targeted, we tested it on a few different platforms (and compilers). In
Table 9, run-time results are given in seconds for selected effective compression routines.
These routines were run on a 486 PC with 8MB of RAM under the Borland C compiler and
in two different UNIX environments: the NMSU hosts Dante and Paris. Dante is a Sun
4/670 and Paris is an IBM RS/6000 workstation. The values in the table are rounded to the
nearest second. Thus, an entry of 3 * 2 * 1 in the table indicates that the program took 3
seconds to run on the DOS machine, 2 seconds on Dante, and 1 second on Paris.
As the results show, the Lempel-Ziv routine not only gives among the best compression
ratios as demonstrated above, but also takes the least time to compress the source data.
The purpose of this study was to find a real-time and lossless compression routine that
gives the best compromise between compression speed and efficiency, with applications to
telemetry data.
The ad-hoc run-length coding routines gave only moderate compression ratios at best.
Among the statistical routines, 3rd-order arithmetic coding performs the best, but its
compression speed is quite slow. The Lempel-Ziv (dictionary) routines gave compression
results that are close to the best possible. In particular, lz15v, compress, and ha098 give
the lowest compression ratios after the 3rd-order arithmetic routine. We point out
that UNIX's compress gave the fastest compression among the three when all of them are
run in a UNIX environment. The complex linear prediction methods performed quite well,
but run slowly and usually yielded compression ratios inferior to those of the Lempel-Ziv
routines.
7. Acknowledgment
This work was funded by Sandia National Laboratories under grant number AM-1922.
References
[3] T. Bell, J. Cleary, and I. Witten, Text Compression, Prentice Hall, Englewood
Cliffs, NJ, 1990.
[6] Omer F. Acikel, Lossless Compression of Telemetry Data, Technical Report, New
Mexico State University, 1995.
CAPTURE METHOD FOR SPREAD SPECTRUM ALOHA SIGNALS
ABSTRACT
KEY WORDS
INTRODUCTION
Dual-purpose satellite systems that would handle messages and supply
positioning information were developed after Professor G. K. O'Neill presented the dual-purpose
satellite concept in 1982 [1]. The Radio Determination Satellite Service (RDSS) came from that
concept. It operates at radio frequencies allocated by the International Telecommunication
Union (ITU) and is licensed in the United States by the Federal Communications
Commission (FCC). In China, a similar system, called communication and positioning
based on two satellites (CPTS), has been under development since 1986. Two trials of the CPTS
system were carried out in 1989 and 1995, respectively. Systems similar to CPTS have also
been developed in other countries [2-4]. CPTS uses spread spectrum (SS) to measure the
distances from the users to the two satellites and uses a slotted-ALOHA protocol in which users'
transceivers send their response messages at the beginning of the subframe broadcast from the
center. In this paper, we present an SS-ALOHA multiple-access protocol used for CPTS.
SS-ALOHA differs from the spread ALOHA suggested by Dr. N. Abramson at the
University of Hawaii. The SS-ALOHA concept means that a number of users use the same SS-
encoding sequences and the same carrier frequency, with random arrival times under slotted-ALOHA
access. In an SS-ALOHA system, it is not necessary to use different spreading sequences
for different users, but a new synchronization code must be built in order to obtain an identical
packet preamble and a larger processing gain. In this paper, we present a combination of
concatenated sequences with a Barker code for this purpose, and a capture method for the
synchronization code by means of SAW devices. Capture experiments on SS-ALOHA signals
were carried out using a SAW-MFC in the laboratory. A few correlation results from the
experiment are given in a later section.
The configuration of CPTS is based on the structure of RDSS. The space segment has two
satellites, and the ground segment of the system consists of one controlling and data
processing center (also called the center station) and nearly a million users. According to the
timing protocol of CPTS, users must send their messages at the beginning of the subframe
broadcast from the center; this is the basis for measuring the distances from the users to the two
satellites. Given the distances between a user and the satellites, the host computer in the center
can calculate the user's latitude and longitude, insert the position information into the
broadcast superframe, and transmit it to one of the two satellites, which repeats the information
to the ground where it is received by the user. If a CDMA scheme were used in the system, the
center would be required to build a number of correlators or matched filters so that the
different users could be captured by different correlators. Certainly, one programmable
correlator could serve many users, but the capture time would then become long. We therefore
propose using the same PN sequence for the signals of different users. A user's packet
consists of a synchronization head, an information segment and an error-correcting segment;
in this paper, only the synchronization head is considered. With spread spectrum applied to the
users in a slotted-ALOHA channel, overlapping packets are not necessarily lost: the center can
distinguish overlapping signals sent by users with different times of arrival. Because the users'
signals have a packet format in an ALOHA channel and also have spread spectrum
characteristics, the communication protocol is called spread spectrum ALOHA.
Spread spectrum ALOHA differs from the spread ALOHA presented by N. Abramson in
1985, although the two have similar channel capacity performance. In spread ALOHA,
each bit of a packet is first spread in time and then modulated by a PN sequence, whereas
SS-ALOHA maintains the data rate and modulates directly by the PN sequence. Using the
signal dimensionality theorem [5], a signal can be represented in a signal space of dimension
D = 2·BD·TD, where BD is the bandwidth of the signal and TD is its duration (the signal
energy outside TD is zero). The same signal can also be represented in a higher-dimensional
space of n = 2·BS·TS dimensions with n > D, where BS is the bandwidth and TS is the
duration of the spread spectrum sequence. By embedding the low-dimensional
(D-dimensional) signal in a high-dimensional (n-dimensional) space, a processing gain
GA = 2·BS·TS / (2·BD·TD) is obtained. The same large time-bandwidth product can thus be
obtained either by spreading the signal bandwidth or by spreading the signal in time; that is
the point of difference between SS-ALOHA and spread ALOHA.
SYNCHRONOUS CODE FORMAT AND CAPTURE METHOD
A single concatenated sequence, like the sequence AAAA... (where A denotes a PN
sequence), was chosen as the synchronization code in the first CPTS trial in China in 1989;
that synchronization code consisted of many m-sequences simply concatenated. The trial
showed, however, that the packet arrival time could not be fixed with that code, because
random correlation peak times emerged from the SAW circulating adder, and the result
changed with the threshold. The old synchronization code therefore cannot be used for
radio determination or distance measurement. In order to obtain an accurate correlation
peak, system timing and a large processing gain, a fixed-length synchronization code must
be chosen.
Concatenated sequences [6] are defined as combinations of two sequences such that each bit
of one sequence is further encoded by another sequence. In this paper, the new concatenated
sequences consist of three layers of encoded sequences. The first sequence is designated the
"outer" encoding sequence. The second sequence, used to encode each bit of the first, is
the "middle" sequence. The third sequence is called the "inner" sequence. The process of
encoding the sequences is shown in Fig.1. The "outer" sequence is Gd1Gd2Gd3G (d1≠d2
≠d3, measured in chips of the PN sequence); the "middle" sequence G, made of four
sequence-As, is encoded by the Barker pattern 1 -1 1 1; and the "inner" sequence A is a PN
sequence. We chose an m-sequence in the experimental system; in fact, M-sequences, Gold
sequences and other PN sequences can be used in practical spread spectrum systems. In
reference [6], some good arguments are given for taking a shorter sequence rather than a
longer one in a spread spectrum receiver, but only a processing gain of 60 (17.8dB) was
obtained, which does not meet the needs of a CPTS system operating at low
signal-to-noise ratio. Here we consider longer sequences, L>2048 chips, and three-layer
concatenated sequences. The aim of finding a new concatenated sequence is to obtain the
performance of a long sequence in the CPTS system while using shorter MFCs, thus
simplifying the receiver implementation and decreasing the acquisition time. On the other
hand, while using a long sequence of 2048 chips is feasible in theory, a 2048-chip MFC is
not easy to build, especially as a programmable SAW-TDL (Surface Acoustic Wave Time
Delay Line). To realize the CPTS system and reduce its cost, shorter SAW-MFCs should be
used in the practical spread spectrum system. The concatenated sequences presented here
are taken as the synchronization code of the SS-ALOHA packet, only a small part of the PN
signals of the CPTS system. A scheme for generation and acquisition of the concatenated
sequences is shown in Fig. 2. We choose SAW-DTLs as MFCs. The "inner" and "middle"
sequences are matched by a few SAW-DTLs, and the "outer" encoding sequence is matched
by digital shift registers. The length of the concatenated sequence is as follows:
L = (L1 + 4) + d1 + d2 + d3
The three theoretical processing gains, one for each MFC stage, are:
G1 = 10 lg L1
G2 = 10 lg 4
G3 = 10 lg 4
If we let L1 = 255, then G1 = 24dB and the total processing gain is 36dB.
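These stage gains are easy to check numerically (a small sketch; lg above denotes log10):

```python
import math

def gain_db(n):
    # Matched-filter processing gain of a stage that coherently
    # integrates n chips (or n sub-peaks): G = 10 * log10(n).
    return 10 * math.log10(n)

L1 = 255              # length of the "inner" m-sequence A
G1 = gain_db(L1)      # inner stage: ~24.07 dB
G2 = gain_db(4)       # "middle" stage (four Barker-weighted peaks): ~6.02 dB
G3 = gain_db(4)       # "outer" stage (four G groups): ~6.02 dB
total = G1 + G2 + G3  # ~36.1 dB, matching the 36 dB quoted above
```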
AN ACQUISITION SYSTEM
As shown in Fig.3, the experimental system for fast acquisition of the synchronization code
mainly consists of one programmable SAW-DTL, three fixed SAW-DTLs, four long digital
shift registers (DSR), a local PN code generator, etc. The signal input to the programmable
TDL is a bi-phase synchronization code modulating a 70 MHz IF. The local PN code
generator generates only sequence-A and its inverse, which is held in the taps of the
programmable DTL. When the concatenated sequence arrives at the programmable DTL
and a complete "inner" sequence A has entered it, an autocorrelation peak appears; as the
sequence then moves into the first fixed DTL and the next sequence-A fully enters the
programmable DTL, a negative (reverse-phase) autocorrelation peak is obtained. The output
of the adder is then zero, because the negative peak from the programmable DTL cancels
the positive peak from the first fixed DTL. When the last sequence-A of the "middle"
sequence AAAA has completely entered the programmable DTL, four autocorrelation peaks
appear simultaneously at the input of the adder, and the adder output is about four times the
amplitude of a single peak. The envelope of the peak is obtained by an envelope detector.
The envelope signal is compared with a threshold level chosen so that a good detection
probability Pd and a low false-alarm probability Pf are obtained. The detector output signals
then enter the DSRs, where they are delayed by different amounts, and the output signals of
the four DSR taps are added into the largest pulse. The output signals are shown in Fig.4-c.
A second decision then produces a synchronization pulse for the local code generator and
the tracking loop, so that the CPTS achieves synchronization and tracking. The waveforms
at each output node of the fast acquisition system are shown in Fig.4. Now consider briefly
what happens if the sequence AAAA forming G is not encoded by the Barker sequence:
what is the resulting correlation? The correlation process and autocorrelation peak are
shown in Fig.5-a; the largest side peak is three times the amplitude of a single peak, so the
ratio of side peak to main peak is 3/4. If the sequences are encoded by the Barker sequence
as given above, the largest side-to-main peak ratio is 1/4. The correlation process and
autocorrelation peak are shown in Fig.5-b.
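The effect of the Barker weighting can be checked with a short autocorrelation sketch over the four "middle"-layer peaks, idealizing each sequence-A correlation peak as a single +/-1 sample:

```python
def autocorr(seq):
    # Aperiodic autocorrelation of a +/-1 sequence at non-negative lags.
    n = len(seq)
    return [sum(seq[i] * seq[i + lag] for i in range(n - lag))
            for lag in range(n)]

barker = [1, -1, 1, 1]   # the "middle"-layer encoding pattern used above
plain  = [1, 1, 1, 1]    # unencoded AAAA

side_barker = max(abs(v) for v in autocorr(barker)[1:])  # largest side peak
side_plain  = max(abs(v) for v in autocorr(plain)[1:])
# Both main peaks equal 4, so the side-to-main ratios are 1/4 and 3/4,
# matching the Fig.5 comparison in the text.
```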
The reason for choosing d1≠d2≠d3 is that the signal peaks then always add coherently while
the noise adds incoherently, which reduces the random noise and improves the (S/N)in of
the spread spectrum receiver.
RESULTS
This section gives the parameters used in the experimental system of Fig.3 and some of the
experimental results. The experimental parameters are as follows:
The length of m-sequence A is L=255;
d1=4; d2=6; d3=2;
The clock of PN sequence generator is Rc=10MHz;
The intermediate frequency is Fc=70MHz;
Under an input (S/N)in = -15dB, we measured the (S/N) at each output point of the
experimental system as follows:
(S/N)a = 6.5dB
(S/N)b = 11.36dB
(S/N)c = 17.37dB
The processing gain at each output and the total processing gain Gtotal are as follows:
Ga = (S/N)a - (S/N)in = 21.5dB
Gb = (S/N)b - (S/N)a = 4.86dB
Gc = (S/N)c - (S/N)b = 6dB
and
Gtotal = Ga + Gb + Gc = 32.36dB
The acquisition probability Pd of the system is 0.9464, and the false-alarm probability Pf is 0.0091. The acquisition time of the experimental system is Tac = 0.80 ms.
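Since each stage gain is defined as a difference of consecutive S/N values, the stage gains telescope to the overall improvement, Gtotal = (S/N)c - (S/N)in. A quick arithmetic check in plain Python, using the measured values above:

```python
snr_in, snr_a, snr_b, snr_c = -15.0, 6.5, 11.36, 17.37  # dB, from the experiment

g_a = snr_a - snr_in   # 21.50 dB
g_b = snr_b - snr_a    #  4.86 dB
g_c = snr_c - snr_b    #  6.01 dB
g_total = g_a + g_b + g_c

# The sum telescopes to the overall input-to-output improvement:
assert abs(g_total - (snr_c - snr_in)) < 1e-9
print(f"Ga={g_a:.2f}  Gb={g_b:.2f}  Gc={g_c:.2f}  Gtotal={g_total:.2f} dB")
```

(The computed total is 32.37 dB; the 32.36 dB quoted above reflects rounding Gc to 6 dB.)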
CONCLUSION
The SS-ALOHA signal, capture method, and acquisition system presented above will be used in the CPTS in China. Their advantages are not limited to the areas discussed in this paper: the concatenated sequences can also be used to distinguish overlapping PN signals in a spread spectrum system, further improving its multiple access capability. We believe that concatenated sequences will also find use in personal communications networks (PCN) and global mobile communications networks (GMCN).
Fig.1 SS-ALOHA synchronous code
Fig.2 (a) Generator of SS-ALOHA synchronous code; (b) Capture scheme of SS-ALOHA synchronous code
Fig.3 Block diagram of the experimental system for fast acquisition of the SS-ALOHA synchronous code
Fig.4 Output wave forms of the experimental system (L = 255b, where b is one chip of the PN sequence)
Fig.5 (a) Combining sequences correlation; (b) Barker encoding sequence correlation
STATE MODELING AND PASS AUTOMATION IN
SPACECRAFT CONTROL
ABSTRACT
The Integrated Monitoring and Control COTS System (IMACCS) was developed as a
proof-of-concept to show that commercial off-the-shelf (COTS) products could be
integrated to provide spacecraft ground support faster and cheaper than current practices.
A key component of IMACCS is the Altair Mission Control System (AMCS), one of
several commercial packages available for satellite command and control. It is
distinguished from otherwise similar tools by its implementation of Finite State Modeling
as part of its expert system capability. Using the Finite State Modeling and State
Transition capabilities of the AMCS, IMACCS was enhanced to provide automated
monitoring, routine pass support, anomaly resolution, and
emergency “lights on again” response. Orbit determination and production of typical flight
dynamics products, such as acquisition times and vectors, have also been automated.
KEY WORDS
INTRODUCTION
IMACCS grew out of the Goddard Space Flight Center (GSFC) Mission Operations and
Data Systems Directorate (MO&DSD) Reusable Network Architecture for Interoperable
Space Science, Analysis, Navigation, and Control Environments (RENAISSANCE) effort.
The purpose of RENAISSANCE is to re-engineer the process by which the MO&DSD
builds and operates ground data systems, so that these processes become faster, cheaper,
and more flexible than has been the case. The use of COTS products was one area selected
for investigation. To demonstrate the feasibility of this approach, the IMACCS team
undertook and accomplished the task of building a working COTS-based ground support
system in 90 days. The original system mirrors the existing ground support system for the
Solar, Anomalous, and Magnetic Particle Explorer (SAMPEX) spacecraft. During the
IMACCS implementation, the potential for significant automation of ground support
became apparent, and the addition of extensive automation was a major component of the
second phase of development. This paper describes the main tool used in that process,
finite state modeling, as well as the results of the automation process itself. A detailed
description of IMACCS can be found in Scheidker et al., 1996.
The levels of the state hierarchy must be meaningful to the user. Systems broken out at the same level should hold the same degree of interest for the person monitoring their status. The states designated for a system should likewise be conditions of interest to the user. For example, if a coil can have current flowing in either direction, that can define either one state or two, depending on whether the direction of current flow is considered significant.
Consider, for example, the SAMPEX attitude control system (ACS) functional model. The
ACS state is determined by the states of its two main subsystems, the sensor and actuator
subsystems. These, in turn, are determined by the states of the individual sensors and
actuators, which are determined by the values of appropriate telemetry items. The Altair
Design Reference Language (ADRL) defining the ACS functional model is illustrated in
the Appendix.
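The hierarchy can be pictured as a tree whose leaves are telemetry-derived states and whose interior nodes derive their state from their children. The following Python sketch is purely illustrative (it is not ADRL, and the item names and thresholds are invented):

```python
# Hypothetical hierarchical state evaluation: leaf states come from telemetry
# values, parent states are derived from child states. All names and limits
# below are invented for illustration.

def sensor_state(telemetry):
    ok = telemetry["dss_intensity"] > 0.2 and telemetry["magnetometer_valid"]
    return "NOMINAL" if ok else "DEGRADED"

def actuator_state(telemetry):
    ok = (abs(telemetry["wheel_speed_rpm"]) < 5000
          and telemetry["torquer_current_a"] < 1.5)
    return "NOMINAL" if ok else "DEGRADED"

def acs_state(telemetry):
    # The ACS state is a function of its two child subsystem states.
    children = (sensor_state(telemetry), actuator_state(telemetry))
    return "NOMINAL" if all(s == "NOMINAL" for s in children) else "ANOMALOUS"

frame = {"dss_intensity": 0.8, "magnetometer_valid": True,
         "wheel_speed_rpm": 1200, "torquer_current_a": 0.3}
print(acs_state(frame))  # NOMINAL
```

Only the top-level state need be watched; the children are consulted only when it becomes undesirable.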
While the functional state provides information on the health of a spacecraft subsystem, it
says nothing about the actual performance of that subsystem. We can know that wheel
speeds, torquer currents, battery voltages, etc., are all within nominal limits and yet have
little information about the actual operation of the spacecraft. This knowledge gap is filled
by the use of dynamic state modeling, whereby parameters relevant to the flight dynamics
of the vehicle describe such items as position, velocity, attitude, attitude motion, and
control modes. For IMACCS, the ACS dynamic model describes the system in terms of
lighting, magnetic field, and control mode.
STATE TRANSITION
The use of state modeling for data monitoring facilitates the use of another powerful
process, State Transition Modeling, to automate spacecraft control. This technique models
commands as transition vectors between two vehicle states. Transitions can be initiated manually, on a time basis (typically for routine pass activities), or on an event basis (reacting to the detection of a specified state, expected or not, by initiating a transition to a different state).
Transitions are defined in an object-oriented database and typically comprise verification
of initial (entry) state, constraint checking, the transition vector, and verification of
success. A transition failure vector, i.e., action to be taken should the target state not be
achieved, may also be defined, as may a transition success vector and an entry failure
vector. The latter initiates a course of action if the system is not in the proper entry state
for the desired transition. The transition vector itself may be a string of commands or a
procedure written in the ALTAIR Procedural Automation Language for Spacecraft
(PALS).
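A transition definition of the kind described, entry-state verification, constraint checking, a transition vector, and success/failure vectors, might be sketched as follows (hypothetical Python; this is not the AMCS object schema or PALS, and all state and command names are invented):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Transition:
    """Hypothetical sketch of a state-transition record (not the AMCS schema)."""
    entry_state: str
    target_state: str
    constraints: Callable[[], bool]
    vector: List[str]                                  # commands to send
    on_success: List[str] = field(default_factory=list)
    on_failure: List[str] = field(default_factory=list)
    on_entry_failure: List[str] = field(default_factory=list)

    def execute(self, current_state, send, observe_state):
        if current_state != self.entry_state:          # verify entry state
            for cmd in self.on_entry_failure:
                send(cmd)
            return False
        if not self.constraints():                     # constraint check
            return False
        for cmd in self.vector:                        # the transition vector
            send(cmd)
        if observe_state() == self.target_state:       # verify success
            for cmd in self.on_success:
                send(cmd)
            return True
        for cmd in self.on_failure:
            send(cmd)
        return False

# Example: load an onboard table only from the TABLE_INACTIVE state.
sent = []
t = Transition("TABLE_INACTIVE", "LOADING", constraints=lambda: True,
               vector=["LOAD_1", "LOAD_2", "LOAD_3"],
               on_failure=["LOG_FAILURE"])
ok = t.execute("TABLE_INACTIVE", sent.append, lambda: "LOADING")
print(ok, sent)  # True ['LOAD_1', 'LOAD_2', 'LOAD_3']
```

A failure vector here is just another command list, mirroring the console-message failure vectors used in IMACCS.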
ADVANTAGES
State modeling uses the computer to do what it does best, liberating humans to do what
they do best. Spacecraft data monitoring has typically been effected by one or more people
observing several displays. Before launch, engineers must decide upon a very small subset
to be monitored from among the thousands of individual telemetry items available. This is
modified as operational experience provides more insight to the vehicle’s inflight
performance, as hardware degrades, or as the mission’s priorities are changed. Moreover,
the importance of any one item’s value may be dependent upon several other parameters.
The operations engineer must either know what combinations of readings are meaningful
or change limits based upon known or anticipated operational modes. In effect the
engineer carries a state model in her head (or on paper), and evaluates the state of the
spacecraft according to this model. Since this is done by visual inspection of displays, it
can perhaps be done once or twice per minute. This is a singularly inefficient use of the
human brain. It is also risky, since potentially important short-term events might be
missed. (Conversely, however, it should be noted that people are better able to recognize
an unimportant transient event for what it is, probably the major weakness of the state
recognition process. This can be overcome by combining the state recognition process
with a rule-based expert system. See below.)
The use of state modeling eliminates these problems. The state recognition process
monitors every telemetry item and determines the vehicle state for every telemetry frame.
Thus, nothing is missed, the danger of human error is minimized, and the engineer’s time is
freed for more useful activities.
Together with the state transition process (see above), the state recognition process also
provides significant opportunity for automation. Moreover, the similarity of the process to
the normal human mode of incorporating experience facilitates the evolution of the
monitoring and control system. Operational experience always results in the modification
of monitoring and control activities. In the context of state modeling, states of interest not
described pre-launch are identified and included in the model. In addition, operations
engineers continually identify ways to improve procedures and automate processes. State
modeling and transition are intuitive processes that facilitate rapid changes, requiring
training in neither programming nor expert systems. These concepts are revisited and
illustrated with examples below.
CONCERNS
The main concern in using finite state modeling for monitoring and control arises from its
single-frame nature. A single erroneous telemetry value can cause the system momentarily
to indicate an undesirable state, and, for example, one would normally not want to place
the spacecraft in safehold based on a single reading. This problem is addressed in part by
data quality checks, which help increase confidence in actions taken in response to
anomalous data. In addition, the anomaly can be required to persist through several
telemetry frames before action is taken. The state transition process operates in
conjunction with a rule-based expert system, or “inference engine,” which executes the
actual transitions. The inference engine provides the capability to require persistence of an
undesirable state before action is taken. It also provides more sophisticated trending
capabilities, giving the engineer a great deal of flexibility in defining responses. This area
is currently being addressed in depth by the IMACCS team.
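The persistence requirement can be illustrated with a small filter that confirms a state only after it has held for N consecutive telemetry frames, so a single bad frame cannot trigger a transition (hypothetical sketch; the frame count and state names are invented):

```python
class PersistenceFilter:
    """Report a state only after it persists for n_frames consecutive frames."""
    def __init__(self, n_frames):
        self.n_frames = n_frames
        self.candidate = None
        self.count = 0
        self.confirmed = None

    def update(self, state):
        # Count consecutive frames in the same candidate state.
        if state == self.candidate:
            self.count += 1
        else:
            self.candidate, self.count = state, 1
        if self.count >= self.n_frames:
            self.confirmed = state
        return self.confirmed

f = PersistenceFilter(n_frames=3)
# A single anomalous frame does not change the confirmed state:
frames = ["NOMINAL", "NOMINAL", "NOMINAL", "X_DEAD", "NOMINAL", "NOMINAL"]
confirmed = [f.update(s) for s in frames]
print(confirmed)
```

The transient "X_DEAD" reading never reaches the confirmed state, while a genuine failure persisting for three frames would.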
AUTOMATION
Operations automation can be divided into five classes: data monitoring, routine pass activities, anticipated contingency response, emergency response, and product generation. The IMACCS prototype demonstrates the feasibility of automation in all of these areas.
1. DATA MONITORING
The first application of state modeling in IMACCS was the automation of routine data
monitoring, as described in Section 3. The IMACCS team, consulting with subsystem
engineers and Flight Operations Team (FOT) members, modeled the SAMPEX vehicle
according to its major subsystems -- Power, Thermal, Communications and Data Handling
(C&DH), Experiment, ACS, and Mechanical. The state of each of these, in turn, depends
on the states of its own subsystems, and so on. Note that state modeling permits all manner
of system decomposition. For example, we have modeled SAMPEX as the sum of Attitude
Control, Power, Instruments, Flight Software, etc. An alternate decomposition might be
Science Operations, Bus Operations, Communications, etc. The key is that the
decomposition needs to make sense to the user (operator, scientist, engineer). Several
alternatives can be defined simultaneously. When the model is complete, the operations
engineer no longer has to monitor, for example, the speed of the reaction wheel. Instead,
she simply observes the state of the actuator subsystem. Even this, however, is
unnecessary, as long as the state of the ACS is known and desirable. This is leading, of
course, to the observation that only one parameter, the state of SAMPEX, must be
watched. It is only when this is unknown or undesirable that any other states must be
investigated. In fact, as shall be seen, there is ultimately no need for a human to monitor
even this one item.
Experience has shown that, after launch and early mission checkout, 95% of operations are
completely routine. For SAMPEX, besides data monitoring, a typical pass involves an
initial communications check, accomplished by sending a “no-op” command, dumping an
onboard table, and loading an updated table. All these can be scheduled as time-based
state transitions. To load the table, for example, the state transition process first verifies
the correct entry state, “TABLE_INACTIVE,” and then executes the transition to the
target state, “LOADING,” identified by values of the onboard command counter, a
“TABLE_ACTIVE” value in the telemetry, and several other telemetered status indicators.
The actual transition vector is a PALS procedure consisting of three spacecraft commands
needed to load the table. Upon verification of success, the success vector is executed. This
is a PALS “MonitorStateEntry” command instructing the state transition process to
perform the transition “TABLE_COMMIT” (write from the buffer to the table’s actual
memory location) upon verifying attainment of the state “TABLE_LOADED.” The table
dump transition is similar.
Failure vectors can be executed if the subsystem is not in the expected entry state at the
scheduled time, or if the transition fails to achieve the target state. For IMACCS, failure
vectors are simple console messages and log recordings. They could, however, execute
some corrective action or notify appropriate personnel of the failure.
The state modeling and transition processes are well suited to automate such a response. It
is a simple matter to configure the state transition process to monitor the HILT state and
transition to the state “NOMINAL” upon detecting the state “FLOWREG_OPEN.” The
transition vector is a single spacecraft command. While this example is quite simple, the
principle easily applies to any situation where there exists a documented response to a known anomalous state.
4. EMERGENCY RESPONSE
With proper application of state modeling techniques as described above, the ground
support software can support the mission without human intervention, through routine pass
activities and known contingencies. This is not enough, however, to allow complete “lights
out” operation. The system must also be able to recognize a condition for which it is not
prepared, and take appropriate action, which will always include calling for human
intervention. In IMACCS, this is demonstrated by simulating an ACS anomaly --
specifically a Digital Sun Sensor (DSS) failure. The state transition process, monitoring the
DSS for the state “X_DEAD,” executes the transition
“REACT_DIGITAL_SUN_SENSOR_FAILURE.” The transition vector is a PALS
procedure that in turn executes several other scripts. The first is an EXPECT* script that
dials a pager and sends a coded message. Next, a UNIX script invokes the AMCS
archiving and playback functions to retrieve the latest 24 hours of data relevant to the
failed system and enter these into a trending tool (BBN Probe) in order to have plots
waiting when the engineer arrives. The capability exists, of course, for far more
sophisticated responses.
* EXPECT is a public domain program, implemented in the Tool Command Language (TCL). See, for example, Welch [1995].
5. PRODUCT GENERATION
Corporation, as a tool to accomplish this. Using a simple PERL script as an executive, taking advantage of the UNIX “cron” function (which will initiate a process at a preset time), and running STK under XRunner, the team has fully automated routine orbit product generation.
CONCLUSION
ACKNOWLEDGMENTS
The work described in this paper was accomplished at the Goddard Space Flight Center
(GSFC) under contract NAS 5-31500. The IMACCS effort was carried out primarily
under the inspired leadership of Mike Bracken, formerly of NASA/Goddard Space Flight
Center (GSFC) and Gary Meyers of GSFC, leader of the RENAISSANCE effort. Sue
Hoge of GSFC directed much of the automation work. Bob Connerton of GSFC also
provided direction and did much to publicize the effort.
APPENDIX
Figure A-1 shows portions of the ACS functional state model. Some states and items are
incomplete due to space considerations. They are analogous to other states and items.
Complete models can be obtained from the authors.
REFERENCES
Altair Corporation, “Finite State Modeling and Procedure Automation Language for
Spacecraft,” Altair-CML-01-05B-DA-SUPP, Bowie, Md. 1994.
Lin, David, Sue Hoge, Jim Klein, and Rex Pendley, “Use of XRunner for Automation,”
Proceedings of the International Telemetering Conference, San Diego, California, October,
1996.
Welch, B., Practical Programming in TCL and Tk, Prentice Hall, New Jersey, 1995.
AN OPERATIONAL CONCEPT FOR A DEMAND ASSIGNMENT
MULTIPLE ACCESS SYSTEM FOR THE SPACE NETWORK
Stephen Horan
Center for Space Telemetering and Telecommunications Systems
The Klipsch School of Electrical and Computer Engineering
New Mexico State University
Las Cruces, NM 88003-0001
[email protected]
ABSTRACT
An operational concept for how a Demand Assignment Multiple Access (DAMA) system
could be configured for the NASA Space network is examined. Unique aspects of this
concept definition are the use of the Multiple Access system within the Space Network to
define an order wire channel that continuously scans the Low Earth Orbit space for
potential users and the use of advanced digital signal processing technology to look for the
Doppler-shifted carrier signal from the requesting satellite. After the reception of the
signal, validation and processing of the request is completed. This paper outlines the
concept and the ways in which the system could work.
KEY WORDS
INTRODUCTION
As part of an overall study to increase access to NASA’s Space Network (SN), we have
been considering options for a Demand Assignment Multiple Access (DAMA) system as
an additional capability to the existing SN usage modes. This DAMA system would be
used to request communications services on the Tracking and Data Relay Satellites
(TDRS) within the SN. Currently, nominal user services are pre-scheduled to allocate
service type and required equipment to a user of the SN. The availability of a resource for
a given user is then communicated back to the user prior to the requested service time. The
current system does not easily allow real-time changes to add new users to the service
schedule. This is particularly important as spacecraft become more autonomous and will
have the capability, with on-board error detection logic, to request interactions with their
controlling ground station. The goals of this project are to improve the current operational
modes as follows:
1. Use of a DAMA processor will convert those parts of the scheduling activity to a
computer-controlled algorithm requiring less human intervention,
2. The development of a DAMA capability will make the process easier for small users to
work with and give them confidence that schedule availability will be provided, and
3. The DAMA process will, by definition, provide scheduling flexibility beyond the current
schedule-driven system.
The remainder of this document presents a baseline concept for how this DAMA structure
might be accomplished. When realized, the DAMA concept would allow scheduling of
communications services on the TDRS spacecraft utilizing both the Single Access (SA)
antennas at Ku-Band and S-Band and the S-Band Multiple Access (SMA) antenna system.
ASSUMED ENVIRONMENT
The baseline concept for the DAMA system is to restrict nominal usage to small projects
without continuous or nearly-continuous coverage and not to have every user of the Space
Network utilizing the DAMA system. For example, large projects with known schedules,
e.g., Shuttle or HST, would continue with normal project scheduling because they know
well in advance what their usage requirements are and have well-defined project
schedules. Rather, we would expect the majority of DAMA users to come from the small satellite community, or to be non-traditional users with relatively low-rate command and data requirements who are not in a permanent or full-time operational mode for their project.
This does not mean that large projects could not utilize the DAMA scheme for emergency
services. In effect, the DAMA users would primarily “fill the gaps” between contacts
scheduled by projects. In the DAMA mode, we expect to treat the user spacecraft and the
user Payload Operations Control Center (POCC) as equal players in requesting
services. This will allow for both normal operations and contingency or emergency
operations. In this mode, the DAMA request can come to the scheduling office via either a
space channel or from a land-based channel.
We also anticipate that the DAMA users would like to see an “Internet-type” access to
data or to send commands. This would mean that the scheduling requests from either the
user POCC or the user spacecraft could look like electronic mail to the service
scheduler. The service requests would be entered using a standard format and service
request acknowledgment or request negative acknowledgment would return to the
originator by the same route. If the user spacecraft is the service request originator, then
the user POCC would be brought into the loop for verifying and approving the service
request. It is anticipated that the user would receive spacecraft data from the SN using
either direct routing or an anonymous ftp-type of service. The DAMA system is envisioned
to utilize the TDRS SMA communications channel to establish services and the SA or
SMA antenna systems to actually support forward and return data flow. The DAMA
system would require the establishment of a new type of data service but would not change
the spacecraft antenna configurations in any manner. The DAMA system being
investigated here would require minimal modifications to the ground station at the White
Sands Complex (WSC).
BASELINE CONCEPT
The proposed DAMA system will utilize both the S-band Multiple Access (SMA)
capabilities and the Single Access (SA) capabilities of the Space Network. The SMA
services are provided by an array of helical antennas mounted on the body of the TDRS
spacecraft and operated as an electronically-steered phased array. The SA services are
provided through one of the two gimballed dish antennas on the TDRS. The DAMA
system will utilize the SMA system to transmit requests to a service scheduler for forward
and or return communications services on either the SMA or SA antenna systems. In this
concept, we allow for the user spacecraft as well as the user POCC to be the initiator of a
service request. The DAMA service request would be initiated by the sending of a
standardized service request packet from either the user spacecraft or the spacecraft
control center. A conceptual sketch of such a packet is shown in Figure 1. The packet
could be encapsulated inside of another Protocol Data Unit (PDU), for example, a TCP/IP
packet, for channel transport. It could also be issued by the user spacecraft over a radio
link or encapsulated as part of a Consultative Committee for Space Data Systems
(CCSDS) virtual channel data stream where the virtual channel identifier is set to signal a
DAMA request for service. This is illustrated in Figure 2.
Figure 1 - DAMA service request packet: acquisition code, synch code, header, requested service/priority code, service-type request, and trailer (7 to 65542 octets)
Figure 2 - Service request carried in a CCSDS telemetry packet (packet header plus data segment)
It is expected that the data and trailer fields in such a packet could be kept to well under
1000 bits. The length of the synchronization code will need to be determined based on the
receiver acquisition characteristics.
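A request packet along the lines of Figure 1 might be packed as follows (hypothetical Python using the standard struct module; all field widths and code values are invented, since the paper leaves them open):

```python
import struct

# Hypothetical field layout (all widths invented for illustration):
#   2-octet acquisition code, 4-octet synch code, 1-octet header,
#   1-octet requested service/priority, 1-octet service-type request,
#   2-octet trailer (simple checksum).
def build_request(acq, synch, user_id, priority, service_type):
    header = user_id & 0xFF
    body = struct.pack(">H I B B B", acq, synch, header, priority, service_type)
    checksum = sum(body) & 0xFFFF          # trailer: sum of octets, mod 2**16
    return body + struct.pack(">H", checksum)

pkt = build_request(acq=0xAA55, synch=0x1ACFFC1D, user_id=7,
                    priority=2, service_type=0x01)
print(len(pkt), "octets")  # 11 octets
```

A real format would be standardized across all DAMA users; this sketch only shows how compactly the request can be kept well under 1000 bits.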
The on-orbit DAMA request would be received via the SMA link, which we will term the
“orderwire channel”. The current configuration for the return MA service is to use it in
a spot beacon mode as shown in the left half of Figure 3. The spot beacon is formed
electronically using the 30 array elements on the TDRS spacecraft. The user ephemeris is
used by the ground station processing equipment to predict the position of the spot beacon
and to track the movement of the user spacecraft. In the return service signal processing
electronics at the WSC ground station, weighting factors are added to the return signal
from each SMA antenna element to provide the correct phase addition to form the spot
beacon. One set of weighting factors and associated signal processing equipment is
required for each user being serviced by a TDRS within the SN. For the DAMA system,
we propose configuring the SMA phased array to provide a global beacon from each
TDRS in the SN as shown in the right half of Figure 3. This would be accomplished by
utilizing the output from just one element of the phased array before it is processed through
the beam-forming hardware at the WSC. This implies that no additional beam forming
hardware would need to be dedicated to the DAMA system and taken from supporting
normal SN users. Instead, a special receiver will be used to process the DAMA signals as
they are stripped from the normal SMA processing system. Extensive simulation [1] of the
resulting antenna patterns generated from a single array element shows that
1. A single element can cover the entirety of the LEO orbital range on the hemisphere
facing the TDRS,
2. Using multiple array elements will give a smaller coverage pattern than that desired, and
3. Using all 30 array elements cannot produce the desired coverage.
Therefore, we believe that the single element approach will be capable of providing the
necessary antenna pattern for the establishment of the orderwire channel. One element
from each TDRS (East and West) will then cover the whole earth for LEO spacecraft
except for the SN zone of exclusion over the Indian Ocean.
Current MA Return Coverage Proposed DAMA MA Return Coverage
Figure 3 - MA Spot Beacons for MA Services (left) and DAMA orderwire channel
(right).
The MA service channel operating as an orderwire would use an Aloha protocol, with all
users sharing a common PN code for the orderwire and a different PN code for data
transfers. This latter code would follow the normal code assignment as in the current
scheduled system. The orderwire PN code would require the reservation of a standard user
PN code in the system for all DAMA users to utilize. The reason that an Aloha protocol
and a single PN code can be used on the orderwire channel is that the users of the
orderwire will be accessing it only during the short duration of the reservation requests
thereby making the probability of reservation packets colliding low. The PN code will
make the reservation packets orthogonal to the normal SMA data traffic and the
orthogonality properties of the code will make each reservation packet orthogonal to all
other reservation packets except for a short duration of a few chip times each code period.
It is assumed that these short durations of vulnerability to collisions will not result in a
meaningful number of request packet collisions due to the low probability of multiple
request packets arriving at exactly the same moment. The SMA forward link would be used to answer service requests over the orderwire using time-division multiplexing, as is done in the current scheduled system. The user of the DAMA orderwire channel
would need to have a time-out programmed so that if a service scheduling message either
confirming or denying the request is not received within a specified time, for example, five
minutes, then a new service request packet would be issued.
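The low collision probability claimed for the orderwire can be made concrete with the classical pure-Aloha model: for Poisson request arrivals at rate λ and a channel occupancy of T seconds per request, a request escapes collision with probability e^(-2λT). A quick evaluation (plain Python; the data rate and request rates are illustrative assumptions, not figures from the paper):

```python
import math

packet_bits = 1000          # "well under 1000 bits" per the text, taken as an upper bound
rate_bps = 10_000           # assumed orderwire data rate
T = packet_bits / rate_bps  # 0.1 s of channel occupancy per request

for requests_per_minute in (1, 10, 60):
    lam = requests_per_minute / 60.0            # Poisson arrival rate, per second
    p_collision = 1 - math.exp(-2 * lam * T)    # pure-Aloha vulnerability window 2T
    print(f"{requests_per_minute:3d} req/min -> P(collision) = {p_collision:.4f}")
```

Even at a full request per second the collision probability stays under 20%, and at realistic request rates it is a fraction of a percent, supporting the Aloha choice.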
The DAMA request packet from any source would flow into a DAMA service computer as
shown in Figure 4. This computer would contain a database for each legitimate user
satellite with the following types of information:
The DAMA processor will check for requested-service availability based on the pre-
scheduled services, the service request type, and the user’s priority in the system. If the
service request can be honored, as requested, the WSC configuration is established via the
normal scheduling mechanism requesting the services. A copy of the service request
information is made available to the spacecraft POCC for concurrence and any necessary
SN management entities.
The POCC is expected to be remote from the WSC and will need to communicate with the
WSC via a commercial-grade communications network. This communication network
would be used for transmitting service requests to the DAMA processor at the WSC. It is
expected that this configuration will require the installation of a firewall to provide
insulation between the ground communication network and the WSC. The functions of this
firewall are expected to be as follows:
Figure 4 - DAMA request flow at the White Sands Complex (users, SMA link, DAMA receiver, MA processor)
The largest technology development issue associated with this concept is the tracking of
the user carrier frequency in the WSC signal processing equipment when the SMA
orderwire channel is used. The current system utilizes the spacecraft ephemeris to predict
the Doppler offset to the carrier frequency induced by the spacecraft. The offset is known
to a small error margin and can be tracked as long as the user spacecraft stays within the
TDRS spot beacon. The DAMA user will have an unknown Doppler offset at the start of
the service. We are presently investigating designs for the receiver structure ([3] and [4])
that will scan the frequency domain for DAMA user signals and then determine the
Doppler offset in real-time. With the real-time offset determined, the carrier and PN code
can be tracked and the request packet demodulated and processed.
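The width of the frequency uncertainty the receiver must scan can be bounded from orbital mechanics: the worst-case Doppler offset is f_d = f_c · v/c, with v approaching the LEO orbital velocity. A rough bound in plain Python (a circular 500 km orbit and the 2287.5 MHz S-band MA return carrier are assumed for illustration):

```python
import math

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3    # mean Earth radius, m
C = 2.998e8          # speed of light, m/s
F_SMA = 2.2875e9     # S-band MA return-link carrier, Hz

altitude = 500e3                          # assumed LEO altitude
v = math.sqrt(MU / (R_EARTH + altitude))  # circular orbital speed, ~7.6 km/s
f_doppler = F_SMA * v / C                 # worst-case line-of-sight carrier offset

print(f"orbital speed = {v/1e3:.2f} km/s, max Doppler offset = +/-{f_doppler/1e3:.1f} kHz")
```

A search window on the order of ±60 kHz is therefore required when the user ephemeris is unknown, which is what drives the advanced signal processing requirement.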
Conclusion
We have presented a concept for the development of a DAMA system for use with the
NASA Space Network. Innovative parts of this concept include using the S-Band Multiple
Access system on the Tracking and Data Relay Satellite as an orderwire channel for
service requests. To minimize the impact to the ground station beam forming hardware and
to provide hemispherical coverage, only one element of the phased array is used to acquire
the DAMA request. Advanced signal processing hardware in the ground station will be
utilized to determine the user Doppler offset to the carrier frequency in real-time.
Acknowledgment
This work was supported through NASA Grant NAG 5-1491 to New Mexico State
University.
References
ABSTRACT
This paper discusses the design and development of the EUVE Virtual Environment (EVE)
system. The EVE system is being developed as an interactive virtual reality (VR) viewing
tool for NASA’s Extreme Ultraviolet Explorer (EUVE) satellite. EVE will serve as a
predictive tool for forecasting spacecraft constraint violations and will provide a capability
for spacecraft problem analysis and resolution in realtime through visualization of the
problem components in the spacecraft.
EVE will animate, in three-dimensional realtime, the spacecraft dynamics and thermal
characteristics of the EUVE spacecraft. EVE will also display the field of view for the
science instrument detectors, star trackers, sun sensors, and both the omni and high-gain
antennas for NASA’s Tracking and Data Relay Satellite System (TDRSS) and for possible
ground station contact. EVE will display other vital spacecraft information to support the
routine operations of the EUVE spacecraft.
The EVE system will provide three quick-look visualization functions: (1) modeling in-orbit data for realtime spacecraft problem analysis and resolution, (2) playing back data for post-pass data analysis and training exercises, and (3) simulating data in the science planning process for optimum attitude determination and for predicting spacecraft and thermal constraint violations.
We present our preliminary design for a telemetry server, providing both realtime and
post-pass data, that uses standard Unix utilities. We also present possibilities for future
integration of the EVE system with other software to automate the science planning and
command generation functions of the satellite operations.
1. Also Director, Laboratoire d'Astronomie Spatiale, Traverse du Siphon, BP 8, 13376
Marseille Cedex 12, France.
Key Words: telemetry, virtual reality application, realtime, playback, planning, anomaly
analysis
OVERVIEW
The Center for Extreme Ultraviolet Astrophysics (CEA) at the University of California at
Berkeley operates the Extreme Ultraviolet Explorer (EUVE) Science Operations Center.
Because of budgetary constraints, CEA is drastically reducing the personnel required for
payload monitoring, analysis, and science planning and, thus, is seeking to implement
innovative software to improve the output of the reduced level of operations personnel.
EVE applies automation technologies to increase productivity in monitoring functions and
augments activities that require human intervention with tools that help operators and
planners visualize scenarios and create effective solutions. The goal of the project is to
create an interactive virtual environment tool for the EUVE spacecraft that will improve
satellite environment visualization and improve the operations team’s problem-solving and
planning abilities.
MOTIVATION
At the present time, EUVE does not have tools that assist an operator in visualizing the
state of the spacecraft. Operators and science planners often ask questions such as: What
is the spacecraft attitude relative to the sun-line? Where were the telescopes pointing when
they recorded this anomalous background? Why did the star tracker lose its guide-star at
this time? If this attitude is blocked by a moon crossing, how can I best avoid it and still do
my observation? How long is spacecraft daytime and what is the sun angle to this
instrument that is heating up? The data to answer all of these questions are produced
regularly in satellite operations, yet they remain in a cumbersome form not readily
available to the typical console operator. EVE, when fully developed, would display the
animation of EUVE’s dynamics, its thermal states, and the fields of view of the payload
detectors, spacecraft star trackers, antennas, and sun sensors.
The current science planning process consists of finding the optimal spacecraft attitude,
including boresight coordinates and roll, for the observation of a target. The inputs
comprise a time interval and the target attitude. A science plan is created using a
combination of visual checks and science planning software. The resulting plans are
checked twice for payload constraint violations. If a constraint violation is identified, a
different plan must be drafted. This trial and error process can be tedious and
unproductive. EVE can save time during the trial and error process by allowing the science
planner to interactively change the spacecraft roll in the virtual environment during the
planning process, thus resolving some of the problems otherwise caught at a later time by
the final checking software.
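The trial-and-error loop described above can be sketched as a search over roll angles; the constraint-checking function and the 5-degree step are hypothetical stand-ins, since EVE lets the planner perform this search interactively in the virtual environment.

```python
# Sketch of the roll-angle search described above. The constraint check
# is a hypothetical placeholder; in EVE the planner performs this search
# interactively rather than programmatically.

def first_valid_roll(violates_constraints, step_deg=5):
    """Return the first roll angle (degrees) passing the checks, or None."""
    for roll in range(0, 360, step_deg):
        if not violates_constraints(roll):
            return roll
    return None  # no roll angle satisfies the constraints

# Example: suppose every roll below 210 degrees violates a constraint.
print(first_valid_roll(lambda roll: roll < 210))  # 210
```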
EVE can also check the selected roll angle for thermal violations at attitudes within normal
spacecraft constraint profiles (something the traditional science planning process cannot do).
ORGANIZATIONAL DEVELOPMENT
The EVE project is being developed jointly by the Data Systems Technology Division
(Code 520) of NASA’s Goddard Space Flight Center (GSFC), the Intelligent Mechanisms
Group (Code 269-3) of NASA’s Ames Research Center (ARC), and the Center for
Extreme Ultraviolet Astrophysics of the University of California at Berkeley. EVE is
based on the re-use of the Virtual Environment Vehicle Interface (VEVI) developed by
ARC. Each collaborating group is responsible for specific development areas.
ARC: The ARC group is providing the VEVI software, which interfaces with the EUVE
data acquisition software and commercial rendering software called WorldToolKit
(WTK) from Sense8 Corp. (see the EVE architecture in Figure 2). The VEVI software provides
a simple way to interface with the WTK renderer software.
CEA: CEA is developing a data server that will generate the necessary parameters
required to display the EUVE spacecraft profile. This data server includes the generation of
EUVE realtime and playback data. CEA will develop the graphical user interface (GUI) for
EVE and the thermal mapping of EUVE based on its historical thermal profile. CEA will
oversee the integration of the modules developed by all three groups and perform
acceptance tests. Upon successful acceptance testing, CEA will integrate EVE with the
current EUVE operations system.
DESIGN REQUIREMENTS
The EVE system was designed with the idea that the EUVE payload and spacecraft
experts and the science planners will use the same interface to visualize the spacecraft in
its environment with its changeable constraints. At the same time the tool must be flexible,
extensible, and, above all, low-cost while having a short development life-cycle. As such,
the EVE system must require the least effort for integration with the existing EUVE
operational environment.
FUNCTIONAL REQUIREMENTS
(1) Realtime: to model in-orbit data for realtime spacecraft problem analysis and resolution
(2) Playback: to play back data for post-pass data analysis and training exercises
(3) Planning: to simulate data in the science planning process for optimal attitude
determination and to predict spacecraft and thermal constraint violations
For each of these three functions, EVE will process EUVE data and represent them in a
three-dimensional view of the spacecraft dynamics and thermal characteristics of EUVE.
EVE will also display the fields of view for the science instrument detectors, star trackers,
sun-sensors, and both the omni and high-gain antennas for NASA’s Tracking and Data
Relay Satellite System (TDRSS) and for possible ground station contact. EVE will display
other vital spacecraft information to support the daily operations of EUVE.
Figure 1 shows our sample prototype of EVE’s graphical user interface (GUI). Window 1
shows a function menu for Playback, which allows users to select start and end times. In
this sample GUI, Window 2 shows the global observer’s view of EUVE in orbit with its
thermal profile. The thermal scaling map is shown by the color bar in window 3 on the
right. Window 4 shows the field of view of the high-gain antenna with Earth in its field of
view. Windows 5 and 6 show the fields of view of EUVE science instrument detectors 1
and 2, respectively. Window 7 has the EUVE source indexes identifying the targets in the
field of view of selected components on the spacecraft and the science instrument. The
numbers appearing in the background of each viewport (window) are the indexes of the
EUVE sources. Window 8 is for Unix system operations.
In the realtime scenario, EVE will process the realtime data as they are received and
display the EUVE satellite in orbit with its thermal profile and the viewports (fields of view)
of EUVE components selected by the user. In the playback scenario, the user inputs the
desired begin and end time, and the observer viewpoint. EVE then re-processes EUVE
post-pass data and displays EUVE’s profile.
The full implementation of EVE will allow the user to enter the observer viewpoint
through the GUI and will include the positions of stars, Earth, Moon, and Sun with other
objects of interest in the global view window. The power of virtual reality (VR) allows the
viewpoint or the object to be manipulated, displaced, or rotated easily with the use of a
three-dimensional motion sensor such as the Spaceball from Spaceball Technologies. The model
will be interactive, allowing the user to enable and disable any feature at any time.
“Viewports” will be selectable showing what the star-trackers, sun-sensors, telescopes, or
antennas can “see.” Positions of objects in the universe will be generated and updated by
the corresponding data acquisition module.
The user can move the satellite to view and understand the difficulties that certain
positions and attitudes might cause. A simulation run can help find difficulties that might
arise during a test or suggest understandable reasons for the behavior under investigation.
The viewing tool's simulations improve operational planning by avoiding problematic
attitudes or by allowing greater flexibility at them.
EVE will be used in the science planning regime as a tool for identifying all EUV sources
in the field of view of the science instrument detectors and then optimizing the EUVE
attitude. This tool would be especially useful for a science planner preparing for an
observation during a roll-constrained spacecraft attitude or with competing targets in the
spectrometer.
SYSTEM ARCHITECTURE
The rtdaq and pbdaq modules communicate with the telemetry server module running on a
separate computer and connected via the inetd Unix utility to receive data for processing.
See the EVE architecture in Figure 2.
rtdaq
This module processes realtime telemetry for realtime event analysis. When the telemetry
server running on a Sun computer receives realtime telemetry, it processes data into an
expected format and sends it to a designated port on the SGI workstation where EVE is
running. The rtdaq module is then invoked to process the telemetry and forward the output
stream to the EVEengine. The EVEengine module reads in the data and assembles the
appropriate rendering commands and feeds them to the WTKrenderer module for display.
pbdaq
This module takes the beginning and ending times provided by the user via the EVE GUI
and requests the data from the telemetry server running on a Sun computer for post-pass
data analysis. The output data stream gets fed to the EVEengine and then to the
WTKrenderer module for graphical display.
pldaq
This module is the orbit propagation software that simulates where the planets and
satellites will be in the time frame given by the user via the GUI. From the orbital
parameters and the user’s requested spacecraft attitude, this module will generate a data
stream in the same format as that of rtdaq and pbdaq. The output data stream is fed to the
EVEengine and then to the renderer for display. The following list summarizes the user’s
input parameters for the planning function:
WTKrenderer
This module is responsible for loading the object models (EUVE, TDRSS, Earth, Moon,
etc.) and the star catalog. It reads in the rendering commands from the EVEengine module,
ties them to the corresponding objects, and animates the EUVE orbital profile.
The existing EUVE realtime telemetry server system consists of a variety of hardware and
software packages that work together, and thus has a high level of system dependency. In
an effort to make EVE a flexible system to integrate, we created a simple wrapper around
the existing telemetry server system to take requests from rtdaq and pbdaq modules and
produce the telemetry stream for the corresponding module’s functions. To facilitate a
“plug-and-play” integration of EVE to the existing operations system, we designed our
telemetry server hookup to use the standard Unix “inetd” utility. Figure 2 shows EVE data
acquisition modules running on an SGI workstation requesting data via inetd from the
telemetry server running on the Sun workstation.
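An inetd-based hookup of the kind described above could look like the following sketch; the service name, the request grammar ("RT" for realtime, "PB start end" for playback), and the inetd.conf entry are all hypothetical illustrations, not the actual EVE protocol.

```python
# Sketch of a telemetry-server wrapper run under inetd. inetd attaches the
# connected socket to the service's stdin/stdout, so the service simply
# reads a request line and writes the telemetry stream back. The request
# grammar and service name here are hypothetical.
#
# Hypothetical /etc/inetd.conf entry:
#   evetlm stream tcp nowait euve /usr/local/bin/evetlm evetlm
import sys

def handle_request(line, fetch_realtime, fetch_playback):
    """Dispatch one request line to the matching data source."""
    parts = line.split()
    if parts and parts[0] == "RT":
        return fetch_realtime()
    if len(parts) == 3 and parts[0] == "PB":
        return fetch_playback(parts[1], parts[2])
    return "ERROR unknown request\n"

def main():
    # Under inetd, stdin/stdout are the client socket; an inetd-launched
    # script would call main() at module scope.
    request = sys.stdin.readline()
    sys.stdout.write(handle_request(
        request,
        fetch_realtime=lambda: "",          # hook into the realtime stream
        fetch_playback=lambda t0, t1: ""))  # hook into archived data
```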
The telemetry server wrapper, which generates realtime and playback EUVE data, is based
on re-using the existing EUVE operations software. The data server generates the
necessary EUVE data in a stream in the following ASCII format:
The first column is the major frame (time) in which the spacecraft and the payload sensor
values appear in the telemetry; the second column is the mnemonic of the sensors and the
third is the raw count value; the fourth contains the converted engineering value; and the
fifth contains the corresponding units for the values.
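A record in the five-column stream described above can be parsed as in the following sketch; the sample record and the mnemonic are illustrative, not actual EUVE telemetry.

```python
# Parser for the five-column ASCII record described above: major-frame
# time, sensor mnemonic, raw count, converted engineering value, and
# units. The sample record is a hypothetical illustration.

def parse_record(line):
    frame, mnemonic, raw, eng, units = line.split()
    return {"frame": frame, "mnemonic": mnemonic,
            "raw": int(raw), "eng": float(eng), "units": units}

rec = parse_record("10452 THERM01 1843 23.7 degC")
print(rec["mnemonic"], rec["eng"], rec["units"])  # THERM01 23.7 degC
```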
CONCLUSION/FUTURE PLAN
We are in the process of developing our prototype EVE system with only the playback
function, which includes the pbdaq, EVEengine and the WTKrenderer modules for
performance testing. Based on our prototype development, we have found that VEVI
provides an easy interface between the data server and the WTK rendering software.
However, the performance of the renderer process is significantly reduced if more than
two viewports are open. We have tested our prototype EVE on a more powerful SGI
Impact workstation with little difference in performance. We are analyzing the rendering
process to find ways to improve its performance.
The virtual interface can become an even more powerful tool when linked to the automated
scheduling and planning tools being incorporated into EUVE operations. The science
planner could interactively manipulate the satellite attitude in the VR environment during
the planning process for a planned target or special observation and search for the best
solution to spacecraft constraints. Once the solution is found, the VR toolkit would export
its data to an automated process that would generate a science plan, a schedule, and
table loads for adjusted communications in a scenario where independent ground stations
would be utilized. We are also looking into the possibility of VRML/Java implementation
of this EVE application, which could be useful in a remote problem analysis and resolution
scenario.
ACKNOWLEDGMENTS
We acknowledge the preparatory research done on the EVE project by Chris Smith, Pat
Ringrose, and Eoghan Casey. Their notes on the EVE functional specification were quite
useful in the development. We thank Allen Hopkins, Marty Eckert, and Paul McGown for
reviewing this paper. This work is supported by NASA contract NAS5-29298 and
NASA/Ames grant NCC2-902.
REFERENCES
Piguet, L., Fong, T., Hine, B., Hontalas, P., and Nygren, E., "VEVI: A Virtual Reality Tool
for Robotic Planetary Exploration," NASA Ames Research Center, 1995.
Wong, L., Kronberg, F., Hopkins, A., Machi, F., and Eastham, P. “Development and
Deployment of a Rule-Based Expert System for Autonomous Satellite Monitoring,” SAG
#713, Proceedings of Fifth Annual Conference on Astronomical Data Analysis Software
and Systems, Tucson, AZ, October, 1995.
WorldToolKit Version 2.1 Reference Manual, SENSE8 Corporation, Mill Valley, 1995.
OVERCOMING THE CONSTRAINTS ON MODELING TELEMETRY
IN VR SYSTEMS
Navid Sabbaghi
Center for EUV Astrophysics, 2150 Kittredge Street,
University of California, Berkeley, CA 94720-5030, USA
ABSTRACT
Virtual reality (VR) provides science operators with a time-saving tool for interpreting
telemetry data. Therefore, the Extreme Ultraviolet Explorer (EUVE) satellite operations
team decided to develop an inexpensive VR system to best address human operators’
needs. The EVE (EUVE Virtual Environment) Project developed a solution to the problem
of attaining maximum-quality representation at minimal cost via a series of weighted
trade-offs that maximize return. Quantification of realism,
methods of graphic representation, and hardware and software limitations are discussed.
1. INTRODUCTION
Satellite telemetry is represented in complex and often cryptic data streams. Therefore,
understanding and manipulating telemetry in different “what if” scenarios requires
significant effort to produce credible analysis. Since mission-critical situations require that
science planners and operators promptly understand telemetry analysis, the stream of
numbers must be mapped to another representation that allows human operators to actually
visualize what is happening with the spacecraft and its environment. The Center for
Extreme Ultraviolet Astrophysics is addressing this problem in Extreme Ultraviolet
Explorer (EUVE) spacecraft operations with the EVE (EUVE Virtual Environment)
Project, which offers an interactive virtual reality (VR) viewing tool for EUVE satellite
telemetry. EVE serves as an efficient predictive tool for science planners in forecasting
spacecraft constraints and resolving anomalies; given the complete information a visual
system offers, science planners will make better informed decisions. VR allows satellite
operators full immersion in, and understanding of, what is happening and allows them to
process the situation faster and more accurately because more factors can be accounted for
simultaneously in a visual presentation versus a textual or numerical representation. VR
gives operators a greater degree of control for planning scenarios in realistic situations.
1. This paper assumes that the underlying hardware requires fewer computations to render
Gouraud-shaded polygons than textured (mapped) polygons.
Use of VR for EUVE operations decreases the complexity of telemetry analysis and
science planning maneuvers with realistic simulations.
However, VR systems limit the realism of telemetry streams. The realtime intervals
between graphics updates, which are limited by the operating system and graphics
libraries, illustrate a VR system’s severe limits. For example, since many computer
systems do not operate in realtime, science planners using a VR system could make
decisions based on less than current satellite data, which could lead to less than optimal
decisions. Of course, many of the problems can be overcome with higher development
costs. This paper addresses the difficulties of modeling real telemetry using current VR
systems, drawing from examples in the EVE Project, and offers solutions to overcome the
hardware and software limitations while minimizing development costs. Development
costs are an issue because the purpose behind EVE is to lower operations costs for the
EUVE mission. Therefore, we adopted this policy for creating EVE too: we minimized
development costs to maximize the return. Questions regarding how “real” a VR system
should be and its level of sophistication are then posed. For example, decisions must be
made between using a glove and suit or a screen, and of what information to display using
VR. Such decisions dictate a VR project’s development costs.
VR environments for modeling telemetry should look, feel, and respond like the real
environment. The look of a VR system corresponds to the frame rate and the realism of the
graphical objects (amount of texture mapping in the virtual universe). The feel of such a
system is defined as how the user interacts with the system. A VR system responds
realistically if the telemetry the user sees is the true telemetry for that time and not
telemetry lagged by the operating system or graphic libraries. These benchmark definitions
do not include special input/output devices, as the benchmarks aim to minimize
development costs. Thus, a standard screen and mouse are assumed.
The look, feel, and response of a VR system can each be modified to make a system more
realistic, but the speed of the system usually suffers, thereby decreasing the realism. An
analysis of the qualities that compose the look, feel, and response of a VR environment
determines which combination of qualities will maximize the realism of the system.
When each of these elements is maximized, intertwined problems arise in modeling telemetry
on a VR system. The solution to the deadlock is to accept proper trade-offs and, for EVE's
purpose, to make trade-off decisions that follow the design philosophy of minimizing
development costs to maximize return.
The look of a VR system corresponds to the frame rate and the realism of the graphical
objects, in EVE's case on the monitor. In other words, the satellite should move smoothly,
and if users are supposed to see a satellite antenna with a white texture, they should see a
shape that approximates the antenna; this requires many polygons with textures applied to
each of them.
Two characteristics determine the realism of graphical objects: the number of polygons
that compose the object, and the amount of texture mapping on an object. For example, if
an object is supposed to be round, and polygons are the system’s object primitives, then
many polygons should be used to approximate the sphere. Similarly, if an object is
supposed to look like solar panels, the appropriate amount of texture mapping should be
applied to promote realism and allow adequate performance.
Four characteristics determine the frame rate and, thus, the rendering performance: the
number of windows open (corresponding to different viewports); the number of texture
maps in the VR world (images mapped onto an object's surface [1]); the number of
polygons in the VR world; and the rendering speed of the machine running the VR
environment.
Given these facts, several methods exist to improve the look of the VR system. The most
apparent, yet costly, solution is to buy faster hardware. A more cost-effective approach
minimizes the number of texture maps, polygons, and open windows in the system.
However, minimizing the number of texture maps decreases the realism of the system. The
trade-off is to turn off texturing for distant objects since texture naturally diminishes with
distance; a user is not likely to notice the texture mapping onto an object due to aliasing if
it is very far away. Therefore, computational resources will not be wasted on rendering
“distant” texture maps.
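The distance trade-off described above can be sketched as a simple per-object test; the cutoff value and the object distances are hypothetical tuning parameters, not values from the EVE software.

```python
# Sketch of the distance trade-off described above: texturing is turned
# off for objects beyond a cutoff distance, where aliasing would hide the
# texture anyway. Cutoff and distances are hypothetical.

def use_texture(distance, cutoff=1000.0):
    """True if the object is close enough for its texture to be visible."""
    return distance < cutoff

scene = [("EUVE", 10.0), ("TDRSS", 4.0e4), ("Moon", 3.8e5)]
for name, dist in scene:
    print(name, "textured" if use_texture(dist) else "untextured")
```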
Minimizing the number of polygons in the VR simulation also decreases the realism. Here
the trade-off replaces complex geometric objects, which have unneeded detail, with texture
maps; for a fixed number of polygons, p, it is computationally less costly to render a
certain texture map in place of the p polygons [3]. For example, in the EVE project instead
of modeling the details of a TDRSS satellite, TDRSS can be texture mapped onto a
triangle composed of four polygons.
Minimizing the number of open rendering windows becomes the only real problem. In this
case the application programmer must provide the user with an option that makes the
world seem real despite fewer viewports. Specifically, the program must give the user an
option to switch between views. How the design issue of switching between views is
resolved will determine the realism of this optimization.
By increasing the frame rate and the realism of objects through the proper trade-offs, the
programmer can bring maximum realism to the look of the system with minimal
development time and cost.
How the user interacts with a VR system defines its "feel." In EVE, we use a mouse as the
input device to lower the cost of development, and we allow the mouse different metaphorical
controls. The most commonly used metaphor (graphical icon) in VR systems today is
the hand with a forward/backward roll dial. Another common metaphor is a control
panel for the six degrees of movement. Which is better?
The hand metaphor functions by positioning the hand icon on an object and pressing a
button, thereby grabbing the object. Moving the hand left, right, up, and down spins the
object as if the user had applied a torque in that direction. The unnatural part of this
metaphor is how to move forward and backward; this requires a separate icon, a rolling
dial which will move the object or viewpoint forward or backward. In conclusion, the hand
determines the orientation and the rolling dial moves the object or viewpoint forward or
backward.
The control panel metaphor seems more natural because its translations work in the
coordinate system people are accustomed to from high school algebra. But in fact, the
translations are more convenient for a robot than a science planner. A science planner will
want to spin the satellite to check thermal sensors on all sides of the satellite. It would be
very inconvenient for a science planner to have to go four units forward and four units
left in order to sweep around the satellite.
The promptness of environment updates in the VR simulation dictates the perceived reality
of the simulation. Usually the data fed to a renderer are lagged because of non-realtime
operating system constraints.
The best way to get near-realtime operating system performance without losing the
advantage of having a time-sharing system is to run a soft realtime operating system in
which “critical realtime tasks get priority over other tasks.”[4] In this way programmers
will have the flexibility of using multiple processes, in which case they can give the
telemetry server the maximum priority. Using a time-sharing system minimizes
development time because it is easier to create separate smaller modules or processes that
work as opposed to creating one large module that works.
We have discussed some of the general problems that arise when maximizing each of the
benchmarks for a VR system and examined the low cost solutions offered. The critical
issue when modeling telemetry in a VR environment is how to quickly update the visual
telemetry data for a satellite in response to a telemetry-rich realtime feed or to user
movement.
Decreasing processing overhead increases the speed of the system. Our definition of
maximizing a system’s look does exactly that; the benchmark’s goal is to increase the
frame rate.
For example, in the EVE project the only complex object that needed many polygons was
the actual satellite and its thermistors. Everything else could be a pixel (star map) or a few
polygons (stars, planets, TDRSS). The reasoning behind this choice is simple: the science
planner/operator only cares about the thermistor values of EUVE and what is in the EUVE
field of view. Having these minimum objects in the VR simulation and using the other
tricks such as texture mapping TDRSS onto a polygon instead of creating it from a set of
polygons increases the frame rate and, thus, increases the speed of the system.
The choice of operating system is important, because a soft realtime system should be
chosen such that when a realtime telemetry link is made the VR world is updated in
approximate realtime.
Because the two major versions of UNIX provide soft realtime support via priority
scheduling, this demand did not burden EVE. By giving the telemetry process the highest
possible priority, we coaxed near-realtime behavior out of UNIX. The next highest priority
was assigned to the renderer, and every helper process was given a lower priority. In such
a way, EVE approximates realtime.
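The priority layering described above can be expressed in Unix "nice" values (lower value means higher scheduling priority); the specific numbers and the use of the nice(1) wrapper are illustrative assumptions, not the actual EVE configuration.

```python
# Sketch of the priority layering described above, expressed as Unix
# "nice" values (lower value = higher scheduling priority). The numbers
# and the nice(1) wrapper are illustrative.
import subprocess

NICE = {"telemetry": 0, "renderer": 5, "helper": 10}

def launch(role, argv):
    """Start a process at the nice level assigned to its role."""
    # "nice -n N" lowers the child's priority by N relative to the parent,
    # so the telemetry process keeps the highest effective priority.
    return subprocess.Popen(["nice", "-n", str(NICE[role])] + argv)
```

Raising a process above the parent's priority (a negative nice value) requires privileges, which is why the layering here only lowers the renderer and helper processes relative to telemetry.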
However, to implement exact realtime support the whole EVE package would have to be
ported to a realtime operating system such as VxWorks. The problem with strict realtime
systems, however, is that they do not support virtual memory and, therefore, the program
has limited capabilities unless the programmer takes strategic, costly programming
measures (namely, overlays).
Given the list of possible optimizations to increase realism in a VR telemetry system, the
question for EVE is how much realism a science planner/operator actually needs.
Minimizing our costs, we addressed the science planners' requirements with simple
hardware: a workstation, monitor, and mouse.
The graphics requirements for EVE are not too demanding; the most polygon-intensive
object is the satellite. Other complex objects can be modeled with texture mapping [2].
Because the telemetry data are the most important information for a science
planner/operator, the idea is to model those objects onto which you wish to map the data.
Anything else is extra material that the user can be tricked into believing exists in the
virtual universe when in fact “system look” optimizations are being used.
5.2 REALISM IN TELEMETRY
Given that telemetry is the information that a science planner/operator wants to see
mapped onto the virtual universe, how should this data be mapped?
To represent thermal shading information in the EVE project, spherical thermistors are
placed on the EUVE virtual satellite. Science planners have the option of clicking on these
spherical virtual thermistors to get a bar in another window showing the current thermistor
value and how close it is to the limits. The planners/operators also have an option of
viewing a plane of such bars in a separate window, in an effort to allow them to “monitor”
all the thermistors at the same time (Fig. 1). This function is useful when pointing EUVE in
a new location and checking to make sure none of the thermistors have values outside the
safe limit. To aid planners, clicking on one of the thermistor bars in the plane of bars
highlights the location of the thermistor on the spacecraft.
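The limit check described above reduces to flagging any thermistor whose value falls outside its safe band; the names and limits in this sketch are hypothetical illustrations, not actual EUVE sensor limits.

```python
# Sketch of the thermistor limit check described above: flag any sensor
# whose engineering value falls outside its safe band. Names and limits
# are hypothetical.

def out_of_limits(readings, limits):
    """readings: {name: value}; limits: {name: (low, high)}."""
    return [name for name, value in readings.items()
            if not (limits[name][0] <= value <= limits[name][1])]

readings = {"THERM01": 23.7, "THERM02": 41.2}
limits = {"THERM01": (0.0, 40.0), "THERM02": (0.0, 40.0)}
print(out_of_limits(readings, limits))  # ['THERM02']
```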
Regarding detector anomalies and what the detectors see, EVE allows science
planners/operators to see different viewports at the same time. They can choose between
all four scanners and the individual detectors. Initially only two viewports are opened, so
that a planner/operator is not overwhelmed with information; the option to open or close
more viewports is always available (Fig. 2).
Fig. 2. Window with EVE.
6. CONCLUSION
To perfect a VR system the programmer needs to examine the look, feel, and behavior
requirements from the user’s perspective. Once each of these features is maximized and
the question of “how real should a VR system be?” is answered, the VR system can come
closer to reality. The system can then be built, checked with the user, and modified
depending on the user’s wishes. Applying this philosophy, EVE’s design meets the
minimum requirements to make the largest impact in reducing the effort required for
science planning with the EUVE satellite.
Thanks go to T. Morgan for his comments and support of the EVE Project. This work is
supported by NASA contract NAS5-29298. The Center for EUV Astrophysics is a
division of UC Berkeley’s Space Sciences Laboratory.
REFERENCES
RELATED READING
Abedini, A., Moriarta, J., Biroscak, D., Losik, L., and Malina, R. F., "A Low-Cost,
Autonomous, Unmanned Groundstation Operations Concept and Network Design for
EUVE and Other Earth-Orbiting Satellites," International Telemetering Conference
Proceedings: Re-Engineering Telemetry, XXXI, 304-311, 1995.
Bajcsy, R. and Lieberman, L., “Active Perception,” Proceedings of the IEEE, 76(8), 1988,
996-1005.
Davis, E., "Representing and Acquiring Geographic Knowledge," Pitman and Morgan
Kaufmann, London and San Mateo, California, 1986.
Fahlman, S.E., “NETL: A System for Representing and Using Real-World Knowledge,”
MIT Press, Cambridge, MA, 1979.
Hubona, G., Perceptual and Cognitive Issues for the Design of 3D Visual Interfaces and
for Virtual Environment (VE) Technology
Lakoff, G. and Johnson, M., “Metaphors We Live By,” University of Chicago Press,
Chicago, IL., 1980.
Malina, R. F., “Autonomous Operation of the Science Payload on the Extreme Ultraviolet
Explorer,” presented at the International Astronautical Federation Symposium 1995, CEA
preprint #677, Center for EUV Astrophysics, UC Berkeley, 1995.
Penrose, R., “The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of
Physics,” Behavioral and Brain Sciences, 13(40), 1990, 643-654.
Tanenbaum, A. S., "Modern Operating Systems," Prentice Hall, Englewood Cliffs, New Jersey, 1992.
Stroustrup, B., “The Design and Evolution of C++,” Addison-Wesley, Reading, MA,
1994, p. 6.
THE NASA EUVE SATELLITE IN TRANSITION:
FROM STAFFED TO AUTONOMOUS SCIENCE
PAYLOAD OPERATIONS
ABSTRACT
The science payload for NASA’s Extreme Ultraviolet Explorer (EUVE) satellite is
controlled from the EUVE Science Operations Center (ESOC) at the Center for EUV
Astrophysics (CEA), University of California, Berkeley (UCB). The ESOC is in the
process of a transition from a single staffed shift to an autonomous, zero-shift, “lights out”
science payload operations scenario (a.k.a., 1:0). The purpose of the 1:0 transition is to
automate all of the remaining routine, daily, controller telemetry monitoring and associated
“shift” work. Building on the ESOC’s recent success moving from three-shift to one-shift
operations (completed in Feb 1995), the 1:0 transition will further reduce payload
operations costs and will be a “proof of concept” for future missions; it is also in line with
NASA’s goals of “cheaper, faster, better” operations and with its desire to out-source
missions like EUVE to academe and industry. This paper describes the 1:0 transition for
the EUVE science payload: the purpose, goals, and benefits; the relevant science payload
instrument health and safety considerations; the requirements for, and implementation of,
the multi-phased approach; a cost/benefit analysis; and the various lessons learned along
the way.
1. INTRODUCTION
In today’s environment of reduced NASA budgets, additional pressures have been put on
existing satellite missions to reduce costs. Mission operations costs, by far, dominate all
others and have rightly become the prime target. In response to these budgetary pressures
and to NASA’s goals of “cheaper, better, faster,” the EUVE project at CEA has led the
way in changing the typical NASA mission operations paradigm. In Feb 1995 CEA
successfully implemented a transition from 24-hour, three-shift human operations to a
single-shift scenario (Malina 1994; Biroscak et al. 1995) for the EUVE science payload.
This transition was accomplished using augmented intelligence software (Lewis et al.
1995; Wong et al. 1995) and has been extremely successful.
CEA is now following up on this success with a transition from one- to zero-shift payload
operations. We have been able to follow this course of action for two main reasons: (1)
EUVE has successfully completed its prime mission and fulfilled the associated science
objectives; and (2) the flight hardware was designed with built-in, on-board safety
mechanisms that have proved very reliable on orbit. For these reasons we can accept
increased levels of risk in order to reduce operations costs, which will help to further
extend the mission’s lifetime. Through the so-called three-to-one (3:1) and, now, one-to-
zero (1:0) transitions we have significantly reduced operations costs, while at the same
time incurring no significant additional risks to the payload instruments or negative impacts
on the quality and quantity of the science return.
2. WHY 1:0?
A variety of factors contributed to the decision to further build upon the success of the 3:1
transition with a move to a zero-shift science payload operations environment. The
following subsections discuss these factors in detail.
The goal of the 1:0 transition is to automate or eliminate all of the routine controller “shift
work,” defined as those routine tasks that must be done on a specific time basis. Under
this definition shift work mainly comprises the routine, daily “on-line” controller activities:
the human monitoring of science payload telemetry during real-time contacts with the
satellite (i.e., console duty); the daily console support maintenance activities; and routine
payload commanding. In addition to this routine shift work, controllers are responsible for
a variety of non-routine “off-line” activities (e.g., anomaly response and special
commanding).
The maturity of the EUVE mission played a major role in allowing CEA to reduce
operations staffing. The satellite’s built-in safety mechanisms have always worked on-orbit
as designed, providing EUVE with the ability to autonomously safeguard itself in
anomalous situations. Although on-console controllers are good at detecting unexpected
failures and at providing instantaneous response, this type of emergency human response
to a payload anomaly is limited to realtime contacts with the satellite and has never
actually been necessary during four years of on-orbit operations. Moreover, we have
discovered that automated monitoring of telemetry can detect expected anomalies better
than on-console controllers.
EUVE has four levels of payload instrument health and safety (H&S) monitoring, each
with its own associated response timescale:
1. On-board safety monitors (e.g., detector high voltage shutdown in response to high
counting rates) provide instantaneous detection and response.
2. Engineering monitor out-of-limit conditions (e.g., instrument electric current readings)
are detected on the ground during realtime contacts or hours later via review of
production data (i.e., tape recorder dump).
3. Detector anomalies (e.g., distorted detector images) are detected hours after the fact
by human review of production data images.
4. Instrument degradation (e.g., detector gain sag) is detected over periods of weeks to
months via special tests and calibrations.
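Level 2 of this scheme, ground-based engineering limit checking, can be sketched in a few lines. The monitor names and limit values below are hypothetical illustrations, not EUVE's actual engineering database:

```python
# Minimal sketch of ground-based engineering limit checking (level 2 above).
# Monitor names and limits are hypothetical, not the EUVE engineering tables.
LIMITS = {
    "detector_current_mA": (0.0, 50.0),
    "hv_supply_V": (0.0, 4500.0),
    "baseplate_temp_C": (-10.0, 35.0),
}

def check_limits(frame):
    """Return (monitor, value, lo, hi) for every out-of-limit reading."""
    violations = []
    for monitor, value in frame.items():
        lo, hi = LIMITS.get(monitor, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            violations.append((monitor, value, lo, hi))
    return violations

frame = {"detector_current_mA": 62.5, "hv_supply_V": 4100.0,
         "baseplate_temp_C": 21.0}
print(check_limits(frame))  # the current reading exceeds its upper limit
```

In practice such checks run against each realtime contact or tape recorder dump, so detection latency is set by the contact schedule, not by the check itself.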
The 1:0 transition has little effect on the above response timescales, introducing a
significant increase (of up to a few days) only for discovering detector anomalies.
However, this delay is acceptable because of the low risk of such an occurrence (none so
far in four years of flight operations) and because the impact would not compromise
instrument safety, but would only result in a small degradation of the science return.
Based on these instrument H&S considerations, we are convinced that a zero-shift
operations scenario with 24-hour automated telemetry monitoring will maintain virtually
the same level of instrument H&S with only a slight and acceptable increase in risk, mainly
in the form of slower response to detector anomalies.
Before going into the details of the 1:0 transition implementation, the enormous success of
its predecessor is worth noting. The 3:1 transition changed the operations paradigm for the
way NASA will run missions in the future by demonstrating an innovative, low-cost, and
low-risk methodology. As NASA Administrator Dan Goldin (1994) stated, “…[EUVE]
employed artificial intelligence and expert systems and, instead of having three shifts
operating the spacecraft, with people going around the clock, [they] changed the
paradigm… Less is more, and we’re getting more reliable operations of the spacecraft with
one shift instead of three.”
The 3:1 transition, which has been extremely successful, has been described in previous
publications (Malina 1994; Lewis et al. 1995; Biroscak et al.1995; Kronberg et al. 1995;
Wong et al. 1995). The implementation of an artificial intelligence (AI) telemetry
monitoring system, which uses two major components, formed the basis of the transition.
The first component, “Eworks,” is an EUVE-specific adaptation of a commercial AI
telemetry monitoring package called RTworks, from Talarian Corp. The second, “sepage,”
is a UNIX-based paging utility, invoked by Eworks, that generates pages to an on-call
Anomaly Coordinator for EUVE (an ACE). This software system has performed
exceptionally well.
3. HOW 1:0?
The 1:0 transition was implemented in a phased approach based on an established set of
requirements and ground rules. The following sections describe the implementation in
detail.
The first item of business for 1:0 was establishing the following relevant requirements: (1)
science payload instrument H&S will not be compromised; (2) all console telemetry
monitoring duty will be human independent; (3) all routine console support duties will be
automated or, where appropriate, eliminated; and (4) information exchange in the ESOC
will be centralized, simplified, and effective.
In order to meet the above requirements, a number of ground rules were established. First
and foremost, instrument H&S will remain the primary focus of science payload operations
in the ESOC. Second, CEA is willing to sacrifice some science data return in order to
reduce costs. Third, the “20-hour rule” will remain in effect; this rule states that all
detector high voltage units will be turned off in the event that CEA is unable to receive
payload H&S status information within any contiguous 20-hour period (i.e., we completely
trust the payload to be on its own for up to 20 consecutive hours). Fourth, the on-call ACE
system will remain in effect.
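As a minimal sketch, the 20-hour rule could be enforced by a watchdog like the following; this is illustrative only, and `command_hv_off` is a hypothetical stand-in for the actual high-voltage shutdown commanding:

```python
import time

TWENTY_HOURS = 20 * 3600  # the "20-hour rule" window, in seconds

class HSWatchdog:
    """Sketch of the 20-hour rule: if no payload H&S status is received
    within any contiguous 20-hour period, command the detector high
    voltage units off. command_hv_off is a hypothetical hook; the clock
    is injectable so the logic can be exercised in isolation."""

    def __init__(self, command_hv_off, now=time.time):
        self._now = now
        self._last_status = now()
        self._command_hv_off = command_hv_off

    def status_received(self):
        """Call whenever payload H&S status arrives on the ground."""
        self._last_status = self._now()

    def check(self):
        """Periodic check; returns True if the HV shutdown was commanded."""
        if self._now() - self._last_status > TWENTY_HOURS:
            self._command_hv_off()
            return True
        return False
```

A periodic cron-style invocation of `check()` would suffice, since the rule's granularity is hours, not seconds.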
After outlining the above requirements and ground rules we began the planning, design,
and implementation efforts for the 1:0 transition.
Upon implementation of this phased approach the ESOC will be operating in a full zero-
shift, “lights out” scenario, with all routine duties handled autonomously.
When all of the “required” tasks for the transition were organized and listed, it was clear
that CEA had neither the time nor the resources to accomplish them all. This lack of time
and resources forced us to reexamine our expectations, and to categorize and prioritize the
listed tasks into primary and secondary needs and wants according to the following
definitions:
• Primary Need—An issue that, if not addressed, could have negative effects on
instrument H&S.
• Secondary Need—A non-H&S issue that, if addressed, would yield significant
operations cost savings.
• Primary Want—A non-H&S issue that, if addressed, would yield minor cost savings
and/or science benefits.
• Secondary Want—A non-H&S issue of a purely streamlining nature.
The 1:0 Phase 1 transition aimed to automate all essential payload telemetry monitoring,
thereby eliminating human console duty. The development costs for this Phase were
relatively minor, and the transition was completed on 15 Nov 1995. The major
accomplishment was the elimination of “bad data” from the monitoring process; such data
are caused by a lack of quality information in the telemetry files. We had to change Eworks from
monitoring realtime data (which do not include quality information) to post-pass realtime
data (which do); this change was accomplished in cooperation with the Packet Processor
(PACOR) facility at GSFC that packages and sends EUVE data to CEA. During normal
one-shift operations the controllers had continued to monitor payload telemetry manually
for 2–3 realtime passes each day; during this time Eworks detected all the anomalies that
the controllers detected, giving us additional confidence in the Eworks implementation,
and serving as a simulation for the 1:0 Phase 1 transition.
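The value of the post-pass data's quality information can be sketched as a simple pre-filter; the frame format here is a hypothetical illustration, not the actual PACOR packet format:

```python
# Sketch of why quality information matters for automated monitoring:
# frames carrying a quality flag (e.g., a CRC check result) can be
# screened out before limit checking, so "bad data" never trigger false
# anomalies. The frame format is a hypothetical illustration.
def good_frames(frames):
    """Keep only frames whose quality flag indicates valid data."""
    return [f for f in frames if f.get("quality") == "ok"]

frames = [
    {"quality": "ok", "detector_current_mA": 12.0},
    {"quality": "crc_error", "detector_current_mA": 9999.0},  # garbled frame
    {"detector_current_mA": 11.5},  # realtime frame with no quality info
]
print(len(good_frames(frames)))  # only the first frame survives
```

Realtime frames without any quality field are dropped along with frames that fail their CRC, which is exactly why Eworks had to move to post-pass data.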
The goal of the 1:0 Phase 1.5 transition was to eliminate ESOC staffing on weekends and
holidays as a first step toward the fully operational zero-shift scenario. This step provided
an intermediate near-term goal to push the development efforts in order to complete many
of the 1:0 tasks in advance of any EUVE out-sourcing implementation activities. It also
allowed for an incremental reduction in weekend and holiday shift support, and provided a
time period to begin early validation and simulation of software and personnel activities in
preparation for the Phase 2 transition. The development costs for this transition were
moderate, and the transition occurred on 26 Jan 1996. Some of the major tasks
accomplished for this transition were: the implementation of an automated system to
monitor a minimal set of critical ground system equipment, the automation of the daily
manual review of engineering limit transitions and alert conditions, and a revision of the
internal and external communications flow process. We conducted a simulation over the
winter holiday period in which controllers “staffed” the ESOC via remote computer access
on weekends and holidays, checking in on the various systems at least twice each day to
ensure everything was operating as expected.
The goal for the 1:0 Phase 2 transition is to automate all daily console support duties.
When completed in late 1996, this transition will completely eliminate the need for all
routine daily console duty and support activities during normal operations. The
development for this transition includes such tasks as supplementing the Eworks rule base
to monitor additional engineering monitors, enhancing the existing automated ground
system monitoring utilities to incorporate additional capabilities, providing for user-
configurable control of the paging process, upgrading and reconfiguring the ground
systems for enhanced reliability, automating all routine payload commanding activities,
updating and augmenting all ESOC operations procedures, developing a training program
to ensure controllers retain operator proficiency with payload activities in a reduced
support environment, and ensuring an adequate simulation period.
4. COST/BENEFIT ANALYSIS
In Jan 1996, in order to quantify the fiscal effects of the 3:1 and 1:0 transitions, we
performed a cost/benefit analysis for both. We calculated costs (actual and projected) by
including all development costs for the commercial software (e.g., licenses and
maintenance) and for the CEA personnel (programmers, systems, engineers, scientists, and
managers) who designed, developed, and integrated the new software into the existing
environment. By comparing the post-transition monthly operations costs with those pre-
transition, we were then able to calculate a fixed monthly cost savings rate that will be
sustained throughout the remainder of the mission. We estimated a break-even date for
each transition by comparing development costs with the accumulated monthly cost
savings. We then projected the monthly cost savings from the break-even date through the
remainder of the mission (currently slated through Sep 1997) to yield total estimated
savings.
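The break-even arithmetic described above reduces to a few lines; the dollar figures below are illustrative placeholders, not the actual CEA numbers:

```python
def break_even_and_savings(dev_cost, monthly_savings, months_remaining):
    """Months until development cost is recouped, and net savings
    projected over the remaining mission lifetime."""
    break_even_months = dev_cost / monthly_savings
    net_savings = monthly_savings * months_remaining - dev_cost
    return break_even_months, net_savings

# Illustrative placeholder figures, not the actual CEA costs:
months, net = break_even_and_savings(dev_cost=60000.0,
                                     monthly_savings=20000.0,
                                     months_remaining=18)
print(months, net)  # 3.0 300000.0
```

Extending `months_remaining` (e.g., for an out-sourced mission continuing past Sep 1997) grows the net savings linearly, which is why mission extensions improve the case for automation.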
The results were very encouraging, with estimated savings of ~$581,000 for 3:1 (see
Figure 1) and ~$25,200 for 1:0 (see Figure 2), a total of over $606,000 when carried
through Sep 1997. Although some of the 1:0 savings will be lost because of the delay in
the Phase 2 transition (originally scheduled for mid-1996), this loss will very likely be
compensated for if the outsourced EUVE mission continues through Sep 1999 as proposed
by CEA.
5. LESSONS LEARNED
Even though we have not yet completed the 1:0 Phase 2 transition, we have learned
numerous lessons from what has been accomplished to date, as outlined in the following
paragraphs.
Communications — Requiring that users and developers work closely together to ensure a
smooth transition minimized internal CEA communications problems related to the 1:0
transition. In addition, consolidation of all CEA EUVE operations functions (payload,
science planning, and guest observer/investigator support) into a single
integrated/intelligent science operations group under one manager helped to smooth the
communications flow. In terms of external communications, we worked directly with
GSFC personnel to define and implement an agreement for communications during off-
shift hours. We also implemented a weekly operations telecon between CEA and GSFC
personnel to ensure that all parties were aware of on-going issues, problems, etc.
Paging — A lesson learned during the 3:1 transition and applied in 1:0 was the handling
of “bad data” (i.e., realtime data with no quality information included) by switching
Eworks monitoring to post-pass realtime data (with quality information). This change
required agreements and work on the part of both CEA and the PACOR facility at GSFC.
Future missions are advised to always include quality information (e.g., CRCs) in the data
used for monitoring. We also learned the usefulness of categorizing anomalies into
warnings (those requiring immediate attention) and alerts (those requiring attention during
the next business day) and paging on each in different ways. Finally, because some
expected conditions often generated pages (e.g., certain non-routine observation
configurations), it became clear that paging should be user configurable for easy
adaptation to varying circumstances.
Figure 1: EUVE science payload cost/benefit analysis for the 3:1 transition.
Figure 2: EUVE science payload cost/benefit analysis for the 1:0 transition.
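The warning/alert categorization and user-configurable paging might be sketched as follows; all configuration keys and severities here are hypothetical:

```python
# Sketch of user-configurable anomaly classification for paging:
# "warnings" page the on-call ACE immediately; "alerts" are queued for
# the next business day; expected conditions are suppressed entirely.
# The configuration entries are hypothetical.
PAGING_CONFIG = {
    "hv_shutdown": "warning",
    "limit_violation": "warning",
    "image_artifact": "alert",
    "nonroutine_config": "ignore",  # expected condition; suppress the page
}

def dispatch(anomaly, page_now, queue_for_morning):
    severity = PAGING_CONFIG.get(anomaly, "warning")  # unknown -> be safe
    if severity == "warning":
        page_now(anomaly)
    elif severity == "alert":
        queue_for_morning(anomaly)
    # "ignore": expected condition, no page generated

paged, queued = [], []
for a in ["hv_shutdown", "image_artifact", "nonroutine_config"]:
    dispatch(a, paged.append, queued.append)
print(paged, queued)  # ['hv_shutdown'] ['image_artifact']
```

Keeping the table in a user-editable configuration file (rather than hard-coded rules) is what allows quick adaptation to non-routine observation configurations.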
Ground computer and software systems — Being predominantly concerned about on-
board flight anomalies, we underestimated the impact of ground system and software
anomalies, which in practice make up the bulk of the anomalous situations that require
human intervention. Focusing more from the beginning on these types of anomalies would
have yielded a much more robust automated system.
Training — With reduced ESOC staffing one ends up with a shrinking pool of trained
controllers and ACEs. Not working in the ESOC on a daily basis makes one rusty when
duty calls. This is a difficult problem to solve, and we are exploring a number of ways to
deal with it: increased detail in the operations procedures, periodic simulations or re-
certifications, training “games” that pit controller vs. controller, etc. Because of the cost
inherent in maintaining a high training level, we must be careful to strike a balance
between what we need and what we can reasonably provide.
Resistance to change — As with the 3:1 transition, resistance to change was a factor. CEA
payload controllers have traditionally been trained to ensure maximal instrument H&S and
science return. This has resulted in a group of very caring and dedicated, yet conservative
(by design) personnel. Changing this mind set has sometimes proved difficult. The 1:0
transition has resulted in changing job responsibilities and in fewer highly trained
individuals. The key lesson has been to effectively discuss and explain the changes with
the affected individuals in order to gain from the outset their understanding, involvement,
and support.
Plan early — As evident from the cost savings presented above, the 3:1 and 1:0 transitions
have been very successful; one can only imagine what the savings would have been had
the EUVE mission been designed from the beginning with the idea of zero-shift operations
instead of requiring major re-engineering efforts along the way. Future missions can save
significant expenses by planning from the beginning for reduced staffing.
6. SUMMARY
By Jan 1996 CEA was operating the EUVE science payload in a semi-operational zero-
shift “lights out” scenario, one in which all payload telemetry monitoring is performed by
AI software, and the ESOC is not staffed on weekends and holidays; the transition to the
full Phase 2, operational zero-shift scenario, is scheduled for late 1996. We implemented
the 1:0 transition in a phased approach, doing the simple things first in order to gain
tangible results quickly and to build an overall robust system in an organized manner; other
papers in these proceedings (Eckert, 1996; Kronberg, 1996) discuss particular issues
related to this transition. The results have been extremely positive, with cost savings for
3:1 and 1:0 projected at $600,000 (and probably much more depending on the length of
the out-sourced EUVE mission).
ACKNOWLEDGMENTS
First and foremost the authors would like to thank the entire EUVE operations team for its
assistance and support in the 1:0 transition efforts. At CEA we would particularly like to
thank all of the ESOC controllers and aides, the operations programmers and systems
support personnel, and the duty scientists. At GSFC we thank Ron Oliversen; Kevin
Hartnett, Hun Tann, and Bill Guit and his EUVE Explorer Platform Flight Operations
Team; Darnell Tabb, and Don Davenport and his crew at PACOR; and the relevant
personnel at NASA Communications (NASCOM) and in the Flight Dynamics Facility.
Second, we’d like to thank Mel Montemerlo of NASA Code X, Dave Korsmeyer of
NASA Ames Research Center, and Peter Hughes of GSFC for their support of the EUVE
technology testbed. Third, we’d like to thank the EUVE Program Manager Dr. Guenter
Riegler for being a strong “friend of change” within NASA. Finally, this work has been
supported by NASA contract NAS5-29298 and NASA Ames grant NCC2-838.
REFERENCES
Biroscak, D., Losik, L., and Malina, R.F., “Re-Engineering EUVE Telemetry Monitoring
Operations: A Management Perspective and Lessons Learned from a Successful Real-
World Implementation,” Re-Engineering Telemetry, 388–395, 1995.
Eckert, M., et al., “EUVE Telemetry Processing and Filtering for Autonomous Satellite
Instrument Monitoring,” these proceedings.
Goldin, D., from a speech entitled “The Next Frontier,” Commonwealth Club of California,
19 Aug. 1994.
Lewis, M., et al., “Lessons Learned from the Introduction of Autonomous Monitoring to
the EUVE Science Operations Center,” 1995 Goddard Conference on Space Applications
of Artificial Intelligence and Emerging Information Technologies, NASA CP-3296,
GSFC, Greenbelt, MD, 229–235, 1995.
Kronberg, F., et al., “Re-engineering the EUVE Payload Operations Information Flow
Process to Support Autonomous Monitoring of Spacecraft Telemetry,” Re-Engineering
Telemetry, 286–294, 1995.
Kronberg, F., et al., “Document Retrieval Triggered by Spacecraft Anomaly: Using the
Kolodner Case-Based Reasoning (CBR) Paradigm for a Fault-Induced Response System,”
these proceedings.
Wong, L., et al., “Development and Deployment of a Rule-Based Expert System for
Autonomous Satellite Monitoring,” Proceedings of the Fifth Annual Conference on
Astronomical Data Analysis Software and Systems, Tucson, AZ, 22–25 Oct. 1995.
USE OF XRUNNER FOR AUTOMATION
ABSTRACT
KEYWORDS
INTRODUCTION
All institutions that fly satellites, commercial and government alike, face significant
competitive pressures to reduce cost in all lifecycle phases. Particular attention has been
focused on development and operations costs, areas that drive most of the cost of a space
mission’s ground support component. At NASA's Goddard Space Flight Center (GSFC),
the Mission Operations and Data Systems Directorate (MO&DSD) builds and operates
ground systems. Faced with these competitive pressures, MO&DSD sought to reengineer
its business processes and thus initiated the RENAISSANCE project. At its inception in
1993, RENAISSANCE focused on ground data system development, with the goal of
building an operational ground system in less than 1 year, for less than $5 million. Initial
studies by the RENAISSANCE team led to a first generation architecture based on
reusable building blocks, garnered from GSFC's legacy systems where possible and built
to be reusable (Stottlemyer et al., 1993). Shortly thereafter, NASA Administrator Goldin's
exhortation to "faster, better, cheaper" was taken to imply far more substantial changes.
The RENAISSANCE team responded with a second generation architecture that allows
for extensive use of COTS hardware and software (Stottlemyer et al., 1996).
Indeed, in recent years, commercial off-the-shelf (COTS) hardware and software for
satellite applications has evolved considerably. COTS tools now surpass the functionality
of many custom-built systems and system components. The Eagle testbed, an outgrowth of
the CIGSS (CSC Integrated Ground Support System) COTS and legacy system integration
project of Computer Sciences Corporation (CSC) provides the experience base for CSC's
COTS integration work (Werking and Kulp, 1993; Pendley et al., June 1994). Several
other testbed projects, including the United States Air Force's (USAF) Center for Research
Support (CERES) (Montfort, 1995), the International Maritime Satellite (INMARSAT)
consortium, and the USAF Phillips Laboratory (Crowley, 1995) have produced successful
prototypes using COTS components. The Extreme Ultraviolet Explorer (EUVE) Science
Operations Center (SOC) at the University of California at Berkeley (Malina, 1994) has
adapted a COTS-based system to automate science instrument operations, resulting in
significant cost reductions.
This last project, the automated SOC at Berkeley, points to the other main element of
ground support cost: operations. The effort to reduce the cost of flying satellites must
necessarily address operations, an activity that endures as long as the mission. One way to
reduce the cost of operations is to automate them, replacing some human activities with
computer-based substitutes. For example, the technique of monitoring by exception means
that most data are evaluated within the system; the attention of the human operator is
demanded only when the behavior of some quantity is outside its appropriate bounds.
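Monitoring by exception can be illustrated in a few lines; the parameter names and bounds below are hypothetical:

```python
# Sketch of monitoring by exception: the system evaluates every sample,
# but only out-of-bounds values are surfaced to the human operator.
# Parameter names and bounds are hypothetical illustrations.
BOUNDS = {"bus_voltage_V": (26.0, 32.0), "wheel_speed_rpm": (0.0, 4000.0)}

def exceptions(samples):
    """Yield only the samples an operator needs to see."""
    for name, value in samples:
        lo, hi = BOUNDS[name]
        if not lo <= value <= hi:
            yield name, value

stream = [("bus_voltage_V", 28.1), ("wheel_speed_rpm", 4210.0),
          ("bus_voltage_V", 27.9)]
print(list(exceptions(stream)))  # only the wheel over-speed is surfaced
```

The in-bounds majority of the data never demands human attention, which is precisely how this technique reduces operations staffing.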
Many of the above cited efforts, including our own IMACCS prototype (Bracken et al.,
1995; Klein et al., 1996; Scheidker et al., 1996), have explored the potential of their
COTS based systems for automated operations. In most cases, the first target is on-line
operations—that is, those operations driven by real time telemetry (and in IMACCS,
tracking) data, requiring constant data monitoring and decisions for action made or assisted
by the ground data system. However, satellite operations also involve so-called off-line
operations, such as orbit and attitude determination, scheduling, data trending, and data
archive. These operational activities are usually manually intensive, requiring human
attention, which leads to expanded staff, often with specialized skills.
In this paper, we focus our attention on these off-line operations, specifically those
associated with orbit determination, orbit propagation, and orbit product generation in the
IMACCS prototype. In this prototype, reviewed in the next section, for orbit computations
we utilize Satellite Tool Kit (STK), made by Analytical Graphics, Inc. (AGI), and two of
its add-on tools: the Precision Orbit Determination System (PODS), made by Storm
Integration, and Chains, made by AGI. In addition, for vector transformations and attitude
computation, we utilize MATLAB, made by the Mathworks, Inc. Because these tools are
highly interactive, they are by their nature manually intensive and all lack any native
scripting capability. Considering the general problem to be finding a way to simplify and
automate the operation of these COTS tools, we recognized that a UNIX-based,
Xwindows-compatible test tool with record-replay capability would permit the simplified
operation of STK and MATLAB by recording and replaying the sequences of keystrokes
and mouse clicks needed to use them. Adding scripts written in PERL, a UNIX scripting
language, further permitted these sequences to be executed without human intervention. This automation is
described below in the section following the review of the IMACCS project.
In 1995 CSC proposed that NASA Goddard’s RENAISSANCE team build a COTS-based
prototype to demonstrate that significant cost reductions were possible. The Integrated
Monitoring, Analysis, and Control COTS System (IMACCS) had the following goals:
integrate a set of COTS tools, connect them to live tracking and telemetry data, and
reproduce the functions of an operational ground system (Bracken et al., 1995). The target
mission for IMACCS was the Solar, Anomalous, and Magnetospheric Particle Explorer
(SAMPEX) mission, one of the spacecraft in GSFC's Small Explorer (SMEX) series.
SAMPEX is a low earth orbiting satellite in its fourth year of operational support.
IMACCS was designed to replicate the current real time command and telemetry flight and
off-line support for SAMPEX. A secondary goal of this prototyping project was to explore
its potential for operations automation.
A simplified block diagram of IMACCS is shown in Figure 1. The COTS hardware and
software have capabilities that exceed SAMPEX operations requirements. One tool, the
Altair Mission Control System (AMCS), used in IMACCS for command and telemetry,
shows substantial promise for automating data monitoring and commanding. CSC, through
its Eagle testbed, had prior experience with the AMCS and was familiar with its capacity to
perform automated operational support. The AMCS provides automation through finite
state modeling and state transitions (Wheal, 1993). State modeling and state transitions
proved to be easy to implement, and a set of initial state models was built. Other features
and capabilities of the IMACCS prototype are detailed in Bracken et al. (1995). The use
of the AMCS’s finite state modeling feature to script passes is described in Klein et al.
(1996).
Figure 1: Simplified block diagram of IMACCS, showing the NASCOM tracking and
telemetry inputs, the command load/command management path, LTIS 550 frame
synchronization and CCSDS processing, the AMCS (H&S monitoring, control, and
trending), the BBN Probe, and the flight dynamics tools (RDProc, MATLAB, STK,
PODS), with legacy and COTS components distinguished.
Figure 2 details the off-line functions of interest here and shows the interaction between
XRunner and STK and MATLAB. All of these components are UNIX processes that run
on distributed systems. As the figure shows, IMACCS off-line flight dynamics functions
include both orbit and attitude computations and products. Orbit computations are driven
by tracking data, which for SAMPEX are largely range rate data with some range data.
These data come into the system from NASCOM, and are processed by RDProc to put
them in time order, remove duplicates, and format the data to be read by the STK tools.
The orbit is determined and propagated by PODS, yielding ephemeris files for downstream
use. Chains predicts orbit events of interest, which for SAMPEX are station visibilities for
contacts, equator crossing times, solar illumination periods, and electron contamination
region transit times. MatLab computes and formats the two orbit-derived data products
required for mission support: the extended precision orbit vector uplinked to the satellite’s
onboard processor, and acquisition data (time and a vector) for ground stations to use in
acquiring the satellite. MatLab also computes the spacecraft attitude and attitude sensor
calibration values, based on telemetry from the AMCS and the ephemeris, although this
process is not currently automated.
Figure 2 also depicts the role of XRunner in the automation prototype. In the prototype
reported here, XRunner was used to record executions of operational procedures involving
PODS, Chains, and MatLab, and then replay these keystrokes, mouse moves, and mouse
clicks both to simplify manual operations and as part of automated sequences.
Figure 2: IMACCS off-line flight dynamics functions and the role of XRunner: orbit
processing yields ephemerides and the pass schedule (passed to the real time system),
while MATLAB uses telemetry data to compute and calibrate attitude, compute the orbit
vector for uplink, and generate acquisition data for ground stations.
For recording user interactions, XRunner uses its Test Script Language (TSL), in which
the sequences of keystrokes, mouse clicks, and so forth are recorded. TSL scripts can be
combined with scripts written in Practical Extraction and Report Language (PERL), a
UNIX scripting language, for input or output, user setups, and decision branches. The
IMACCS team recorded the execution of STK, PODS, Chains, and MatLab routines in
TSL scripts and created the necessary PERL scripts to automate SAMPEX orbit-related
off-line operational procedures. Two different test cases are examined here. In Case 1, the
routine production of a pass schedule was simplified by creating the appropriate TSL and
PERL scripts that could be triggered simply by the system operator. In Case 2, routine
orbit determination and orbit production were automated so that these activities took place
with no human intervention unless an anomaly was detected. Both of these cases are
discussed in greater detail in the following subsections.
Case 1: Automated 3-Day Pass Schedule In the IMACCS system, the pass schedule is
computed with STK, based on an orbit determined and propagated by PODS. The
schedule must then be copied via the UNIX shell to a directory from which the AMCS (the
tool used in IMACCS for on-line functions) can read and display it. Performed manually,
the generation of a 3-day schedule requires 30 steps of text key-in and mouse clicks. The
IMACCS team used XRunner with its context-sensitive option to record step-by-step
execution of STK. XRunner generated a TSL script and Xwindow images for STK steps
so that the replay would be the same as the original execution. The team also optimized the
window display response time to avoid a window timing-out. They used a PERL script to
copy this schedule, when it was created, into the appropriate directory for on-line use.
Creating this set of scripts reduced the operator’s basic interaction from thirty steps to a
single step. Steps in the procedure requiring data input, e.g., schedule start and stop times,
still required the same level of operator input, although the scripts prompted the operator
on a context-sensitive basis.
A simple extension of this capability was to create a button on the real time display for the
operator to click in order to produce a hardcopy of the current pass schedule. This button
was implemented on the AMCS display. When clicked with the mouse, the button caused
XRunner TSL and PERL scripts to execute and produce a hardcopy schedule for analysis
or other purposes. These scripts were similar, but not identical, to those for the 3-day
schedule production. The chief difference is that transfer to the AMCS is not needed for this
operational procedure. The same XRunner-recorded STK script was used for this
extension, except that a mouse-click print step was added to print the satellite
schedule. This extension also shows the reusability of XRunner-recorded scripts with
minimal change.
The automated procedure is shown in Figure 3. Cron enables the user to schedule times for
the automatic execution of a process. For SAMPEX, orbit determination and product
generation were scheduled to run automatically under cron.
[Figure 3. Automated orbit determination and product generation: get current time and a
priori vector for orbit determination; set up STK files; run STK/Chains to predict orbit
events and generate the pass schedule; run MATLAB to generate uplink orbit vectors and
acquisition data.]
After orbit determination, the XRunner script returns control to PERL for quality
assurance (QA). If the orbit solution passes QA, another XRunner script is invoked to
control ephemeris propagation and product generation with STK/Chains. MatLab is then
called to format the EPVs and acquisition data. If the orbit solution passes QA, the entire
scenario is accomplished without human intervention, except for the initial preparation of
cron tables. Should the orbit solution fail QA, the PERL executive invokes another script
to notify the appropriate engineer.
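As a rough illustration of this QA-gated control flow (not the actual IMACCS scripts — the command names, report file, and QA criterion below are invented for the sketch), the executive logic might look like:

```python
import subprocess

# Hypothetical stand-ins for the XRunner/TSL replays driven by the PERL executive.
ORBIT_DETERMINATION = ["replay_tsl", "orbit_determination.tsl"]
PRODUCT_GENERATION = ["replay_tsl", "product_generation.tsl"]   # STK/Chains + MatLab
NOTIFY_ENGINEER = ["mail", "-s", "SAMPEX orbit QA failure", "oa-engineer@example.org"]

def solution_passes_qa(report_path):
    """Toy QA gate: accept the orbit solution unless the report flags an anomaly."""
    with open(report_path) as f:
        return "ANOMALY" not in f.read()

def run_pipeline(report_path):
    subprocess.run(ORBIT_DETERMINATION, check=True)
    if solution_passes_qa(report_path):
        subprocess.run(PRODUCT_GENERATION, check=True)  # EPVs + acquisition data
        return "products generated"
    subprocess.run(NOTIFY_ENGINEER, check=True)         # escalate instead of continuing
    return "engineer notified"
```

Under cron, `run_pipeline` would fire at the scheduled orbit-determination times, so the only routine human involvement is maintaining the cron tables.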
CONCLUSION
The two cases described here demonstrate that automated test tools, created for testing
UNIX and X Windows programs, have considerable potential for the automation of spacecraft
ground support. One feature of these tools, keystroke and mouse-click record and replay,
was used extensively in the prototypes described here. While we have applied our test tool
and UNIX scripts to off-line operational procedures, they can also be used for on-line
operations. In either regime, we suggest that such automation can, at a minimum, reduce
operator effort, but also can enable some operational procedures to be executed with no
human intervention at all. Reducing the role of operators for repeated spacecraft operations
is essential for reducing spaceflight costs.
The test tools under consideration here have another feature that we intend to explore in
future work. The tools can compare some or all of the contents of a display window to a
pre-recorded reference. This ability to compare a set of current values with a standard
means that QA for operational procedures can also be automated. In addition, it means that
context-sensitive discrepancies can be noted as triggers for human intervention or for
applied intelligence (AI) agents to be activated. Future work by the IMACCS team will
include exploration of these capabilities.
REFERENCES
Bracken, M. A., Hoge, S. L., Sary, C. W., Rashkin, R. M., Pendley, R. D., & Werking, R.
D., “IMACCS: An Operational, COTS-Based Ground Support System Proof-of-Concept
Project,” 1st International Symposium on Reducing the Cost of Spacecraft Ground
Systems and Operations, Rutherford Appleton Laboratory, Chilton, Oxfordshire, U. K. -
September, 1995.
Crowley, N., “Multimission Advanced Ground Intelligent Control Program,” National
Security Industrial Association Symposium, Sunnyvale, CA - August, 1995.
Klein, J. R. et al., “State Modeling and Pass Automation in Spacecraft Control,” 4th
International Symposium on Space Mission Operations, Munich, Germany - September,
1996.
Montfort, R., “Center for Research Support, A New Acquisition Management Philosophy
for COTS-Based TT&C Systems,” National Security Industrial Association Symposium,
Sunnyvale, CA - August, 1995.
Pendley, R. D., Scheidker, E. J., & Werking, R. D., “An Integrated Satellite Ground
Support System,” 4th Annual CSC Technology Conference, Atlanta, GA - June, 1994.
Pendley, R. D., Scheidker, E. J., Levitt, D. S., Myers, C. R., & Werking, R. D.,
“Integration of a Satellite Ground Support System Based on Analysis of the Satellite
Ground Support Domain,” 3rd International Symposium on Space Mission Operations and
Ground Data Systems, Greenbelt, MD - November, 1994.
Scheidker, E. J., Pendley, R. D., Rashkin, R. M., Werking, R. D., Cruse, B. G., &
Bracken, M. A., “IMACCS: A Progress Report on NASA/GSFC's COTS-Based Ground
Data Systems, and Their Extension into New Domains,” 4th International Symposium on
Space Mission Operations, Munich, Germany - September, 1996.
Stottlemyer, A. R., Jaworski, A., & Costa, S. R., “New Approaches to NASA Ground
Data Systems,” Proceedings of the 44th International Astronautical Congress, Q.4.404.
Graz, Austria - October, 1993.
Eugene L. Law
543200E
NAWCWPNS
Point Mugu, CA 93042
ABSTRACT
This paper will describe a test technique developed to measure the delays caused by the
use of asynchronous multiplexers/demultiplexers. These devices are used for both signal
transmission (microwave and fiber optic) and signal recording (especially helical scan
digital recorders). The test technique involves the generation and decoding of
asynchronous telemetry signals. The bit rates of the telemetry signals are variable.
Relative time is embedded in the telemetry signal as a 32-bit data word. The paper will
also present measured delays for two multiplexers/demultiplexers for different
combinations of bit rates.
KEY WORDS
BACKGROUND
Digital telecommunications systems and digital data recorders are widely used for
aeronautical telemetry applications. An asynchronous multiplexer (MUX) typically is used
to multiplex several independent telemetry signals into the digital telecommunications
system or helical scan digital data recorder. A corresponding demultiplexer (DEMUX) is
used to recover the telemetry signals. The telemetry signals might include missile sensor
data and target sensor data that must be “fused” together to properly analyze what
happened during the missile’s encounter with the target. Typically, the data sources do not
include embedded time. Therefore, the timing of the events can only be reconstructed if the
delays of both signals are known quite accurately (this problem is eliminated if the delays
of both signals are the same). Unfortunately, the time delay introduced by the
asynchronous multiplexers/demultiplexers may not be the same for all telemetry signals.
Therefore, a specialized test set was developed to measure the delays introduced by these
devices. The test set described below is not the only method for measuring these delays.
The tests could be performed with pseudo-random generators plus pattern detectors at the
input and output of the device under test. The delay is the time difference between the
outputs of the detectors.
The test set consists of a transmitter and a receiver. The transmitter accepts four external
clock signals and generates four independent randomized non-return-to-zero level
(RNRZ-L) signals with zero degree clocks. The maximum bit rate is 10 Mbps. The data
signals are structured as telemetry minor frames. Each minor frame includes a 32-bit sync
word, a 32-bit time word, an 8-bit subframe sync word and some fill bits. The major frame
consists of 256 minor frames. The time data is generated by clocking a 32-bit counter at a
1 MHz rate. Every minor frame, the value of this counter is latched and sent out as the
32-bit time word. This method was chosen for simplicity of implementation; however,
Global Positioning System (GPS) time or any other time source could be used. The functions
that are common to the four channels are implemented in four identical programmable gate
array devices.
The receiver is synchronized to the transmitter by pushing a button to reset the 32-bit
counters in both the transmitter and the receiver. The receiver uses the transmitter's 1 MHz
clock to clock its 32-bit counter. The receiver accepts RNRZ-L data and clock from the
device under test. The data is derandomized and the frame synchronization words are
detected. The counter value is latched at frame synchronization detection. The counter
value contained in this minor frame is subtracted from the current counter value to give the
time difference in microseconds. This value is displayed on a light emitting diode (LED)
display. The display can be updated every minor frame, every major frame, or held. The
functions that are common to the four channels are implemented in eight programmable
gate array devices.
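The delay computation itself is simple once the time word is recovered. The following sketch mimics it in software; the frame layout follows the description above, but the sync pattern and field packing are illustrative, not the actual hardware's.

```python
import struct

SYNC = 0xFE6B2840  # illustrative 32-bit frame sync pattern, not the test set's

def build_minor_frame(tx_counter, subframe_id, fill_bytes=3):
    """32-bit sync word + 32-bit time word + 8-bit subframe sync + fill bits."""
    return struct.pack(">IIB", SYNC, tx_counter & 0xFFFFFFFF, subframe_id) + bytes(fill_bytes)

def measure_delay_us(frame, rx_counter):
    """Delay in microseconds (1 MHz counters): receiver counter minus the
    embedded time word, modulo 2**32 to handle counter wrap."""
    sync, time_word, _ = struct.unpack(">IIB", frame[:9])
    assert sync == SYNC, "frame sync not found"
    return (rx_counter - time_word) % (1 << 32)

frame = build_minor_frame(tx_counter=1_000_000, subframe_id=7)
print(measure_delay_us(frame, rx_counter=1_000_650))  # 650 us through the device under test
```

Because both counters are reset together and clocked from the same 1 MHz source, the subtraction directly yields the device-under-test delay displayed on the LED readout.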
The delays of the bit synchronizers used to convert the analog output of the telemetry
receiver to a digital signal and clock were also measured. A newer, mostly digital bit
synchronizer added a delay of 32 bits while an older, mostly analog bit synchronizer added
a delay of 5 bits. These delays were independent of bit rate. Both delays are very small
compared to the delays introduced by the asynchronous MUX/DEMUX.
TEST RESULTS FOR HELICAL SCAN DIGITAL RECORDER MUX/DEMUX
CONCLUSIONS
The time delays introduced by the DS2 asynchronous MUX/DEMUX are a function of the
input bit rate and the percentage of the MUX/DEMUX capacity that is being used. The
delay is 650 ms at a bit rate of 44 kbps and less than 5 ms at a bit rate of 5 Mbps. The
variability in delays makes it very difficult to precisely time align signals from different
sources. This problem is especially difficult if the data rates are significantly different.
The time delays introduced by the asynchronous MUX/DEMUX used with helical scan
digital recorders are a function of the largest bit rate setting of any of the multiplexer
channels. The delays are the shortest with a maximum setting of 10 Mbps and generally
increase as the maximum MUX setting is decreased. The delays of the four individual
channels were the same for the conditions tested thus far. Therefore, the time alignment of
the signals was not changed by this MUX/DEMUX.
ACKNOWLEDGMENTS
The test set was designed and fabricated by Albert Gabaldon, Jeff Hutmacher, and George
Mills of the Weapons Instrumentation Division, NAWCWPNS, China Lake, CA. This
effort was funded by the member organizations of the Telemetry Group of the Range
Commanders Council.
8PSK SIGNALING
OVER
NON-LINEAR SATELLITE CHANNELS
Rubén Caballero
Center for Space Telemetry and Telecommunications Systems
The Klipsch School of Electrical and Computer Engineering
New Mexico State University
Las Cruces, NM 88003-001
ABSTRACT
KEY WORDS
INTRODUCTION
During its 12th annual meeting (November 1992 in Australia), the Space Frequency
Coordination Group (SFCG-12) requested the Consultative Committee for Space Data
Systems (CCSDS) Radio Frequency (RF) and Modulation Subpanel to study and compare
various modulation schemes with respect to the bandwidth needed, power efficiency,
spurious emissions and interference susceptibility.
An end-to-end system performance study, including the InterSymbol Interference (ISI) and the
Symbol Error Rate (SER) as a function of Es/No for 8PSK modulation, was conducted using
SPW software installed on a SUN Sparc Station 10 and an HP Model 715/100 Unix Station
at NMSU. The simulations were based on the Non-Return-to-Zero Logic (NRZ-L) data
format. The end-to-end system evaluation was performed using ideal and non-ideal data
with ideal system components and three baseband filter types: 5th Order Butterworth, 3rd
Order Bessel and Square Root Raised Cosine (SRRC), α = 0.25, 0.5 and 1, to observe the
effect of pulse shaping on bandwidth and SER. Ideal data are defined as having a perfect
symmetry, i.e., the duration of a digit one (1) is equal to the time duration of a digit zero
(0) and it also has a perfect data balance, i.e., the probability of getting a zero is equal to
the probability of getting a 1 (Pr(0) = Pr(1) = 0.5 or 50%). For non-ideal data these two
conditions, data symmetry and data balance, are not respected. The CCSDS limits are
± 2% for data asymmetry and a data imbalance of 0.45 (probability of getting a 1 vs
probability of a 0) as mentioned in [1]. If non-ideal data are present (i.e., the mean value or
expected value, ma , of the signal is not equal to 0), the Power Spectrum Density (PSD) of
the digital signal will then consist not only of a continuous spectrum that depends on the
pulse-shape spectrum of the signal data (rectangular pulse for NRZ-L), but will also
contain spectral lines (delta functions or spurious emissions) spaced at approximately the
harmonics of the symbol rate, Rs . The PSD equation of a baseband digital signal for the
case of uncorrelated data is derived in [2]. To complete this work, the relationship between
the non-constant envelope introduced by pulse shaping the 8PSK signal and the bandwidth
of the filter was to be investigated. Note that many analyses were performed on pulse
shaping, non-constant envelope and non-linear channels, but none of them involved the
measurement of the effect of a non-constant envelope signal (8PSK for this case) through a
non-linear channel (SSPA).
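The imbalance-to-spectral-line argument can be checked numerically. This small Monte Carlo estimate (illustrative only, not part of the SPW study) shows the nonzero mean m_a of an imbalanced NRZ-L (+1/−1) stream, which is what puts discrete spectral lines into the PSD, while balanced data has m_a = 0 and a purely continuous spectrum:

```python
import random

def nrz_mean(p_one, n=100_000, seed=1):
    """Estimate the mean of an NRZ-L stream where Pr(bit = 1) = p_one.

    Analytically, m_a = p_one*(+1) + (1 - p_one)*(-1) = 2*p_one - 1.
    """
    rng = random.Random(seed)
    return sum(1 if rng.random() < p_one else -1 for _ in range(n)) / n

print(round(nrz_mean(0.45), 2))  # near -0.1: nonzero mean -> spectral lines
print(round(nrz_mean(0.50), 2))  # near 0.0: balanced data, no discrete lines
```

The 0.45 case corresponds to the CCSDS data-imbalance limit quoted above.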
First a general SPW block diagram used for the simulations will be described. Afterwards
the results of the simulations with respect to the power containment, spurious emissions,
SER and non-constant envelope effect on the bandwidth for different types of spectrum
shaping filters are given. Finally conclusions of the work on 8PSK conducted at NMSU
are given at the end of this paper.
COMPUTER SIMULATION TEST SYSTEM
First a Data Source or Modulator was used to produce the 8PSK signal. The data source
also contained a block that could produce data asymmetry on the data. All simulations
were produced at baseband using the complex envelope signal representation. The results
do not vary if the simulations are done in baseband or passband. The advantage of going to
baseband is the computer execution time, i.e., it takes longer to simulate at passband since
the sampling frequency must be at least twice the Nyquist rate. The next block after the
modulator is the baseband spectrum shaping filter. The three types of filters that were used
for these simulations were 5th Order Butterworth, 3rd Order Bessel, and finally Square
Root Raised Cosine (SRRC) with Sampled Data and Roll-Off factors of 0.25, 0.5 and 1.
The next blocks include the Solid State Power Amplifier (SSPA), a bandlimiting filter
and the noise (channel). Simulations were performed using the SSPA at the saturation level
to maximize the power. NMSU used the SSPA model for their simulation which is based
upon specifications provided by the European Space Agency (ESA) for their 10 Watts,
solid state, S-band power amplifier. This amplifier was followed by a 2nd Harmonic filter
(4th Order Butterworth with a bandwidth of ± 20Rs) which reduces the interference
between different channels. This bandlimiting filter is followed by a variable Additive
White Gaussian Noise (AWGN) block. These blocks were followed by the Receiver,
Symbol Synchronizer and finally the Error Estimator. A matched filter was used as the
receiver for the Butterworth and Bessel Filters. The matched filter was matched to the
NRZ-L baseband data. For the SRRC, the receiver was also a SRRC to minimize the ISI.
The SRRC will minimize the ISI and the matched filter is an optimum filter which
optimizes the Signal-to-Noise Ratio (SNR). The synchronizer that was used consisted of a
Delay & Phase Meter that would correlate the initial data with the data that went through
the channel. After these two signals went through the Delay & Phase Meter, the initial data
were delayed to be synchronized with the distorted data (the delay between the initial and
distorted data were caused by the filters). After the data were synchronized, an Error Rate
Estimator was used to measure the differences in symbols and calculate the SER.
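A stripped-down version of this measurement chain, with only the modulator, AWGN channel, and error estimator (no filter, SSPA, or synchronizer), can be sketched as follows; it reproduces only the ideal-reference behavior, not the paper's filtered results:

```python
import cmath, math, random

def ser_8psk(esno_db, n_symbols=50_000, seed=2):
    """Monte Carlo SER for 8PSK over AWGN with nearest-phase decisions."""
    rng = random.Random(seed)
    n0 = 1.0 / (10 ** (esno_db / 10))  # noise density for unit symbol energy
    sigma = math.sqrt(n0 / 2)          # noise std per real dimension
    errors = 0
    for _ in range(n_symbols):
        k = rng.randrange(8)
        s = cmath.exp(1j * 2 * math.pi * k / 8)                  # transmitted symbol
        r = s + complex(rng.gauss(0, sigma), rng.gauss(0, sigma))  # AWGN channel
        k_hat = round(cmath.phase(r) * 8 / (2 * math.pi)) % 8    # nearest decision region
        errors += k_hat != k
    return errors / n_symbols

print(ser_8psk(15.68))  # about 1e-3, near the 15.68 dB reference used in Table 2
```

Subtracting the reference Es/No needed for a given SER in this ideal chain from the Es/No needed with a filter in the loop is exactly how the ISI losses tabulated later are obtained.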
SIMULATION RESULTS
Note that at approximately ± 2RB in Figure 3, the filter attenuates the data sidebands by
approximately 40 dB. Also due to the non-constant envelope of the data (since some
spectrum shaping was performed), the output of the SSPA is quite different from Figure 2.
In fact, it seems like the SSPA is trying to recreate the sidelobes that were attenuated by
the filter. Also, the spurious emissions encountered in simulations with the unfiltered case
and non-ideal data case are not present or are fairly attenuated. It was found that in-band
spurious emissions are more present and more evident for the Bessel Filter than the
Butterworth Filter (Bandwidth-symbol Time product : BT=1) (refer to [4]). With respect to
the sideband attenuation, it was found that the values of attenuation for the 3rd Order
Bessel are comparable to the 5th Order Butterworth. For SRRC filters with α = 0.25 and
α = 0.5, the bandwidth is narrower than the Butterworth and Bessel Filters but the
attenuation is less at high frequencies. Nonetheless, the absence of spurious emissions is a
net advantage. For SRRC α = 1, the bandwidth is wider than the two previous SRRC
filters and the absence of in-band spurious emissions was again noticed. Less sideband
attenuation was recorded for this roll-off factor compared with α = 0.25 and α = 0.5. To
see the overall effect of spectrum shaping, a frequency band Utilization Ratio (D) was
defined as the ratio of the frequency at which the unfiltered data spectrum falls 50 dB
below the main lobe to the corresponding frequency for the filtered spectrum.
This ratio is an estimate of how many more spacecraft using a specific frequency band can
be included in the bandwidth using spectrum shaping filtering as opposed to no filtering.
The following assumption in deriving this ratio was made: the spectra from spacecraft in
adjacent channels will be permitted to overlap one another provided that, at the frequency
where the overlap occurs, the signals are at least 50 dB below that of the main telemetry
lobe (1st data sideband). Using SPW, measurements were made using unfiltered and
non-ideal data to determine the frequency at which the data spectrum was 50 dB below the
main lobe. Power Spectrum plots like the one shown earlier were used to determine this
50 dB level. For example, for ideal data without a spectrum shaping filter the -50 dB point
is situated at approximately 35 RB (extrapolation of Figure 2), and using the Butterworth
Filter (BT=1) with ideal data, the -50 dB point is approximately at 2.5 RB (refer to
Figure 3); the Utilization Ratio, D, is then equal to 35 RB / 2.5 RB = 14.
Note that the way the ratio is calculated in this example is different from the formula given
previously, but the result is the same, i.e., the number of spacecraft with filtering
accommodated in the frequency band will be larger than if it were not filtered. Table 1
summarizes the calculations of the Utilization ratio, D, for the spectrum shaping filters that
were used.
From Table 1 it is obvious that baseband filtering offers a significant improvement in the
number of spacecraft operating in a frequency band. In fact by baseband filtering, the
bandwidth utilization can increase by a factor of 12 to 24.
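The worked example and the Table 1 entries can be reproduced by treating D as the ratio of the −50 dB frequencies (this reading is inferred from the worked example; both bandwidths are in symbol-rate units RB):

```python
def utilization_ratio(unfiltered_50db_rb, filtered_50db_rb):
    """D = (-50 dB frequency, unfiltered) / (-50 dB frequency, filtered)."""
    return unfiltered_50db_rb / filtered_50db_rb

print(utilization_ratio(35, 2.5))  # 14.0: Butterworth BT=1, ideal data
print(utilization_ratio(40, 2.3))  # about 17.4: Butterworth BT=1, non-ideal data
```

Applied across the filters, this yields the factor-of-12-to-24 improvement quoted above.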
End-to-End System Performance: Symbol Error Rate (SER): To measure the end-to-end
system performance, the SER was measured for different values of average symbol energy
to noise ratios (Es/No). Such a measurement was performed using SPW and the system
described previously. The three types of baseband filters described earlier were used. To
choose the best filter from such a study one selects the filter that has a BT product such
that the filter ISI loss is < 0.4 dB (refer to [3]). To obtain a measure of the filter ISI loss, a
Table 1. Utilization Ratio (D) for the spectrum shaping filters

FILTER TYPE                          Ideal Data   Ideal Data       Non-Ideal Data  Non-Ideal Data
                                     -50 dB pt.   Util. Ratio (D)  -50 dB pt.      Util. Ratio (D)
None, Unfiltered Data (Reference)    35 RB        1                40 RB           1
Butterworth, 5th Order (BT=1)        2.5 RB       14               2.3 RB          17.39
Bessel, 3rd Order (BT=1)             2.95 RB      11.86            2.5 RB          16
SRRC (α = 0.25), Sampled Data        1.7 RB       20.59            1.7 RB          23.5
SRRC (α = 0.5), Sampled Data         1.8 RB       19.4             1.7 RB          23.5
SRRC (α = 1), Sampled Data           2.5 RB       14               2.2 RB          18.18
reference needed to be established. An ideal linear system is used where ideal data and
hardware are utilized to provide a reference for comparisons with the system being studied
(a plot of Probability of error versus Es/No is derived and called Theoretical plot in this
case for 8PSK [2]). Figures 4, 5 and 6, respectively, show the SER for 8PSK: 5th Order
Butterworth BT=1, 2 and 3 (SSPA at Saturation & Ideal Data); SER for 8PSK: 3rd Order
Bessel BT=1, 2 and 3 (SSPA at Saturation & Ideal Data); and SER for 8PSK: SRRC α=1
(no SSPA) and α=0.25, 0.5 and 1 (SSPA at Saturation & Ideal Data). From these simulations
it is then possible to determine the ISI loss due to the system. This is found by taking the
difference between the Es/No values measured with and without the filter. As indicated in
[3], the losses are measured over values of 10⁻³ ≤ SER ≤ 10⁻², which is the normal
operating region for CCSDS encoded data. Losses are then tabulated for BT = 1, 2, 3 (see
Table 2). Also, as indicated in that same report, the optimal filter is found by selecting the
lowest BT providing acceptable ISI loss (in this case a threshold of ISI loss < 0.4 dB). The
following table gives the results of the baseband filter ISI losses at SER = 10⁻³ using
Figures 4, 5 and 6:
Note that only the loss of the SRRC with α=1.0 is shown in the table, since the other
SRRC filters have higher losses (as shown in Figure 6) and their bandwidths are
much smaller than BT=1; therefore they cannot be compared with the Butterworth or
Bessel filters. It must be emphasized that these simulations were performed with ideal data;
therefore, since these values of BT barely meet the ISI loss < 0.4 dB requirement, with a
non-ideal system this threshold would not be met (add approximately 0.5 to 1.0 dB to the
tabulated values).
Table 2. Baseband filter ISI losses (dB) at SER = 10⁻³

FILTER TYPE            LOSS (BT=1)            LOSS (BT=2)            LOSS (BT=3)
Butterworth 5th Order  17.86 - 15.68 = 2.18   16.29 - 15.68 = 0.61   16.05 - 15.68 = 0.37
Bessel 3rd Order       17.52 - 15.68 = 1.84   16.97 - 15.68 = 1.29   16.09 - 15.68 = 0.41
SRRC (α=1.0)           16.50 - 15.68 = 0.82   --                     --
Non-Constant Envelope : Pulse shaping gives a smaller bandwidth but can produce a
non-constant envelope which reduces the performance of the communication system. On
the other hand, no pulse shaping gives a larger BW but the signal has a constant envelope
which is a plus on the performance of the receiver filter. The trade-off between constant
envelope and bandwidth (BW) then is worth investigating. The bandwidth and envelope
variation were measured for different types of pulse shaping to observe the relationship
between these two parameters. To analyze the effect of a non-constant envelope going
through the SSPA, different simulations were conducted on SPW with the 3 types of
filters mentioned earlier. It was noted that the non-constant envelope going through the
SSPA (at saturation level) creates a rotation (AM-PM conversion) and a spreading of the
symbols (AM-AM conversion), as expected. Figures 7, 8 and 9 show the effect of varying
the non-constant envelope of a signal (by using different BT or α) through an SSPA.
Average Symbol Variance is used as a parameter indicating the spreading of the symbol
with respect to the mean for each of the 8 decision regions. For the 5th Order Butterworth
Filter in Figure 7, it is noted that the variance substantially decreases in an exponential-
sinusoidal manner. From BT= 1 to BT= 1.1, the values decrease rapidly which can be
explained by the fact that for BT=1, the filter is narrower and contains only the main lobe
of the power spectral density. Figure 8, shows the results of variance vs BT for the 3rd
Order Bessel Filter. Again as in the Butterworth case presented earlier, the average
variance gradually decreases in a sinusoidal form. The damping is more pronounced for the
3rd Order Bessel than the 5th Order Butterworth since the Bessel’s variance starts to
become a flat line around BT=1.8 but for the Butterworth Filter case the steady line
becomes present around BT=3. Also note that the average variance is appreciably less in
magnitude for the Bessel filter (order of magnitude ≈ 10⁻⁴) than the Butterworth filter
(order of magnitude ≈ 10⁻²). Finally, Figure 9 shows the average variance versus the roll-off
factor (α) for the SRRC filter. In this case note that the aspect of the plot is really different
from the two previous filters. In fact, for the SRRC, the plot appears to have the aspect of
a parabola with a maximum at approximately α=0.45. Note that simulations could not be
performed for 0 ≤ α < 0.1 due to SPW software limitations. Nonetheless, it can be seen
that the curve gradually decreases to 0 as α reaches 0. This can be explained by the fact
that for α=0, the frequency response of a SRRC is a pure square wave (or “brick wall”).
Thus in the time domain, it would then correspond to a pure sinc (sin(x)/x) function which
would have the zero crossings at the sampling frequency therefore eliminating any ISI. For
the case where α reaches 1, the average variance also decreases toward 0 (but it does not
reach it). This can be explained since the SRRC filter has a bigger BW for higher α,
therefore allowing more of the power spectrum to be included in the transmission; this is
the same result as increasing the BT for the Butterworth or Bessel Filter. Finally note how
the magnitude of the average variance is larger for the SRRC filter than the Butterworth
and Bessel.
CONCLUSIONS
From the simulations on 8PSK Baseband Filtering performed in the Center for Space
Telemetering and Telecommunications Systems at NMSU, the following conclusions can
be made on the PSD, SER and Non-Constant Envelope. With respect to PSD and SER for
the 8PSK signal, in-band spurious emissions are more present and more evident for the
Bessel Filter than the Butterworth Filter (BT=1). With respect to the sideband attenuation,
it was found that the values of attenuation for the 3rd Order Bessel are comparable to the 5th
Order Butterworth. For SRRC filters with α = 0.25 and α = 0.5, the bandwidth is narrower
than the Butterworth and Bessel Filters but the attenuation is less at high frequencies.
Nonetheless, the absence of spurious emissions is a net advantage. For SRRC α = 1, the
bandwidth is wider than the two previous SRRC filters and the absence of in-band
spurious emissions was again noticed. Less sideband attenuation was recorded for this
roll-off factor compared with α = 0.25 and α = 0.5. For SER, it was found that the
Butterworth and Bessel Filters just barely meet the threshold of ISI loss < 0.4 dB at
SER = 10⁻³. Also, the SRRC filters do not meet this specification. Overall, it was shown by
using baseband filtering that the bandwidth utilization can be improved by a factor of
approximately 12 to 24 with BT=1 and 8PSK (see Table 1 in this report) which can
significantly increase the spectrum utilization.
ACKNOWLEDGEMENTS
The author would like to thank the National Aeronautics and Space Administration
(NASA) for their support under Grant # NAG 5-1491. Also a special thanks to Dr. Sheila
Horan and the people in the Center for Space Telemetry and Telecommunications Systems
at New Mexico State University for their dedication towards Telemetry and Space-
Communications.
REFERENCES
[1] Warren L. Martin and Tien M. Nguyen, “CCSDS - SFCG Efficient Modulation
Methods Study, A Comparison of Modulation Schemes, Phase 2: Spectrum Shaping
(Response to SFCG Action Item 12/32),” SFCG Meeting, Rothenberg, Germany
14-23 September 1994, report dated August 1994.
[2] Leon W. Couch II, “Digital and Analog Communication Systems,” Macmillan
Publishing Company, 1993, Fourth Edition.
[3] Warren L. Martin, Tien M. Nguyen, Aseel Anabtawi, Sami M. Hinedi, Loc V. Lam
and Mazen M. Shihabi, “CCSDS - SFCG Efficient Modulation Methods Study
Phase 3: End-to-End System Performance,” Jet Propulsion Laboratory, May 1995.
[4] Rubén Caballero, “8PSK Signaling over Non-Linear Satellite Channels,” Masters
Thesis, New Mexico State University, May 1996.
CONCURRENT TELEMETRY PROCESSING TECHNIQUES
Jerry Clark
Lockheed Martin Telemetry & Instrumentation
ABSTRACT
Improved processing techniques, particularly with respect to parallel computing, are the
underlying focus in computer science, engineering, and industry today. Semiconductor
technology is fast approaching device physical limitations. Further advances in computing
performance in the near future will be realized by improved problem-solving approaches.
For instance, a communication system’s network bandwidth may not correspond to the
central processor speed or to module memory. Similarly, as Internet bandwidth is
consumed by modern multimedia applications, network interconnection is becoming a
major concern. Bottlenecks in a distributed environment are caused by network
interconnections and can be minimized by intelligently assigning processing tasks to
processing elements (PEs). Processing speeds are improved when architectures are
customized for a given algorithm.
Parallel processing techniques have been ineffective in most practical systems. The
coupling of algorithms to architectures has generally been problematic and inefficient.
Specific architectures have evolved to address the prospective processing improvements
promised by parallel processing. Real performance gains will be realized when sequential
algorithms are efficiently mapped to parallel architectures. Transforming sequential
algorithms to parallel representations utilizing linear dependence vector mapping and
subsequently configuring the interconnection network of a systolic array will be discussed
in this paper as one possible approach for improved algorithm/architecture symbiosis.
KEY WORDS
PROBLEM REPRESENTATION
DATA DEPENDENCIES
Variables within an algorithm represent physical data that must be stored during
processing. Commonly, variables can be mapped to a register or latch in a hardware
design. The relationship between these variables can be described temporally and spatially.
Temporal aspects of a variable refer to the data flow or sequence in which the specific data
is referenced. Spatial attributes refer to the variable scope or the visibility of a variable
within a process. Variable relationships, both temporal and spatial, are termed data
dependencies. Consider the following hypothetical algorithm composed of three statements
Si:
S1: A=B+C
S2: B=A+E
S3: A=A+B
Note that variable A is generated (output) in statement S1. Variable A is used (input)
within S2. Finally, A is both used (input) and generated (output) in S3. In general, the
variables on the left-hand side (LHS) of the assignment operator are the generated
variables and the elements on the right-hand side (RHS) of the assignment operator are the
used variables. Dependence vectors are created by subtracting the generated from the used
variables. Dependence vectors and matrices will be discussed later. For now, it is
important to see that various statements within a program have dependencies on other
statements. Figure 1 shows the various dependencies on a single variable A.
Figure 1. Dependencies on Variable A from Example 1 Algorithm
Seven dependencies exist for the three statements Si shown in Figure 1. Three basic
dependencies are used to analyze and transform a sequential algorithm: data flow
dependence, data anti-dependence, and data output dependence.
Data Flow Dependence
In Figure 2, d1, d2, and d3 are dependencies in which the generated variable (LHS) was
defined before another statement Si referenced this variable on its RHS. A simple arrow is
drawn from the generated variable to the used variable.
Data Anti-Dependence
Definition — Data anti-dependencies are variable relationships in which the used (RHS)
variable occurs temporally before it is defined or generated on the LHS. An analogy is
reading before writing (RBW).
In Figure 2, d4, d6, and d7 are by definition data anti-dependencies. An additional line is
added to the simple arrow from the generated to the used variable to denote this
dependency.
Data Output Dependence
In Figure 2, only d5 is data output dependent. This dependence type is denoted between
the temporally disjointed generated variables by adding a circle to the connecting arrow.
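The three dependence types can be identified mechanically from the generated/used sets of each statement. The toy classifier below reproduces the seven dependencies of the Example 1 algorithm; treating the within-statement reference in S3 (A read, then written) as an anti-dependence is an assumption made here so the count matches the figure.

```python
# Each statement: (name, generated/LHS variables, used/RHS variables).
statements = [
    ("S1", {"A"}, {"B", "C"}),   # A = B + C
    ("S2", {"B"}, {"A", "E"}),   # B = A + E
    ("S3", {"A"}, {"A", "B"}),   # A = A + B
]

def dependencies(stmts):
    deps = []
    for i, (ni, gen_i, use_i) in enumerate(stmts):
        for v in gen_i & use_i:
            deps.append(("anti", ni, ni, v))        # read-before-write in one statement
        for nj, gen_j, use_j in stmts[i + 1:]:
            for v in gen_i & use_j:
                deps.append(("flow", ni, nj, v))    # write, then a later read
            for v in use_i & gen_j:
                deps.append(("anti", ni, nj, v))    # read, then a later write
            for v in gen_i & gen_j:
                deps.append(("output", ni, nj, v))  # two writes of the same variable
    return deps

for d in dependencies(statements):
    print(d)
```

Running this yields three flow, three anti, and one output dependence, matching the seven arcs described for Figure 1.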
DEPENDENCE VECTOR
A dependence vector is defined as
D = I2 − I1 (1)
where I2 is the index set for the used variables and I1 is the index set for the generated
variables. Dependence vector elements are simply integer values that indicate how many
iterations exist between the generated and used variables.
To compose a dependence vector D, first locate all pairs of generated/used variables. Then
subtract I1 from I2. Specific examples will be given, and special cases will be discussed for
handling the signs of the resultant dependence vector elements.
Dependencies can be variable if one or more of the index sets I1 or I2 are generated from a
function. This type of dependency is referred to as a variable dependency and is not
addressed in this paper. Index vectors composed of simple integer values are termed
constant dependencies and produce a regular or repeatable pattern during algorithm
evaluation.
The basic trade-off that parallel designers face is balancing the algorithm performance
improvement (speedup) with the space required for the PEs (area) to implement the
transformed algorithm. Fundamentally, this is a temporal/spatial trade-off.
Transforms can be defined that take the dependence vector from sequential space to a new
partitioning in optimized parallel space. Index sets denoted by Jn are an nth-dimensional
index set of an algorithm with n nested loops. Each point in the nth-Cartesian space is a
distinct instance of the algorithm index set. Example 2 depicts an algorithm whose
generated and used variables are composed of a 3-dimensional vector or matrix derived
using the index set J3 = {(j0,j1,j2) | 1 ≤ j0,j1,j2 ≤ N}.
Used - Generated
d1 = (j0-1,j1+1,j2) - (j0,j1,j2) //on variable a
d2 = (j0-1,j1,j2+1) - (j0,j1,j2) //on variable b
d3 = (j0-1,j1-1,j2+2) - (j0,j1,j2) //on variable b
d4 = (j0,j1-1,j2+2) - (j0,j1,j2) //on variable b
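Composing D from these pairs can be sketched as follows. The used-index offsets below are the ones consistent with the tij constraint set and the vector (2 3 4 2) quoted later in the text, and the sign normalization implements the "special case" handling mentioned earlier:

```python
# Sketch: build the dependence matrix D for Example 2 by subtracting the
# generated index (j0, j1, j2) from each used index, then flipping the sign
# of any vector whose leading nonzero element is negative.
used_offsets = [(-1, +1, 0),   # d1: (j0-1, j1+1, j2)
                (-1, 0, +1),   # d2: (j0-1, j1,   j2+1)
                (-1, -1, +2),  # d3: (j0-1, j1-1, j2+2)
                (0, -1, +2)]   # d4: (j0,   j1-1, j2+2)

def normalize(d):
    """Flip the vector's sign so its leading nonzero element is positive."""
    lead = next((x for x in d if x != 0), 0)
    return tuple(-x for x in d) if lead < 0 else tuple(d)

D = [normalize(d) for d in used_offsets]   # columns of the dependence matrix
```

The normalized columns are (1,-1,0), (1,0,-1), (1,1,-2), and (0,1,-2), which reproduce the constraint set derived below.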
Now that a construct has been defined which models the variable dependencies within an
algorithm A, it is desirable to transform A to a parallel realization if possible. Let T be a
transform function that operates on dependence matrix D.
Ĵn = T(Jn) (2)
D̂ = TD (3)
A valid execution ordering requires
Π di > 0 (4)
for every dependence vector di; that is, the first element of each column of the transformed dependence matrix D̂ must be greater than 0.
TRANSFORMATION MATRIX T
The transformation function T is chosen to optimize both temporal and spatial elements for
a parallel architecture. This is the classic speed/area trade-off that designers must balance
when specifying a hardware or software solution.
The essence of index set transformation is to find a function that transforms a sequentially-
ordered index set into another index set on which a parallel-execution ordering exists [1].
T is composed of both temporal (Π) and spatial (S) elements:
T = [Π]
    [S] (5)
Suppose we wish to find a transform T for Example 2. Using the dependence matrix D and
equation (4), the conditions for the elements tij are derived:
Π d1 > 0:....................t11 - t12 > 0
Π d2 > 0:....................t11 - t13 > 0
Π d3 > 0:....................t11 + t12 - 2t13 > 0
Π d4 > 0:....................t12 - 2t13 > 0
Several transformations Π = [t11 t12 t13] can be found by hand or by computer. Five
possible transforms are shown in Table 1.
Table 1. Selected Π (time) Transformations for Example 2
The parallel computation time, tΠ, shown in the last column of Table 1, was computed
using equations (6) and (7) below,
tΠ = (max Π(j − k))/(min ΠD) + 1, j, k ∈ Jn (6)
min ΠD = mini (Π di) (7)
where j and k range over the index set of the sequential algorithm A. max Π is the largest
temporal vector from T if more than one vector is defined. For Example 2, max Π = Π5 =
[2 0 -1]. min ΠD is the smallest element of ΠD. Using Table 1, the smallest element in the
vector (2 3 4 2) is 2. Choosing Π5 implies that the sequential algorithm in Example 2 can
be transformed using T. Parallel processing time is tΠ = (3N-1)/2.
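The constraint search and the timing evaluation above can be sketched by brute force. The integer search range and the reading of equations (6) and (7) as the index-set span divided by min ΠD are our assumptions for illustration:

```python
from itertools import product

# Sketch: brute-force search for temporal vectors Pi = [t11, t12, t13]
# satisfying Pi . di > 0 for every column of the sign-normalized
# dependence matrix of Example 2, then evaluate the parallel time.
D = [(1, -1, 0), (1, 0, -1), (1, 1, -2), (0, 1, -2)]

def valid(pi):
    return all(sum(p * x for p, x in zip(pi, d)) > 0 for d in D)

candidates = [pi for pi in product(range(-2, 3), repeat=3) if valid(pi)]
assert (2, 0, -1) in candidates          # Pi5 from Table 1

def parallel_time(pi, N):
    """Span of Pi over 1 <= j0,j1,j2 <= N, divided by min Pi.D, plus 1."""
    vals = [sum(p * j for p, j in zip(pi, idx))
            for idx in product(range(1, N + 1), repeat=3)]
    min_PiD = min(sum(p * x for p, x in zip(pi, d)) for d in D)
    return (max(vals) - min(vals)) / min_PiD + 1
```

For Π5 = [2 0 -1] this evaluates to (3N-1)/2, the value quoted above.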
The sequential processing time ts for Example 2 is simply the product of the end points of
the loops: ts = N·N·N = N³. The processing improvement, or speedup factor Sp, is defined
as the sequential processing time ts divided by the parallel time tΠ:
Sp = ts/tΠ (8)
For Example 2, using Π5, the speedup factor is Sp = N³/((3N-1)/2) = 2N³/(3N-1).
Once an acceptable transformed dependence vector or matrix is found such that the
temporal and spatial parameters are optimized according to the parallel designer’s
specifications, the configurable hardware architecture must be composed. For the purposes
of this paper, a systolic array of distributed PEs will be assumed. Figure 3 depicts a
systolic array. The figure also shows a possible interconnect structure.
The interconnect network can be described by simple vectors. For instance, let P be a
2-dimensional vector: P = (pi, pj). Then a set of Pk(i,j) vectors can be specified that
describes the interconnect shown.
For the array of Figure 3, arrows connecting PE nodes N(j,i-1) and N(j,i) may be
described by p2. Similarly, arrows connecting nodes N(i,j-1) and N(i,j) are described by p1.
Note that although the systolic array is shown 2-dimensionally, the interconnect dimension
is not implicitly constrained. Any n-dimensional interconnect network can be formed from
P(p1, p2, p3...pk). Large interconnect array structures are much more difficult to visualize
than a regular array structure as shown in Figure 3.
Temporal parameter A for a selected T will specify the sequence in which each processing
element must be given algorithm A parameters to optimize temporal constraints. Spatial
parameter S is used to configure the systolic array with the optimal interconnect to reduce
the amount of system resources (area) required for the processing task.
The question now arises: how should the space transformation S be selected such that the
transformed algorithm fits into a systolic array? This may be done by constraining the
modified data dependencies to match the systolic array interconnections. Figure 3 suggests
the implicit equation:
P = SD (9)
The normal task faced by the designer is that D is derived from the algorithm. P is real
hardware that normally has a programmable interconnect structure with a finite
connectivity limit. The problem therefore is to find S. The actual interconnect may differ
from the configuration parameters of P. Thus, equation (9) is modified to
SD = PK (10)
and K is defined as the utilization matrix used to scale P to the correct dimension.
0 ≤ kji (11)
1 ≤ Σj kji ≤ Π di (12)
From Example 2 and Table 1, recall that Π2 = (1 0 -1) was found to be a valid
optimization vector in T. The transformed dependence vector is Π2D = [1 2 3 2].
Next, a computer is used to generate all permutations of the K matrix and find a T whose
Π parameter matches Π2. T is then constructed using equation (5). Finally, the transformed
dependence matrix is found using equation (3). The results are summarized for brevity.
The last two rows of T are S. It should be clear that S and P are now identical, since the
utilization matrix K was introduced. Thus, S specifies the correct interconnect matrix
shown in Figure 3. The shaded nodes represent the systolic elements needed both to optimize
the area resources and to increase processing performance. With the given information, a
table could be constructed permuting each original index set J3 = (j0 j1 j2).
Using equation (2), each of the Ĵ3 elements may be computed. The first element of
Ĵ3, ĵ0, denotes the time at which node N in the systolic array is active. ĵ1 and ĵ2 specify the
2-dimensional index into the systolic array identifying the actual node. For example, if the
original algorithm index set is J3 = (j0 j1 j2) = (1 1 1), the transformed index set Ĵ3 is:
ĵ0 = 0, ĵ1 = 3, and ĵ2 = 1. So at time 0, node (3, 1) is active. Other nodes may also be
active at time 0 and may be found from a permuted table or from equation (2) directly.
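The mapping from an original index point to a (time, node) assignment can be sketched as follows. Π2 is taken from Table 1, but the spatial rows S below are hypothetical (the paper summarizes its actual S for brevity), chosen only so that (1,1,1) maps to the (0, 3, 1) example quoted above:

```python
# Sketch: apply T = [Pi; S] to original index points to obtain a
# (time, node_row, node_col) assignment for the systolic array.
T = [(1, 0, -1),   # Pi2: temporal row, from Table 1
     (1, 1, 1),    # ASSUMED spatial row s1 (illustrative only)
     (0, 0, 1)]    # ASSUMED spatial row s2 (illustrative only)

def transform(j):
    """Matrix-vector product T.j giving the transformed index set."""
    return tuple(sum(t * x for t, x in zip(row, j)) for row in T)

time, node_row, node_col = transform((1, 1, 1))
# time 0, node (3, 1) active, matching the worked example in the text
```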
CONCLUSION
Proper algorithm partitioning and assignment to PEs can achieve significant performance
improvements. Example 2 demonstrated that, for a loop bound of N = 10, the transformed
algorithm produced a speedup of approximately 68 over a sequential implementation.
REFERENCES
[1] Moldovan, Dan I., “Parallel Processing: From Applications to Systems,” Morgan
Kaufmann Publishers, San Mateo, CA, 1993.
ANALYSIS OF THE EFFECTS OF SAMPLING SAMPLED DATA
ABSTRACT
The traditional use of active RC-type filters as anti-aliasing filters in Pulse Code
Modulation (PCM) systems is being replaced by the use of Digital Signal Processing
(DSP) filters, especially when performance requirements are tight and when operation over
a wide environmental temperature range is required. In order to keep systems more
flexible, it is often desired to let the DSP filters run asynchronous to the PCM sample
clock. This results in the PCM output signal being a sampling of the output of the DSP,
which is itself a sampling of the input signal. In the analysis of the PCM data, the signal
will have a periodic repeat of a previous sample, or a missing sample, depending on the
relative sampling rates of the DSP and the PCM. This paper analyzes what effects can be
expected in the analysis of the PCM data when these anomalies are present. Results are
presented which allow the telemetry engineer to make an effective value judgment based
on the type of filtering technology to be employed and on the desired system performance.
KEY WORDS
Digital Signal Processing (DSP), Digital Filters, Sampled Data, Data Sampling, Sampling.
INTRODUCTION
In a typical telemetering system, many transducers are signal conditioned to bring their
signal levels up to some standard level, then large numbers of these signals are multiplexed
into a single channel and encoded with an analog to digital encoder. In order for the
encoded signal to not have aliased signals introduced by the sampling technique, there
must not be any signal energy of sufficient level to be detected above one half the sampling
frequency. If the system noise is low enough and the transducer has an inherent mechanical
or electrical bandwidth that is below this point, then no further signal processing is
necessary. However, most signals have more bandwidth available than the desired half
sample rate, or they have out of band noise of a sufficient level to cause a problem.
In order to keep these signals from causing aliasing in the output, some type of additional
filtering is required on each channel. Traditionally this has been an analog six pole filter.
This gives 72 dB of attenuation at four times the 3 dB cutoff frequency, which is sufficient
for a 12 bit system. This filter then requires that sampling be done at eight times the filter
cutoff frequency to guarantee that no aliasing will occur. If the assumption is made that the
out-of-band energy at ½ the sample rate is sufficiently below the maximum signal level, then
the full 72 dB of attenuation is not needed. It is then reasonable to sample at a slower rate,
such as five times the cutoff frequency. This last assumption has been found to be true in
many cases.
A newer approach to the problem has been to sample at a much higher rate than necessary
for aliasing, using only a simple two-pole filter to prevent aliasing. This is followed by
a DSP processor that digitally filters the signal to bring the bandwidth down to the desired
limits. The DSP has the added advantage of being able to inexpensively provide a much
higher order filter, such that sampling at just over the Nyquist rate of two times the highest
signal frequency is approached. This allows a reduction in total signal bandwidth or an
increase in the number of signals that can be monitored in a fixed bandwidth.
In order for the DSP filter to work best, it must be slaved to the system sample rate, so that
all high speed sampling before the DSP filter is at some multiple of the system sample rate.
One way to accomplish this is to limit the system sampling rate to a few values, thus
defeating the usefulness of the bandwidth reduction. A second method is to flag each
sample as to whether it is a new point or a repeat of a previously read point. This requires
the system to sample at a slightly higher rate than the DSP and for the ground station
computer to process the flagged data by deleting the redundant data. A third method is to
generate custom filter tables for each channel on the aircraft. This requires the logistics of
generating each filter table (typically 500 to 1000 coefficients per filter), keeping track of
each channel, and all updates.
Because of these problems, some designs are being done without synchronizing the system
sample rate to the filter sample rate. This introduces additional system errors, which is the
topic of this paper.
SYSTEM DESCRIPTION
The system being evaluated consists of one sample rate for the final filter stage of the
DSP, and a slightly higher sample rate for the DSP output to the system. From an analog
standpoint, this would be similar to using an analog filter followed by a sample and hold
amplifier sampling at a slightly lower and asynchronous rate to the system sample rate.
TEST METHOD
In order to study the problem and to quantify its magnitude, I first
generated correctly sampled sine waves at a fixed rate, then periodically doubled some
samples at a fixed repetition rate. The output of this process was then run through an FFT
to get a plot of the new signal and any new spurious signals present. This was repeated for
various initial sample rates and final sample rates to try and get some idea of how much of
a problem exists. All tests were done on a one Hertz signal in order to reduce the number
of variables. For any other frequency, linear scaling can be used.
The FFT used was windowed with a “minimum four term cosine” window, which gave the
best frequency resolution of the windows available to me.[1]
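The test method above can be simulated in a few lines. The rates Fd = 20 Hz and Fo = 21 Hz and the 8-second record below are illustrative assumptions, not the paper's values, and a rectangular (unwindowed) DFT is used for simplicity:

```python
import cmath
import math

# Sketch of the test method: a 1 Hz sine correctly sampled at the DSP rate
# Fd, then re-read at a slightly higher output rate Fo so that one sample
# per second is repeated; a DFT then reveals the spurious terms.
Fi, Fd, Fo, seconds = 1.0, 20, 21, 8
N = Fo * seconds                                 # output samples analyzed

dsp = [math.sin(2 * math.pi * Fi * m / Fd) for m in range(Fd * seconds)]
out = [dsp[(n * Fd) // Fo] for n in range(N)]    # repeats one sample per second

def dft_mag(x, f_hz, fs):
    """Magnitude of the DFT of x at frequency f_hz (fs = sample rate)."""
    k = f_hz * len(x) / fs                       # bin index
    return abs(sum(v * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n, v in enumerate(x)))

# Error terms are predicted at Fe = (Fo - Fd) +/- Fi = 0 and 2 Hz:
assert dft_mag(out, 1.0, Fo) > 50                    # fundamental survives
assert dft_mag(out, 2.0, Fo) > 100 * dft_mag(out, 2.5, Fo)  # spur at Fe
```

The spur at 2 Hz stands far above the empty 2.5 Hz bin, consistent with the Fe = (Fo − Fd) ± Fi relation reported in the Results section.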
RESULTS
The data is summarized in Table 1. By observing the location of the error terms, it is
found that the error terms (Fe) appear at frequencies that are related to the input signal
(Fi), the DSP sample rate (Fd), and the output sample rate (Fo) as follows: Fe = (Fo - Fd)
+/- Fi . This gives error signals that can be both above and below the input signal
frequency. In fact, by careful alignment of the three terms in the equation, a DC term is
generated. The second major observation is that the signal levels are not insignificant,
ranging up to only 8 dB below the input signal. In fact, in the case of an input signal close
to the cutoff frequency, the error signal level is the highest, defeating one of the purposes
of using DSP filters, that of getting very sharp cutoff filters to reduce bandwidth of the
sampled output. The only two general observations on the spurious signal amplitudes are
that the closer in frequency a spurious signal is to the input signal, the higher its amplitude,
and that the fewer the spurious signals, the higher their amplitude.
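The error-frequency relation can be evaluated directly; the rates in the example below are assumptions chosen to show the DC-term alignment mentioned above:

```python
# Sketch: predicted locations of the spurious terms, Fe = (Fo - Fd) +/- Fi.
def error_freqs(Fi, Fd, Fo):
    """Return the pair of spur frequencies for input Fi, DSP rate Fd, output rate Fo."""
    beat = Fo - Fd
    return (abs(beat - Fi), beat + Fi)

# A DSP rate of 100 Hz read out at 101 Hz with a 1 Hz input puts spurs at
# DC and 2 Hz -- the "careful alignment" that generates a DC term:
assert error_freqs(1.0, 100.0, 101.0) == (0.0, 2.0)
```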
To analyze the problem mathematically, first a model of the problem must be obtained.
Defining the terms as above, the sample-and-hold function is A·sin(2π·n·Fi/Fd), where
“A” is the input signal peak amplitude, “n” is the DSP sample number (0 ≤ n ≤ N-1), and
N is the total number of samples. This is multiplied by the delta (impulse) function
δ(t - n/Fo), which samples the first function at the higher Fo rate. A Fourier transform can
be done on this signal to get to the frequency domain. The final equation for the frequency
domain is:
X(f) = Σn=0..N-1 {A·sin(2π·n·Fi/Fd)·δ(t - n/Fo)·exp(-j·2π·f·n)}
Using the above equation, a set of sample data was operated on and analyzed using the
program “Matlab”. The same results were obtained as using the FFT function of the
“DSPworks” program.[2,pp10-30;3,pp48-51]
TABLE 1
SUMMARY OF TEST RESULTS
The limited tests that were run to check the usefulness of non-synchronized DSP filters in
telemetry systems indicate that there can be serious consequences of using this practice. If
the signals being analyzed are to be analyzed for frequency content, there will be a large
blurring of the spectrum due to the inter-modulation of the input signal with the two sample
rates. Unless the input signal is known exactly, the removal of the output error signals would
prove to be extremely difficult, if not impossible. For this technique of non-synchronous
sampling to be of value, the frequency content of the output must not be of primary concern,
and the noise generated by the error signals must be of a level lower than the signals of
interest.
ACKNOWLEDGMENTS
I wish to thank my paper advisor, Dr. Nihat M. Bilgutay, for his guidance, and Mr. Fred
Mangino for coming up with the paper idea.
REFERENCES
[1] DSP Works, Version 3.0 (program and instructions), Costa Mesa, CA: Momentum Data
Systems, 1995.
[3] Owen, Frank F. E., PCM and Digital Transmission Systems, McGraw-Hill Book
Company, 1982.
DESIGN OF MULTI-PLATFORM CONTROL SOFTWARE FOR
TELEMETRY SYSTEMS
FARID MAHINI
Microdyne Corporation
491 Oak Road
Ocala, Florida 34472
ABSTRACT
This paper discusses the requirements and design of a multi-platform system software for
control, status, calibration and testing of a telemetry system.
KEYWORDS
INTRODUCTION
Today’s advanced telemetry systems are partially, if not completely, remote controllable.
Some systems are made up of distributed telemetry sites that are operated from remote
command centers. Other systems may require fault isolation and corrective measures. Yet
another system may be required to run on more than one platform or to interface with other
platforms. Such diversity in telemetry systems presents a difficult challenge for designing
control software that can be used for various systems. Microdyne Corporation has
designed SysApps, System Application Software, a multi-platform telemetry system
control software that can be deployed on a cost-effective mix of existing and new
platforms. The following section presents the design of a multi-platform control software
that can be adapted to accommodate many systems.
DESIGN CONSIDERATIONS
Control software that is capable of adapting to various systems must address the following
requirements:
• Modularity
• Multi-platform
• Distributed Remote Access
• Remote Transfer Rate
Every system is comprised of various pieces of equipment, each of which requires its own
specific control interface. Non-modular control software tends to be system dependent and
does not lend itself to changes in system configuration. In contrast, modular software is
made of an array of self-contained modules. Each module performs all necessary tasks
related to a specific device. Therefore, the appropriate module can be installed to
accommodate any device change in the system configuration.
Off-the-shelf PC-based computers are used to control small telemetry systems. However,
larger systems employ RISC-based UNIX workstations to command and control their
system. Historically, control software designers were forced to design two separate
software programs, one for PC-based computers and another for UNIX computers, which
led to long development schedules and a larger financial burden. In superior designs, cross-platform
development tools are used to produce complete application portability across
operating systems and graphical user interfaces (GUIs).
Historically, the failure or success of remote control software has hinged on the ability to
provide control and status at a rate consistent with equipment performance and end-user
requirements. In an attempt to provide some level of remote control, commercial software
has taken the approach of extending the application screen from the local site to the remote
site. This approach limits the speed of the application and is not adequate for
auto-tracking antenna control, in which updates in the millisecond range are required.
To increase the update rate, the SysApps application runs identical software on both the local
and remote platforms. This approach allows the remote computer to update the application
graphics itself and limits the communications to updating character information located inside of
text boxes within the graphical display. This minimal information is converted to ASCII
characters which are then transmitted to the equipment location via microwave links, fiber
optics, land lines or satellite links.
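The update scheme described above can be sketched as a simple delta encoder. The field names and the "name=value" record format below are hypothetical, for illustration only:

```python
# Sketch of a SysApps-style minimal update: rather than shipping screen
# graphics, only the changed text-box values are encoded as short ASCII
# records for the low-bandwidth link.
def delta_updates(previous, current):
    """Return ASCII records for fields whose displayed text changed."""
    return [f"{name}={current[name]}"
            for name in current
            if previous.get(name) != current[name]]

prev = {"RF_FREQ": "2250.5", "IF_BW": "1000", "AGC": "-57"}
curr = {"RF_FREQ": "2250.5", "IF_BW": "1500", "AGC": "-55"}
records = delta_updates(prev, curr)
# Only IF_BW and AGC cross the link; RF_FREQ is unchanged and not sent.
```

Sending only the changed records keeps the traffic well within a low serial baud rate, which is the design goal stated above.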
The dominant limiting factor in a remote control software system is usually the baud rate
at which the application transfers information from the local site to the remote site. This
factor challenges designers to transfer a minimum amount of information with the
maximum effect.
Remote control and status is a major consideration in many industries, and especially in the
telemetry industry as demonstrated by the ongoing RSA (Range Standardization and
Automation) program. Many telemetry systems have a software control and status system
in place and many more systems will be programmed to include software control and
status systems. In an attempt to provide an overall solution SysApps has been designed
and structured to accommodate:
• Any size system from a single receiver with a cupped dipole antenna to a major
tracking site configured with 30 plus telemetry receivers and multiple tracking stations.
• Configuration Management in the form of the system configuration control files and
mission logs. This electronic mission log automatically logs AOS (Acquisition of
Signal), LOS (Loss of Signal) and any real time system configuration change such as
receiver frequency.
DESCRIPTION
The Interface Controller interfaces with National Instruments IEEE-488 card and RS-232/RS-
422 serial ports. The Interface Controller module is the only interface to system equipment. This
module is the link between SysApps and other platform-dependent device drivers.
The Distributed Application Services (DAS) Controller is utilized when the telemetry site and
control center(s) are at different locations. The SysApps software must be installed at both
the telemetry site and the control center in order for the two to communicate with each other via the DAS
Controller. Use of platform-independent abstracts provides the means for two SysApps running
on different platforms to communicate with each other.
In many telemetry sites, configuration files are handed down from the command and control
center prior to any mission (such as TMATS format, chapter 9 of IRIG). These mission
configuration files contain programming parameters for various test, telemetry and tracking
instruments within the system. The instrument parameters can be RF frequency, IF bandwidth,
data bandwidth for a telemetry receiver, position and acquisition mode of a tracking antenna, data
format and data rate of a bit synchronizer, signal routing through a switch matrix, etc. The Test
Port Controller is responsible for parsing the mission configuration files (or calibration files)
and extracting the programming parameters, which are then routed to the Task
Processor/Arbitrator for distribution.
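The parsing step can be sketched as follows. The file syntax below is hypothetical; actual mission files follow formats such as TMATS (IRIG 106 Chapter 9), which is far richer than this:

```python
# Sketch of the Test Port Controller's parsing step: a mission configuration
# file is reduced to per-instrument parameter sets for distribution.
def parse_mission_file(text):
    """Group 'instrument.parameter: value' lines by instrument."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                              # skip blanks and comments
        key, value = line.split(":", 1)
        instrument, parameter = key.strip().split(".", 1)
        params.setdefault(instrument, {})[parameter] = value.strip()
    return params

mission = """
# mission setup (hypothetical syntax)
receiver1.rf_frequency: 2250.5
receiver1.if_bandwidth: 1000
bitsync1.data_rate: 256000
"""
cfg = parse_mission_file(mission)
```

Each per-instrument dictionary would then be handed to the Task Processor/Arbitrator for routing to the corresponding device module.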
The Task Processor/Arbitrator is responsible for maintaining all dynamic data exchange (DDE)
links with device modules, Interface Controller and DAS controller. Any communication with the
Interface Controller module is passed through the Task Processor/Arbitrator module. In the case
of remote access where distributed services is enabled, the Task Arbitrator diverts the Interface
Controller data link to DAS controller. In addition, the Task Processor handles all system
configuration, system setup, loading / saving log files and mission files, AGC logs and system
device inter-dependencies (master/slave).
SUMMARY
Telemetry control software must be able to evolve with ever-changing telemetry system
designs and increasingly demanding requirements. The number of distributed telemetry
systems has grown, and system designers need a software package that will meet their total
hands-off remote control requirements now and in the future. Multi-platform software design
eliminates many incompatibilities during system design and provides an additional step
towards achieving standardization in remote control software design.
Figure 1
DIGITALLY RECORDED DATA REDUCTION ON A PC USING
CAPS 2.0
ABSTRACT
The Common Airborne Processing System (CAPS) provides a general purpose data
reduction capability for digitally recorded data on a PC. PCM or MIL−STD−1553 data can
be imported from a variety of sources into the CAPS standard file format. Parameter
dictionaries describing raw data structures and output product descriptions describing the
desired outputs can be created and edited from within CAPS. All of this functionality is
performed on a personal computer within the framework of the graphical user interface
provided by Microsoft Windows. CAPS has become the standard for digitally recorded
data reduction on a PC at Eglin AFB and many other sites worldwide. New features, such
as real-time inputs and graphical outputs, are being added to CAPS to make it an even
more productive data reduction tool.
KEY WORDS
INTRODUCTION
Traditionally, telemetry data reduction and analysis has been accomplished using a tightly
coupled collection of hardware and software components. The process of digitizing,
reducing, and analyzing data on large computer systems was often time consuming and
always costly. Replacing telemetry systems led to extensive data reduction software
rewrites or required new software development. Changes in data analysis processes again
rewrites or required new software development. Changes in data analysis processes again
necessitated software modifications. The advent of digital recorder technology has
eliminated errors induced during the transfer of information between the analog and digital
domains, but has done little to control software development costs. CAPS provides a
standardized, flexible, PC-based tool for meeting evolving data reduction and analysis
requirements. CAPS can thereby control software development costs while simultaneously
reducing analysis time.
CAPS was developed by the Air Force’s 96th Communications Group at Eglin AFB,
Florida for the Navy’s Airborne Instrumentation System (AIS). The AIS Test
Instrumentation Pod (TIP) is an AIM-9 type pod designed to support Operational Test and
Evaluation missions using operationally configured tactical aircraft. The pod is designed for
MIL−STD−1553, Global Positioning System (GPS), and inertial sensor measurement,
acquisition and recording. Following a test mission, the TIP data can be immediately
downloaded from the pod to a portable PC and reduced using CAPS. After satisfying the
AIS requirements, CAPS was extended to accept additional data formats and added
capabilities, making it a powerful generic data reduction tool.
DESIGN
CAPS uses telemetry data descriptions and processing to perform extraction and
engineering unit (EU) conversion of data parameters. CAPS also supports specialized
post-EU conversion processing by chaining follow-on data processing programs to
individual output files. Output results can be obtained in several commercial off-the-shelf
(COTS) formats as well as virtually any user-defined ASCII or binary format. All user
interaction with CAPS is accomplished through the Microsoft Windows graphical user
interface (GUI). In order to use CAPS, three key ingredients are required:
• Instrumentation Data — the raw data from the user’s instrumentation pod, data
monitor card, or real-time device.
• Data Description — a description of the location and type of individual
instrumentation parameters within the data, usually called an Interface Control
Document (ICD).
• Description of Desired Results — the user’s idea of what the results should look
like, or the format that a follow-on program may require.
These three ingredients are combined and manipulated within CAPS to produce flexible
output products as shown in Figure 1. A master data description could be used many times
on data collections of the same type of instrumentation data. Or, the same type of output
products may be desired on several data collections. Conversely, multiple output products
may be desired from the same set of instrumentation data. All of these scenarios are
supported by CAPS.
The primary function of CAPS is to provide a general purpose data reduction capability for
the extraction and engineering unit conversion of either IRIG PCM Class I / II or MIL–
STD–1553 message data. CAPS currently imports data from the file formats listed in
Table 1. This list of supported file types can be quickly and easily expanded to satisfy user
requirements.
Figure 1 - CAPS Block Diagram
DataProbe STD files have an internal structure similar to CAPS standard files and can be
interchanged between these two software applications. Therefore, CAPS can accept data
directly from any software that currently produces STD files, such as the Standardized
MARS-II Analysis and Reduction Tool (SMART).
CAPS supports a variety of data formats, including bit-, byte-, and word-swapped data.
Parameters can be as long as 80 bits (stored internally as 64-bit double precision) and
concatenated from two separate word/bit positions in the raw data file. The specific data
types that CAPS currently supports include: Unsigned Integer, 2's Complement Integer,
Signed Magnitude Integer, VAX Floating Point Numbers, ASCII Character, Inverted 2's
Complement Integer, IEEE Floating Point Number, MIL–STD–1750A Floating Point
Number, AAMP Floating Point Number, and Binary Coded Decimal.
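One of the extraction steps implied above, pulling a signed field out of a raw data word, can be sketched as follows. The word and bit numbering conventions are our assumptions, not a description of CAPS internals:

```python
# Sketch: extract an n-bit field from a raw data word and interpret it as a
# 2's complement integer, one of the data types listed above.
def extract_twos_complement(word, start_bit, width):
    """Extract `width` bits beginning at `start_bit` (LSB = bit 0)."""
    raw = (word >> start_bit) & ((1 << width) - 1)
    if raw & (1 << (width - 1)):          # sign bit set: value is negative
        raw -= 1 << width
    return raw

# A 12-bit field holding 0xFFF is -1; 0x800 is the most negative value:
assert extract_twos_complement(0xFFF, 0, 12) == -1
assert extract_twos_complement(0x800, 0, 12) == -2048
assert extract_twos_complement(0x7FF, 0, 12) == 2047
```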
Each parameter can have an unlimited number of dependent parameters with a logical
“OR” performed on the test conditions. Dependent parameters can, in turn, have
dependencies in order to support a logical “AND” test.
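The OR-across-dependents, AND-via-chaining logic described above can be sketched with a small recursive check. The data structures are hypothetical, for illustration only:

```python
# Sketch of the dependency logic: a parameter's direct dependents are OR'ed
# together, and each dependent's own chain must also hold (AND behaviour).
def satisfied(dependents, sample):
    """OR across this level's tests; a dependent's chain is AND'ed in."""
    if not dependents:
        return True
    return any(test(sample) and satisfied(chain, sample)
               for test, chain in dependents)

# Extract a parameter only if (mode == 1 AND gate == 0) OR status == 7:
deps = [(lambda s: s["mode"] == 1, [(lambda s: s["gate"] == 0, [])]),
        (lambda s: s["status"] == 7, [])]

assert satisfied(deps, {"mode": 1, "gate": 0, "status": 0}) is True
assert satisfied(deps, {"mode": 1, "gate": 1, "status": 0}) is False
assert satisfied(deps, {"mode": 0, "gate": 0, "status": 7}) is True
```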
CAPS can produce virtually any output file format with user-defined ASCII and binary
Output Product Descriptions (OPDs). Several pre-defined output formats are also available
for ease of use with commercial off-the-shelf (COTS) tools, including ASCII, ASCII
Comma Separated Values (CSV), ASCII with Column Titles, Binary, Lotus (.wks), dBase
(.dbf), and MATLAB (.mat).
During CAPS import of raw data files, time words are automatically modified to a 48-hour
format in order to avoid a false “time backup” at midnight rollover. Time can be formatted
for ASCII output in several pre-defined formats such as 24hr HH:MM:SS.sss, 24hr
HH:MM:SS.sssss, 24hr Integer Milliseconds, 48hr HH:MM:SS.sss, 48hr
HH:MM:SS.sssss, or 48hr Integer Milliseconds. For binary output files, time can be output
as 24hr Floating Seconds, 24hr Integer Milliseconds, 48hr Floating Seconds, or 48hr
Integer Milliseconds.
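The 48-hour adjustment can be sketched as follows. Times are modeled here as seconds-of-day floats; the real CAPS time-word layout is not assumed:

```python
# Sketch: once an imported time steps backward past midnight, 24 hours are
# added to all subsequent times so the record stays monotonic and no false
# "time backup" appears at the rollover.
def to_48_hour(times_sec):
    adjusted, offset, prev = [], 0.0, None
    for t in times_sec:
        if prev is not None and t < prev:   # midnight rollover detected
            offset = 86400.0                # shift into the 24-48 hr range
        adjusted.append(t + offset)
        prev = t
    return adjusted

# 23:59:59, midnight, 00:00:01 -> monotonically increasing 48-hr times:
out = to_48_hour([86399.0, 0.0, 1.0])
```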
OPERATION
The primary user interface of CAPS is the Session window. A Session usually corresponds
to a specific data collection effort that is performed on a specific date. In order to produce
engineering unit outputs, a Session must define a raw data file, a dictionary, and one or
more output product descriptions (OPDs). The mission number, project number, mission
date and comments are recorded on the main Session window to help identify the Session
for future use. CAPS Sessions may be saved to a file and recalled at a later time. A sample
Session window is shown in Figure 2.
Figure 2 - Session Window
CAPS Sessions require raw data files to be in the CAPS standard format. CAPS provides a
facility to import data from a variety of other formats. Several options are available during
import such as specifying start/stop times, adding/subtracting a time bias and prompting
the user when time “jumps” or “backups” are detected.
A data file must have a corresponding parameter dictionary to describe the format of the
raw data. The dictionary is a low-level description of individual parameters within the
telemetry data. The dictionary controls the extraction and engineering unit conversion of
data. Dictionary editing is performed using the dictionary window as shown in Figure 3.
Output product descriptions (OPDs) describe the format or layout of CAPS outputs. The
format of the overall file output as well as individual variables is detailed in the OPD. An
OPD provides a description of the contents of individual records that make up a single
ASCII or binary result file. A raw data file can have several corresponding OPDs
producing multiple output files. OPDs are created and edited using the OPD Windows
shown in Figures 4 and 5.
Figure 3 - Dictionary Window
CAPS provides all of this functionality using a professional Windows 3.x graphical user
interface (GUI) as shown in Figure 6. The user provides input through standard dialogs
and windows and the menubar or toolbar. A statusbar and full on-line help is provided to
assist the user during operation. CAPS is a multiple document interface (MDI) Windows
application, allowing multiple windows to be open at the same time. This allows the user to
quickly “cut and paste” parameters from one dictionary to another or into an Output
Product Description.
FUTURE DEVELOPMENT
Portability and reusability issues continue to influence the design of CAPS. CAPS is
currently a 16-bit Windows 3.x application. Although the current version will run under
Windows NT/95 as a 16-bit application, a 32-bit version is being developed to take
advantage of features such as multi-threading, Object Linking and Embedding (OLE) 2.0,
and long filenames.
Additional usability features are also being designed, such as allowing multiple data
segment parameters and chaining algorithms together to provide derived parameters and
advanced mathematical calculations. A dictionary import facility to convert standard ICD
databases such as TMATS into CAPS dictionary format is a powerful tool envisioned for
CAPS. Additional data import modules are continually being added to the expanding list of
compatible CAPS data types.
Figure 6 - CAPS Main Window
The boldest feature currently being added to CAPS is the capability to accept real-time
inputs. This will allow CAPS to establish TCP/IP sockets and accept data from any device
on a network. Coupled with a graphical output capability, this feature will allow real-time
data monitoring using CAPS.
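The real-time input path described above reduces to accepting a TCP connection and streaming received frames to a handler. A minimal sketch (frame size, function names and callback structure are assumptions; the paper does not specify the planned CAPS protocol):

```python
import socket

def read_frames(conn, handle_frame, frame_size=1024):
    """Pass each received chunk (up to frame_size bytes) from a
    connected TCP socket to handle_frame until the peer closes."""
    while True:
        frame = conn.recv(frame_size)
        if not frame:
            break
        handle_frame(frame)

def serve(port, handle_frame):
    """Accept one TCP connection and stream its data to handle_frame."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("0.0.0.0", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            read_frames(conn, handle_frame)
```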
The next major revision of CAPS (Version 2.0) is currently under development. This
release will be a full 32-bit Microsoft Foundation Class (MFC) application, optimized for
Windows NT/95, and contain many of the features discussed above.
CONCLUSION
CAPS has succeeded in its goal to provide an easy-to-use generic data reduction tool
available on a low-cost platform. It serves as a bridge from digital recording to data
analysis using a common graphical user interface. As telemetry processes are re-
engineered, the reusability of software such as CAPS will become invaluable. Continued
support and development will enhance its capabilities, making CAPS useful well into the
future. A demonstration version of the software is available to anyone who desires it. To
obtain a demonstration or full version of CAPS, contact Mr. Neal Urquhart at (DSN) 872-
8470 or (904) 882-8470.
REFERENCES
Rarick, Michael J. and Lawrence, Ben, Common Airborne Processing System (CAPS)
User's Guide, Department of the Air Force, Eglin AFB, Florida, 1994.
Swan, Tom, with Arnson, Robert and Cantù, Marco, ObjectWindows 2.0 Programming,
Borland Press, New York, New York, 1994.
Desktop GPS Analyst
Standardized GPS Data Processing and Analysis
on a Personal Computer
ABSTRACT
In the last few years there has been a proliferation of GPS receivers and receiver
manufacturers. At the same time, a growing number of DoD test programs require
high-accuracy Time-Space-Position-Information (TSPI) despite diminishing test
support funds, or need a wide area, low altitude or surface tracking
capability. The Air Force Development Test Center (AFDTC) recognized the
growing requirements for using GPS in test programs and the need for a low-cost,
portable TSPI processing capability, which sparked the development of the
Desktop GPS Analyst. The Desktop GPS Analyst is a personal computer (PC)
based software application for the generation of GPS-based TSPI.
Keywords
DGA, PC, GPS, TSPI, BET, Windows, GUI, Visual Basic, C++.
INTRODUCTION
Precision accuracy requirements, decreasing test support funds, and the need for
wide area, low altitude or surface tracking of test items have all contributed to the
migration to Global Positioning System (GPS) instrumentation as the primary TSPI sensor
source. Today, single instrument GPS is capable of providing TSPI accuracy and
tracking coverage commensurate with multiple instrument best estimate of
trajectory solutions. The Air Force Development Test Center, Eglin AFB, 96
CG/SCW, in conjunction with an Office of the Secretary of Defense funded Airborne
Instrumentation System (AIS) developed by NAWC-WD China Lake, and a
Central Test and Evaluation Improvement Program, Test Technology Development and
Demonstration (TTD&D) effort, has supported the development of a universal
GPS inertial aiding and differential correction program. Requirements for the
program demanded a tool that can be used by an operational tester without
extensive TSPI experience.
OVERVIEW
The Desktop GPS Analyst (DGA) software guides the user through a series of
software program steps that merge airborne and reference receiver GPS data for
the generation of TSPI data products. Inertially aided absolute TSPI solutions and
unaided or aided differentially corrected TSPI solutions can be obtained with this
software. The software allows the user to select receiver type to account for
differences in receivers due to manufacturer and/or measurement availability. This
approach permits creation of a standardized set of GPS measurement data that
can be processed using a state-of-the-art Square Root Information
Filter/Smoother (SRIF/S) as a Best Estimate of Trajectory (BET) algorithm. The
resulting TSPI solution can be exported to either custom user software or
commercial-off-the-shelf (COTS) software in either ASCII or binary formats.
The DGA graphical user interface was written using Visual Basic to allow rapid
prototyping of the overall approach. C++ was used to develop the support libraries
for handling file input/output, calculations and other functions where performance
or portability were issues. PC-ATDOP, which contains the SRIF/S, was developed
in FORTRAN and is executed on several platforms besides the PC, including
Digital AXP and Cray computers.
GUIDED PROCESS
The initial task for the user is to extract from the raw data files the data to
be used to develop the TSPI solution. This function is performed by a
separate software application, CAPS. Clicking the CAPS icon causes DGA
to automatically create the Session (SES) and Output Product
Description (OPD) files necessary to perform all raw data extraction and engineering
unit conversions. In addition, it provides the user the path to the Data Dictionary
(DIC), which contains the bit-level descriptions of all airborne and reference
receiver variables for DGA-supported receiver types. Once these files are
constructed, DGA invokes the CAPS application for the user. Within CAPS the user
must first convert the raw data file to a CAPS STD file, which standardizes the
receiver message structure for the engineering unit conversion
functions, and then invoke the Session file to perform the actual engineering
unit conversions.
The user repeats the above process for both the airborne and reference receiver
data files before proceeding to the next processing step. The measurements
extracted during this step include airborne receiver navigation, pseudoranges and
optionally inertial data, as well as reference receiver differential corrections,
ephemeris and satellite position data. Together CAPS and DGA provide a
standardized method for extracting GPS data, and allow the expansion of DGA to
support new receiver types with minimal software development costs.
In addition to the setup dialog, PC-ATDOP was provided with sufficient robustness
to allow it to be used in the field by persons having minimal training in the
operation of this type of software.
With the generation of the TSPI answer file or files in the user-requested format
for use with other custom or COTS software, the Guided Process is complete. If
the need arises to re-execute any steps in the sequence, the option exists to reset
the guided sequence to the step where reprocessing is to begin and rerun the
affected steps. Also, CAPS and PC-ATDOP can be executed either by selecting
the appropriate program icon from the main DGA toolbar or entirely outside DGA.
CONCLUSION
The Desktop GPS Analyst is written for any 486 PC running PC-DOS or MS-DOS with
Windows 3.1, Windows 95 or Windows NT and having sufficient free disk space. It
uses standard Windows controls to provide GUI behavior familiar to all PC users.
The intended audience ranges from novice users to TSPI specialists. For more
information concerning DGA, contact Mr. Neal Urquhart at (DSN) 872-8470 or
(904) 882-8470.
REFERENCES
Burton, Ken and Urquhart, Neal, Mission Analysis and Reporting System (MARS)
Users Guide, 96th Communications Group, AFDTC, Eglin AFB, Florida, 1996.
PC-BASED S-BAND DOWN CONVERTER / FM TELEMETRY RECEIVERS
KEY WORDS:
Direct Digital Synthesis, Phase Locked Loop, Coaxial Resonator Oscillator, SAW
Oscillator, SAW Filter, Numerically Controlled Oscillator, PC-Based Down Converter,
PC-Based Receiver.
ABSTRACT:
This paper discusses the design and development of a PC-based S-band Down
Converter/FM Telemetry Receiver. With the advent of Direct Digital Synthesis (DDS)
and Phase Locked Loop (PLL) technology, and the availability of GaAs and silicon
MMICs, Coaxial Resonator Oscillators (CRO), SAW oscillators, SAW filters and
ceramic filters, a single-card PC-based down converter and telemetry receiver can
now be realised. The availability of DDS and PLL devices with microprocessor bus
compatibility opens up many applications in telemetry and telecommunications. The
design of the local oscillator, based on a hybrid DDS and PLL technique, the
coaxial resonator oscillator and the front end are discussed in detail.
DIRECT DIGITAL SYNTHESIS (DDS):
The DDS is an oscillator with an analog output whose fine frequency resolution is
realised by digital techniques. It basically consists of a Numerically Controlled
Oscillator (NCO), a Digital to Analog Converter (DAC), a low pass (anti-alias)
filter and a reference clock, as shown in fig(1). Its output frequency is selected
by changing the tuning word:
Fout = Frequency resolution × Digital tuning word

The frequency resolution is given by

Fresolution = Fclock ÷ 2^n

where n is the length of the phase accumulator in the NCO. The DDS is realised
using a STEL 1172B NCO with a 32-bit accumulator and a 40.0 MHz reference clock.
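Plugging the stated clock and accumulator length into the resolution formula gives a sub-hertz step size; a quick numerical check (the 4.5 MHz target frequency is only an example):

```python
# Fresolution = Fclock / 2**n, with Fclock = 40.0 MHz and n = 32
# as stated for the STEL 1172B NCO above.
f_clock = 40.0e6
n = 32
f_resolution = f_clock / 2**n
print(f_resolution)               # about 9.31 millihertz

# Tuning word needed for an (illustrative) 4.5 MHz output:
word = round(4.5e6 / f_resolution)
print(word, word * f_resolution)  # word and the frequency it produces
```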
The phase locked loop, as shown in fig(1), consists of a Voltage Controlled
Oscillator (VCO), a dual modulus divider, a Phase/Frequency Detector (PFD) and a
loop filter. The VCO output frequency is divided by a factor N and compared with a
stable reference (Fref). The error voltage derived from the PFD is applied to the
VCO, after passing through the loop filter, to stabilise the VCO frequency. In
locked mode, the output frequency is Fout = N × Fref, with a frequency resolution
of Fref. A Qualcomm Q3216 PLL chip with a 20-bit serial bus interface, along with
a UPB 584B prescaler chip, is used to realise the first local oscillator of the
down converter, fig(1).
HYBRID DDS/PLL MODE:
In this mode, the DDS output frequency is used as the reference to the PLL, as
shown in fig(1). The output frequency is varied by varying both N and the DDS
output frequency: Fout is the product of N and the DDS output frequency. The
advantages of this mode are given below.
* Both a high comparison frequency (~4.5 MHz) at the phase detector and a very
fine frequency resolution (less than a hertz) can be achieved at the same time.
* Because a higher reference frequency is used, less noise is injected inside the
loop bandwidth of the PLL.
* Because of the higher reference frequency, a wider loop bandwidth can be used,
which results in a faster settling time.
The limitation of the DDS is the presence of spurious outputs. Spurious outputs
near the carrier can be kept at least 75 dB below the carrier if the selected
clock is at least four times the DDS output frequency. Other spurious outputs can
be removed by using either a highly selective crystal filter or a ceramic filter
for narrow band applications. For continuous frequency coverage the required
bandwidth of the filter is

B.W(DDS) ≥ DDS centre frequency ÷ Nmin
The LO-2 of the S-band down converter and the LO-1 of the S-band FM Telemetry
Receiver are realised using this hybrid technique. Details of LO-2 are given below.
Tuning range: 470 to 570 MHz
Range of N: 104 to 127
DDS frequency: 4.5 MHz ± 45 kHz
Thus, by varying the DDS output frequency by ±45 kHz about its centre frequency
and the VCO frequency division ratio N from 104 to 127, the DDS/PLL output
frequency can be tuned from 470 to 570 MHz with a resolution of 0.15 Hz. The
value of N and the tuning word corresponding to the DDS frequency are programmed
into the PLL and NCO chips respectively.
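One plausible way to split a requested LO-2 output frequency into N and a DDS frequency under the constraints above (a sketch only; the paper does not spell out the exact partitioning scheme used in the hardware):

```python
# Hybrid DDS/PLL tuning for the LO-2 above: Fout = N * Fdds,
# with N in 104..127 and Fdds held within 4.5 MHz +/- 45 kHz.
def tune(f_out):
    n = round(f_out / 4.5e6)       # divider that keeps Fdds near centre
    n = min(max(n, 104), 127)      # clamp to the stated divider range
    f_dds = f_out / n
    assert 4.455e6 <= f_dds <= 4.545e6, "outside DDS pull range"
    return n, f_dds

n, f_dds = tune(500.0e6)
print(n, f_dds)   # divider and the DDS frequency it implies
```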
The Coaxial Resonator Oscillators at 1.8 GHz for the S-band down converter and
2.05 GHz (100 MHz tuning range) for the S-band FM Telemetry Receiver were designed
using EEsof software. A coaxial resonator is a transmission line, either a quarter
wavelength long with a short circuit or a half wavelength long with an open
circuit, that behaves as a parallel resonant circuit. The coaxial resonator for a
required frequency is selected using the software provided by Trans-Tech Inc.,
"Coaxial Resonator Design Program for Windows".
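The quarter-wave resonator length follows directly from the dielectric-loaded wavelength. A rough estimate is sketched below; the relative permittivity of 38 is an assumed value typical of high-permittivity resonator ceramics, not a figure quoted in the paper:

```python
import math

C = 299_792_458.0   # speed of light, m/s

def quarter_wave_length_mm(f_hz, eps_r):
    """Physical length of a quarter-wavelength coaxial resonator.
    The wavelength inside the dielectric is shortened by sqrt(eps_r)."""
    wavelength = C / (f_hz * math.sqrt(eps_r))
    return 1000.0 * wavelength / 4.0

# With the assumed permittivity, a 1.8 GHz resonator is only a few mm long:
print(round(quarter_wave_length_mm(1.8e9, 38.0), 2))
```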
The 8/2 resonators SR 8800SPH 1850AY and SR 8800LPH 2175 of Trans-Tech Inc. were
chosen for the 1.8 GHz and 2.05 GHz oscillators respectively. The active device
chosen is the NE 64535 bipolar microwave transistor. The small-signal
S-parameters of the transistor for a bias of VCE = 8 V and IC = 10 mA are given
below (magnitude ∠ angle in degrees).
Freq. (MHz)   S11          S21          S12         S22
500           0.56∠-122    12.34∠110    0.03∠41     0.58∠-39
1000          0.52∠-161    6.83∠87      0.04∠42     0.44∠-42
2000          0.52∠168     3.69∠63      0.07∠48     0.40∠-49
3000          0.53∠146     2.49∠42      0.10∠47     0.40∠-63
A feedback capacitor C1 is connected between the emitter and ground, and its value
is chosen so that the real parts of the input and output impedances have
sufficient negative resistance and a low reactance. For a capacitance of 1.5 pF,
S11, S22, Zin and Zout are given below (S-parameters as magnitude ∠ angle in
degrees, impedances in ohms).
Freq. (GHz)   S11            S22          ZIN             ZOUT
2.20          4.186∠-56.80   2.53∠-42.3   -59.26-j25.13   -73.80-j46.6
2.25          4.640∠-61.76   2.74∠-46.4   -56.59-j22.50   -68.70-j42.0
2.30          5.170∠-67.90   2.98∠-51.7   -54.00-j20.00   -63.60-j37.8
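The Zin/Zout columns follow from the reflection coefficients via Z = Z0(1+S)/(1-S); a quick numerical check of the 2.20 GHz input entry:

```python
import cmath

Z0 = 50.0   # reference impedance, ohms

def z_from_s(mag, angle_deg):
    """Convert a one-port reflection coefficient (magnitude, angle in
    degrees) to an impedance: Z = Z0 * (1 + S) / (1 - S). A reflection
    magnitude greater than 1 yields a negative real part."""
    s = cmath.rect(mag, angle_deg * cmath.pi / 180.0)
    return Z0 * (1 + s) / (1 - s)

z_in = z_from_s(4.186, -56.80)
print(z_in)   # close to the tabulated -59.26-j25.13
```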
The resonator network consists of the coaxial resonator (R2, C3, L1), a GaAs
varactor diode ND5050-3D (C4) and coupling capacitors C2 and C5, as shown in
fig(2).
Fig(2a) Fig(2b)
As the value of coupling capacitor C2 is increased, the tuning range increases at
the expense of Q. The value of C5 must stay above a critical value, below which
the circuit will not oscillate. The output impedances of the two oscillator
circuits are given below for several varactor capacitance values (impedances in
ohms).
Varactor cap. (pF)   1.8 GHz osc.: Freq. (GHz) / Output imp. (Ω)   2.05 GHz osc.: Freq. (GHz) / Output imp. (Ω)
0.5                  1.804 / -195-j10                              2.087 / -175+j10
1                    1.798 / -203+j7.0                             2.056 / -186+j2
2                    1.792 / -200+j14                              2.020 / -177+j12
3                    1.790 / -197-j3.0                             1.999 / -178+j15
4                    1.788 / -196-j10                              1.985 / -180-j4.5
The circuit will oscillate into a 50 Ω load without an output matching network,
because the magnitude of the output impedance is greater than 50 Ω. The CRO has
become increasingly popular up to the 3.0 GHz frequency range, where medium tuning
bandwidth and good stability are required. Tuning ranges of 16.0 MHz [fig(2a)] and
100 MHz [fig(2b)] are achieved for coupling capacitor (C2) values of 0.5 pF and
2.0 pF respectively.
The overall S-parameters and noise figure of this stage are given below
(magnitude ∠ angle in degrees).
Freq. (GHz)   S11            S21            S12           S22            N.F. (dB)
1.8           0.443∠-92.8    14.30∠66.68    0.044∠18.25   0.382∠-69.4    0.394
2.0           0.250∠-132.4   14.00∠38.10    0.048∠-6.60   0.316∠-87.2    0.332
2.2           0.127∠145.4    13.27∠11.22    0.050∠-30.0   0.245∠-102.9   0.322
2.4           0.223∠65.0     12.23∠-13.85   0.049∠-51.7   0.181∠-116.4   0.375
2.6           0.350∠29.0     11.00∠-37.30   0.048∠-72.0   0.129∠-128.0   0.493
Fig(3)
The second stage is a GaAs monolithic amplifier with a noise figure of about
2.4 dB. A GaAs voltage-variable attenuator is used between the LNA and the first
mixer to improve dynamic range. A silicon bipolar MMIC active double-balanced
mixer/IF amplifier with a gain of 8 dB is used for both the first and second
mixers. The second mixer output (second IF, 70 MHz) is passed through a SAW filter
after amplification. SAW filters with different bandwidths are connected through
SP4T switches, and a filter is selected depending upon the data bandwidth of the
incoming signal.
The non-coherent AGC is applied to the two attenuators to keep all stages
operating in the linear region up to a maximum input signal strength of -30 dBm.
The local oscillator power requirement is only -5.0 dBm for both mixers. The block
diagram of the S-band down converter is shown in fig(4). The unit is mounted in an
environmentally sealed enclosure near the antenna. The input frequency and IF
filter bandwidth are selected from the PC, using a custom-developed application
program. By replacing the CRO with a varactor-tuned DRO, changing the prescaler
division from 2 to 4/8, and replacing the front end with C/Ku-band MMICs, it can
be configured as a C/Ku-band down converter.
FM TELEMETRY RECEIVER:
The block diagram of the FM Telemetry Receiver is shown in fig(5). The front end
of the receiver is similar to that of the S-band down converter. The first L.O. is
realised using the hybrid DDS and PLL technique; the second L.O. is a SAW
oscillator whose frequency is corrected using an AFC loop. The input frequency is
selectable to any frequency within a resolution of 100 Hz in the 2.2 to 2.3 GHz
band. The receiver employs a modular architecture so it can be configured for
different data rates, IF bandwidths and FM deviation responses. It is integrated
into a custom-developed application program. After the input frequency has been
selected, the control program reports both the input signal strength and the
output levels for the selected input frequency.
REFERENCES:
* "Design Considerations For Fast Switching PLL Synthesizers", by William C. Beam
and Philip J. Rezin, WJ Tech-notes, M/A 61991.
* "Introduction to Analog and Direct Digital Synthesis", by Robert Howald, RF
Design, Jan. 1995.
* "S-Band VCO Design Using Coaxial Resonator (CRO)", by S. Satyanarayana and
J. Girija, 1996 Asia Pacific Microwave Conference.
* "Design of a Low-noise Amplifier using HEMTs", by S. Satyanarayana, VSSC,
Trivandrum, RF Design, March 1994.
ACKNOWLEDGMENT:
The authors express their gratitude to Sri N.K. Agarwal, Head, Electromagnetics
Division, and Sri S.B.R. Shenoy, Group Director, Electronics Group, for their
support during this work. The authors also thank Dr. B. Suresh, Deputy Director,
AVN, and Dr. S. Srinivasan, Director, VSSC, for their encouragement.
DISTRIBUTED INTERACTIVE SIMULATION: THE ANSWER TO
INTEROPERABLE TEST AND TRAINING INSTRUMENTATION
ABSTRACT
This paper discusses Global Positioning System (GPS) Range Applications Joint Program
Office (RAJPO) efforts to foster interoperability between airborne instrumentation, virtual
simulators, and constructive simulations using Distributed Interactive Simulation (DIS). In
the past, the testing and training communities developed separate airborne instrumentation
systems primarily because available technology couldn’t encompass both communities’
requirements. As budgets get smaller, as requirements merge, and as technology advances,
the separate systems can be used interoperably and possibly merged to meet common
requirements. Using DIS to bridge the gap between the RAJPO test instrumentation
system and the Air Combat Maneuvering Instrumentation (ACMI) training systems
provides a de facto system-level interoperable interface while giving both communities the
added benefits of interaction with the modeling and simulation world. The RAJPO leads
the test community in using DIS. RAJPO instrumentation has already supported training
exercises such as Roving Sands 95, Warfighter 95, and Combat Synthetic Test, Training,
and Assessment Range (STTAR) and major tests such as the Joint Advanced Distributed
Simulation (JADS) Joint Test and Evaluation (JT&E) program. Future efforts may include
support of Warrior Flag 97 and upgrading the Nellis No-Drop Bomb Scoring Ranges.
These exercises, combining the use of DIS and RAJPO instrumentation to date,
demonstrate how a single airborne system can be used successfully to support both test
and training requirements. The Air Combat Training System (ACTS) Program plans to
build interoperability through DIS into existing and future ACMI systems. The RAJPO is
committed to fostering interoperable airborne instrumentation systems as well as interfaces
to virtual and constructive systems in the modeling and simulation world. This
interoperability will provide a highly realistic combat training and test synthetic
environment enhancing the military’s ability to train its warfighters and test its advanced
weapon systems.
KEY WORDS
The testing and training communities have developed different instrumentation systems to
accomplish their mission requirements. Test requirements usually consist of a few aircraft
instrumented over a small airspace with highly accurate multiple time-space-position
information (TSPI) sources. In the past, the multiple TSPI sources may have included an
optical tracker, several tracking radars and a high sample rate telemetry system on the unit
being tested, which were later reduced and merged into a final highly precise data product.
As a result of the shrinking defense budget, the military has moved towards making
development tests more operationally representative. With the advent of GPS, this has
become much easier. GPS gives test ranges the capability to include such things as flying
over a larger airspace with more aircraft in more operationally-representative flight profiles
with no loss of pertinent data on the item or items under test. The RAJPO GPS-based test
instrumentation was designed to fulfill development test requirements. Training
requirements, on the other hand, consist of many aircraft over a larger airspace with lower
TSPI accuracy and lower telemetry rates. In addition, the training community requires real-
time weapon simulations with real-time display and control in order to create a realistic
combat environment. The ACTS Program Office multilateration-based ACMI system was
designed to fulfill training requirements. Using advancing technology of graphic displays
and computational processors, the RAJPO and ACTS systems took on similar subsystem
functionalities. Both systems consist of an airborne instrumentation pod, a data link
subsystem, a simulation/analysis subsystem and a display subsystem. See Figure 1 for a
typical RAJPO instrumentation system and Figure 2 for a typical ACMI instrumentation
system. Ongoing efforts are looking at ways to make the two systems common and
interoperable, and in the future, may eventually evolve to a single test and training system.
This paper discusses fostering current test and training system interoperability by taking
advantage of their subsystem similarities. These similarities provide a prime opportunity to
use DIS to make both systems interoperable at the system-level as well as provide an
interface to virtual and constructive simulations. For this paper, two or more systems are
considered interoperable if they can provide data to and accept data from each other, and
can use the data so exchanged to operate effectively together.
Figure 1 - Typical RAJPO instrumentation system (airborne pod, Master Remote
Ground Stations (MRGS), Datalink Subsystem (DLS), DISGOVR software, and display
and debriefing subsystem on a Distributed Interactive Simulation (DIS) network)
Figure 2 - Typical ACMI instrumentation system (airborne pod, remote unit and
master station Tracking Instrumentation Subsystems, and Control and Computation
Subsystem)
DIS is a standard that promotes the interaction and interoperability of virtual systems
(human-in-the-loop simulators), constructive systems (wargames), and live systems
(aircraft, ground vehicles, test and evaluation systems). It does this by defining the rules
and protocols for networking through standard computer interfaces, and defining a time
and space coherent synthetic representation of the real world. DIS has been especially
useful in the modeling and simulation world for interconnecting diverse virtual and
constructive simulation systems. Recent advances, such as GPS and advanced data link systems,
have allowed the real-time interaction of live airborne and ground participants within the
synthetic world. The RAJPO system added a DIS capability in 1994 as part of support for
Roving Sands 95. This project was called DISGOVR (DIS GPS Optimal Virtual Range)
and resulted in development of two software applications: the Live Entity Broker (LEB)
and the Live Entity VisualizeR (LEVR).
The LEB software takes the RAJPO TSPI telemetry stream and converts it to DIS
entity state protocol data units (PDUs). The LEVR software is a three-dimensional
display system that monitors DIS PDUs from the LEB and other DIS PDU sources.
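DIS entity state PDUs carry position in WGS-84 geocentric (earth-centered, earth-fixed) coordinates, so a converter like LEB must transform geodetic TSPI into that frame. A minimal sketch of the standard geodetic-to-ECEF transform (the function name and sample point are illustrative; the actual DISGOVR code is not described in the paper):

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0                  # semi-major axis, m
F = 1.0 / 298.257223563        # flattening
E2 = F * (2.0 - F)             # first eccentricity squared

def tspi_to_dis_location(lat_deg, lon_deg, alt_m):
    """Convert geodetic TSPI (latitude/longitude in degrees, altitude
    in metres above the ellipsoid) to the geocentric coordinates a DIS
    Entity State PDU carries."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

print(tspi_to_dis_location(32.9, -106.4, 1200.0))  # a point near WSMR
```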
Roving Sands 95 was a multinational joint tactical operations exercise using military
ranges in New Mexico, Texas, and southern Colorado. Five F-15s and four F-16s were
instrumented using RAJPO instrumentation pods and four AH-64 helicopters were
instrumented using White Sands Missile Range’s (WSMR) Truth Data Acquisition
Recording and Display System (TDARDS) hardware. The DISGOVR software was used
to convert the TSPI data from the two systems (RAJPO and TDARDS) and insert these
aircraft into the synthetic battlefield representing the live exercise area. In addition, the
battlefield was populated with simulated virtual air defense assets (Patriot, Hawk, and
Avenger) and ground targets (SCUDs) that represented live assets. Several virtual man-in-
the-loop cockpit simulators and constructive simulations from around the country
participated in the synthetic battlefield.
Another effort supported by the RAJPO DIS capability was Warfighter 95. Warfighter 95
was an Advanced Distributed Simulation (ADS) exercise merging constructive simulation,
virtual simulators, and live fly aircraft to exercise battlestaffs, command and control
elements, and shooters in the conduct of theater air defense missions. Warfighter 95
employed a Korean scenario. The DISGOVR software was used to map two live F-16
aircraft in real-time into the virtual battlefield. While LEB converted TSPI to DIS PDUs, it
also translated the data to new coordinates that corresponded to the Korean peninsula.
LEVR, which was fitted with a virtual Korean terrain database, visualized the live, virtual,
and constructive participants. Both Roving Sands 95 and Warfighter 95 were outstanding
successes in using RAJPO and DIS to integrate live participants into a virtual battlefield.
Another effort conducted by the Army in early 1996 was called Combat Synthetic Test,
Training, and Assessment Range (STTAR). Combat STTAR conducted a deep strike
training mission at the National Training Center (NTC), Fort Irwin, CA. WSMR supported
the effort using RAJPO instrumentation mounted on six AH-64A helicopters and an
airborne telemetry relay via a C-12 fixed wing aircraft. The DISGOVR software was used
by WSMR to convert the live aircraft data to DIS PDUs so that the training mission could
be viewed by NTC, Ft Hood, and WSMR on their Virtual Reality Display System, thus
enhancing virtual reality situational awareness. An objective of this successful effort was
to enhance the operational environment to improve system testing while expanding the
opportunity to train the warfighter. There are more examples of recent successes in
interfacing live participants into a virtual world.
While we have mentioned examples of using test instrumentation and DIS for meeting
primarily training requirements, developmental and operational testing using DIS has not
been fully addressed. A special organization, the JADS Joint Test Force, was created to
investigate the utility of ADS for test and evaluation. They will conduct actual tests using
ADS to examine and quantify its utility. Some of these tests involve support from the
RAJPO and DISGOVR. One of the tests involves linking a live shooter aircraft with a
hardware-in-the-loop simulation of an AMRAAM against a live target aircraft. RAJPO
equipment will be mounted on the two live aircraft and converted to DIS PDUs for
integration with the AMRAAM simulation for viewing both at Eglin AFB and Kirtland
AFB. This type of test may be effective and affordable for early integration testing and
help testers plan their limited live missile tests more effectively. Though the testing hasn’t
occurred yet, we believe DIS will be proven as a useful tool for increasing the
effectiveness of modeling and simulation for test purposes.
Ongoing RAJPO efforts, with cooperation from the ACTS community, are to permit use of
existing ACMI training and RAJPO test instrumentation systems on the same range using
DIS. This is in line with two separate efforts to integrate test and training. The first is the
Air Force’s New Vector Initiative for Modeling and Simulation (M&S). The Chief of Staff
and the Secretary of the Air Force directed emphasis be placed on M&S. This is based on
the realization that M&S is essential to the Air Force to organize, train, and equip its
forces. A key concept mentioned is the integration of live participants, human-in-the-loop
virtual simulators, and constructive simulations that can interact within a common synthetic
battlefield across distributed networks for both testing and training purposes.
The second is a recent Range Commanders Council initiative to integrate test and training
instrumentation. This initiative attempts to leverage current investments in airborne
instrumentation systems by identifying interfaces that allow common display of TSPI from
each system. The ultimate goal is a common test and training instrumentation system. The
Range Commanders Council is composed of high-ranking representatives from the Major
Range and Test Facility Bases (MRTFB) of the Army, Navy, and Air Force.
DIS BENEFITS
Other approaches being looked at for accomplishing the integration of test and training
include airborne pod commonality and direct interfaces on a range-by-range basis. Both
are longer term and higher cost alternatives compared to developing a DIS interface. The
RAJPO is working with the ACTS program to establish a DIS capability for their current
display subsystem and their existing and future simulation subsystems. This ACMI DIS
capability will allow interoperability with RAJPO instrumentation and many other
operational virtual training and test simulators and simulations. DIS interoperable and
compatible test and training instrumentation systems have the following benefits:
a. DIS is an established DoD and industry standard (IEEE 1278), and future systems
will be DIS compliant. The other services are all moving towards making their
systems DIS compatible. The Air Force needs to support DIS interoperability for
its own systems and other systems throughout the DoD, which will significantly
enhance its ability to conduct joint training and meet joint testing objectives.
c. DIS can tie together ranges anywhere a DIS training or test system exists for
real-time testing/training. Associated ranges for a common exercise can interact together via
DIS and not leave their home station. These ranges could also synthetically expand to
any number of different live or virtual participants. For instance, an Army range can
tie in tanks and soldiers with a Navy range that has aircraft, whose weapon systems
interact through DIS. These ranges can be geographically located side-by-side or a
continent apart.
The ACTS Program Office has been providing DoD training ranges with airborne
instrumentation since the mid-70's for aircraft tracking and weapon and threat simulation. This
instrumentation has been successfully used to train aircrews through live exercises such as
the Air Force’s Red Flag exercise at Nellis AFB. None of the current ACTS systems are
DIS compliant.
The Nellis Range could use DIS today. The Red Flag Measurement and Debriefing System
(RFMDS) is an airborne instrumentation system that supports real-time combat training for
up to 36 high activity aircraft with an assortment of weapon and threat simulation
capabilities. This system and its replacement, the Nellis Air Combat Training System
(NACTS), could enjoy many of the benefits discussed earlier in this paper by having a DIS
interface. The RAJPO is committed to helping them realize the full potential of DIS using
a combined exercise and test to be conducted at Nellis AFB. The exercise, called Warrior
Flag 97, is an ADS exercise to provide multiservice air operations center training and Air
Force theater air control system battle management operator training. The test, called
Project Strike II, will exercise and evaluate the events of a sensor-to-decision-maker-to-shooter
cycle against a time-critical target. All participants in the test will be live.
The operators will be trained using primarily DIS interfaced virtual and constructive
simulations from across the country with the Project Strike II test fulfilling the live portion.
The Nellis RFMDS is being considered to support this combined effort; the system will
need a DIS capability.
In addition to having the RFMDS, the Nellis Range controls several range facilities that
perform no-drop bomb scoring for the Air Force operational command, using ground radar.
A typical scenario has a B-52 flying over a range target and dropping simulated weapons.
When a crew member releases the simulated weapons, a tone is telemetered to the training
range. The range records the time of the tone and, using range radar TSPI of the aircraft,
calculates whether the bombs hit their intended target. The increasing precision of our air-
to-ground weapons requires better accuracies than can be provided using radar. In
addition, the radars are old and expensive to maintain. The Nellis Range Group would like
to replace the radars using current GPS-based airborne instrumentation and advanced data
link telemetry to provide highly accurate TSPI, low maintenance equipment, and a DIS
interface to conduct air crew debriefs at their home stations around the country. In
addition, these systems could integrate into a Red Flag or like exercise (assuming the
RFMDS/NACTS are DIS compliant) for air-to-ground drops.
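The no-drop scoring computation described above reduces to interpolating the aircraft's TSPI at the recorded tone time and comparing the release point with the target. The following sketch, with invented sample data and function names, illustrates only the interpolation and distance steps; an operational scorer would also propagate a weapon trajectory from the release state.

```python
# Sketch (illustrative data and names): interpolate the aircraft's TSPI
# position at the recorded release-tone time, then measure the distance
# to the target.

def interpolate_tspi(samples, t):
    """samples: list of (time_s, x, y, z) sorted by time; position at t."""
    for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return tuple(a + f * (b - a) for a, b in zip(p0, p1))
    raise ValueError("tone time outside TSPI span")

def miss_distance(release_pos, target_pos):
    return sum((a - b) ** 2 for a, b in zip(release_pos, target_pos)) ** 0.5

# Hypothetical 1 Hz TSPI track (metres) with a release tone at t = 2.5 s
tspi = [(0, 0, 0, 3000), (1, 200, 0, 3000), (2, 400, 0, 3000), (3, 600, 0, 3000)]
release = interpolate_tspi(tspi, 2.5)
```

The same interpolation applies whether the TSPI source is range radar or the GPS-based instrumentation proposed as its replacement; only the accuracy of the samples changes.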
SUMMARY
The time is right for the test and training communities to work together to accomplish the
DoD’s development test, operational test, and training requirements. The RAJPO has
already developed a DIS-compatible instrumentation system that has proven itself in
several training exercises. The modeling and simulation community supports the continued
use of DIS and the Range Commanders Council supports the integration of test and
training instrumentation. The RAJPO fully supports the ACTS community making existing
and future training instrumentation DIS compatible.
Making existing and future ACTS instrumentation systems DIS compatible would benefit
the Air Force immensely. Such a DIS capability would result in interoperability of existing
and future test and training airborne instrumentation systems as well as create a
standardized connection to other virtual and constructive systems. This capability would
clearly enhance the military services’ ability to train their warfighters and test their advanced
weapon systems.
REFERENCES
Pace, Rich, “The Pioneering of GPS-Based Range Programs,” Range Applications Joint
Program Office, Eglin AFB, FL, Jan 1995.
Whittemore, Tom, et al., “Integrated Testing and Training Instrumentation, A Reality,”
Range Applications Joint Program Office, Eglin AFB, FL, April 1996.
Wilson, Kris, et al., “Virtual Defense: Real-time GPS in Simulated Military
Environments,” GPS World, September 1995, p. 52.
“JADS, the Gateway to Reality” pamphlet, Joint Advanced Distributed Simulation Joint
Test Force, Kirtland AFB, NM, 1996.
Kirti Patel
Space Systems/Loral
3825 Fabian Way
Palo Alto, CA 94303
ABSTRACT
With explosive growth in the satellite communication market, there is an increasing need
for satellite network service providers to support many satellites with common Telemetry,
Tracking, and Commanding (TT&C) assets. Open bus technology and Commercial Off
The Shelf (COTS) hardware and software components provide an opportunity to build
common IF and baseband systems that will support many satellites with different
frequencies and protocols. However, the high frequency front end components of the
ground station, such as the antenna or HPA, cannot be common because of the different
gain and polarization requirements of the various communication bands and frequencies.
The system architecture presented in this paper is interoperable and reconfigurable in near
real time to support multiple frequencies and multiple communication protocols.
Keywords
1.0 Introduction
There are numerous satellite systems, both military and commercial, employing Telemetry
Tracking & Commands (TT&C) ground stations. These TT&C systems will continue to
operate on various frequency bands determined originally (and usually many years ago) by
technical and regulatory factors. In addition to numerous frequencies, the TT&C
waveforms, necessary for telemetry, tracking, command and ranging functions, use
modulation techniques differing from one another depending on mission objectives,
security, and data rates. This study investigates the frequency bands and modulation
techniques (herein called protocols) used by selected satellite systems to see if there are
interoperable characteristics that will allow processing by a single state-of-the-art multi-
frequency, multi-protocol equipment string at ground stations.
A basic assumption is that only one downlink and uplink signal will be processed at a time.
Also, the diversity of Radio Frequencies (RF) used by the various TT&C systems indicates
that either a separate antenna is required for each frequency band or multiple feeds are
needed. We assumed a separate antenna for each frequency. The antenna systems are
different for each band because they are optimized (gain, polarization, etc.) for the
particular satellite system and frequency band. The associated transmit waveguide and
waveguide switches, if required, are different for each band because the size of the
waveguide is frequency dependent. Another assumption is that the chosen frequency bands
work with a common 70 MHz, first Intermediate Frequency (IF) for uplink and downlink
streams.
Because this was a feasibility study, no interface engineering was performed to ascertain
the ease of interfacing this set of equipment. Also, no implementation considerations, such
as environmental or packaging factors, were addressed. EHF was excluded because there
is at present no standard protocol in the larger space community.
The TT&C systems and their frequency bands listed in Table 2-1 were selected and
examined for this study. These satellite systems were chosen because they represent
proven and operational TT&C systems, and their frequencies are representative of the
heavily used frequency bands for satellite communications. SGLS is the primary AF SCN
TT&C vehicle, which supports many DoD satellites. The DSCS
system was chosen because it is an important user of X-band. The TDRSS network
supports NASA Low Earth Orbit (LEO) satellites and Space Shuttle flights. INTELSAT is
a commercial system using C-band as well as Ku-band. The NASA CCSDS system was
chosen because it is the system for deep space missions, providing TT&C on S- and X-bands.
The study also examined the compatibility of emerging telemetry standards and
frequencies, such as CCSDS and the new Ka-band frequencies (23 GHz-26 GHz)
recommended by the Space Network Interoperability Panel (SNIP) of NASA, with the
existing frequencies described in section 2.1. The following table provides the details on
the frequencies.
Table 2-1. Satellite Frequency Band Selection
An in-depth analysis of the frequency bands of interest, signal structure, and modulation
and demodulation at sub-carrier and carrier frequencies was conducted. The effort
concentrated on defining the common elements such as base band data rates, data formats,
telemetry, ranging, and commanding signal characteristics, symbol rates, and first and
second IF frequency distribution among the selected frequency bands. The results of the
analysis were tabulated and used to develop the Interoperable TT&C architecture.
A waveform protocol analysis for the SGLS, STDN, and CCSDS formats was also
conducted. The SGLS and STDN protocols apply to the physical layer of the transmission
link; therefore, the TT&C equipment must comply with them. The CCSDS protocol, on
the other hand, follows the ISO 7-layer network model and requires application S/W at
the user’s workstation to process the packetized telemetry data. Error detection for the
CCSDS protocol must be handled immediately after the TT&C Bit Synchronizer.
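As a concrete illustration of that error-detection step, the sketch below verifies a CRC-16-CCITT check word of the kind CCSDS transfer frames carry in their Frame Error Control Field (polynomial 0x1021 with initial value 0xFFFF is assumed here); the frame contents are invented.

```python
# Sketch of post-bit-sync error detection: verify a CRC-16-CCITT
# (poly 0x1021, initial value 0xFFFF -- assumed here for the CCSDS
# Frame Error Control Field) appended to an invented frame.

def crc16_ccitt(data, crc=0xFFFF):
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def frame_ok(frame):
    """frame = payload followed by a 2-byte, big-endian check field."""
    return crc16_ccitt(frame[:-2]) == int.from_bytes(frame[-2:], "big")

payload = bytes(range(16))                 # stand-in frame contents
frame = payload + crc16_ccitt(payload).to_bytes(2, "big")
```

A frame failing this check would be flagged before the packetized telemetry is passed up to the user's application S/W.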
2.4 Interoperable TT&C Functional Architecture
The architecture developed for interoperable TT&C, shown in Figure 2-1, is quite different
from the traditional single stream TT&C systems. The major difference is that the new
architecture must support the common TT&C processing as well as the frequency specific
TT&C processing requirements. Accordingly, the architecture is partitioned in two
segments, as shown in Figure 2-1: 1) the Frequency-band Unique Element (FUE), which
includes the RF front end equipment for uplink and downlink, and 2) the Common User
Element (CUE), which includes the RF Receiver Pool, the Modem Pool, and the Uplink
Signal Generator subsystems that are common to the chosen frequency bands. The FUE
includes a set of dedicated diplexers, ACUs, Up Converters, Down Converters, RF
Switches, LNAs, and HPAs that are replicated for each frequency band. These are
stand-alone black boxes that communicate with the TCS processor via an IEEE 488 I/F or
a discrete I/F. Since the FUE uses standard black box components, the remainder of the
paper concentrates on the CUE segment of the architecture.
The CUE of the architecture employs an open bus architecture that promotes a
“Plug and Use” concept. Under this concept, as new requirements are identified, the
necessary hardware and software modules can be added to the CUE with minimal changes
to provide additional capabilities. The example architecture only shows uplink and
downlink capabilities, and does not include any support equipment such as command echo
check or test transponders, required for pre-pass test and verification.
The architecture shown in Figure 2-2, supports a single telemetry data stream at a time,
and makes use of specific equipment in the FUE and the common H/W modules such as
Bit Syncs, Modems, and Command Generators in the CUE. However, the architecture is
flexible enough to support simultaneous multiple telemetry data streams by simply adding
necessary H/W and S/W modules to the CUE. The proposed architecture supports S-band,
C-band, X-band, and Ku-band TT&C, and processes SGLS, STDN, and CCSDS
protocols.
The CUE of the TT&C architecture uses a dual bus architecture: 1) an independent serial
bus for the telemetry, commanding, and ranging data that flow across the system through
the 8x8 switches, and 2) the VME64 bus to configure, control, and monitor the
performance of the various components. The VME64 bus is an extension of the present
VME bus and is backward compatible with VME bus COTS H/W. The TT&C system can
be expanded by adding modules as long as the aggregate data rate on the VME64 bus
does not exceed 80 Mbytes/sec.
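The 80 Mbytes/sec ceiling implies a simple budget check before new modules are added to the backplane. A minimal sketch, with illustrative per-module rates:

```python
# Budget check for the VME64 aggregate data rate (80 Mbytes/s, from the text).
# Per-module rates are illustrative.

VME64_LIMIT = 80.0  # Mbytes/s

def can_add(existing_rates, new_rate, limit=VME64_LIMIT):
    """existing_rates: Mbytes/s consumed by each installed module."""
    return sum(existing_rates) + new_rate <= limit

installed = [12.5, 20.0, 8.0]   # hypothetical installed modules
```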
[Figure 2-1 block diagram. Frequency-Band Unique Elements (FUE): per-band (S, C, X,
Ku) antenna control, diplexers, LNAs, down converters, up converters, RF switches,
HPAs, and ACUs. Common User Elements (CUE): RF Receiver Pool, Modem Pool, and
Uplink Signal Generator Pool carrying data, TLM, and CMDS to and from users, all under
the Telemetry Control & Status (TCS) processor.]
The open bus architecture of the TT&C system, and the programmability feature of the
various components allows standardization of some of the H/W and S/W components for
this architecture. For example, the 8x8 switch, the SBCs and the Real Time Operating
System (RTOS) are the common components that are used across the Uplink and
Downlink streams. Section 2.4.1 describes the functionality of the common components
used in the CUE.
The 8x8 VME Switch is used for analog and digital data routing within the RF Receiver
Pool, the Modem Pool, and the Uplink Signal Generator sub-systems. The typical switch is
configured in an 8x8 matrix using solid state cross points. The switch provides 40 dB
isolation between the adjacent channels at 100 MHz. The switch is controlled and
configured in real time from the SBC over the VME bus.
Figure 2-2. Interoperable TT&C System Detail Block Diagram
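The routing state the SBC maintains for such a switch can be sketched as a small map from output channel to input channel: each output listens to exactly one input, while one input may fan out to several outputs. The class and method names below are illustrative, not a vendor API:

```python
# Illustrative routing state for an 8x8 crosspoint switch: each output
# channel listens to exactly one input; an input may fan out to several
# outputs. Not a vendor API.

class CrosspointSwitch:
    def __init__(self, size=8):
        self.size = size
        self.routes = {}                  # output channel -> input channel

    def connect(self, inp, out):
        if not (0 <= inp < self.size and 0 <= out < self.size):
            raise ValueError("channel out of range")
        self.routes[out] = inp            # reassigning an output re-routes it

    def source_of(self, out):
        return self.routes.get(out)       # None if the output is idle

sw = CrosspointSwitch()
sw.connect(2, 5)    # e.g. route the 70 MHz IF on input 2 to output 5
sw.connect(2, 6)    # fan the same input out to a second output
```

In the real system this table would be rewritten in real time by the SBC over the VME bus as the TCS processor reconfigures the equipment string.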
Single Board Computer (SBC) Card
The SBC card is a VME based component that uses COTS Central Processing Unit
(CPU). This is a general purpose processor card that has on-board RAM, ROM and
EEPROMs. The RTOS described in the following section provides a mechanism to
manage the on-board H/W and S/W resources, the external events, and the real time
execution of the application S/W. The SBC operates under the control of the TCS
processor, and reports status back to the TCS via the IEEE 488 bus or over the Local
Area Network (LAN) via an Ethernet I/F.
The local control port is provided to perform diagnostic and troubleshooting of the
subsystems at the VME module level.
The standard COTS RTOS provides the basic operating structure for the SBC card: it
manages the local on-board resources, controls the execution of application-specific S/W,
and attends to events external to the SBC. The RTOS required for this application will
include a pre-emptive multitasking scheduler, which will control the operations of the
various telemetry components and the execution of the application S/W residing in the
RF Receiver Pool, the Modem Pool, and the Uplink Signal Generator subsystems.
The traditional TT&C down link processes the telemetry signals in an analog fashion from
the moment the signal is received by the antenna until it propagates to the base band stage.
The proposed architecture in Figure 2-2 exploits the state-of-the-art DSP, DDS, and ADC
technologies, and starts processing the telemetry signals digitally at the 2nd IF stage
instead of the base band stage. In other words, the telemetry signal that enters the RF
Receiver Pool in analog form via the 8x8 Input Switch is converted into serial digital
data streams by the telemetry and ranging receivers in the RF Receiver Pool, and
propagates digitally through the rest of the downlink chain. The use of DSP starting at the
2nd IF improves signal selectivity, sensitivity, signal to noise ratio, dynamic range, and
adjacent channel rejection. The following sections describe some of the architectural
characteristics of various frequency bands.
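The digital processing at the 2nd IF can be illustrated with a toy numerical example: sample an IF tone, mix it against a complex numerically controlled oscillator (NCO), and average. The rotating image term averages out, leaving the carrier phase and amplitude recovered entirely in the digital domain. The frequencies are scaled-down stand-ins, not the actual 2nd IF:

```python
import math

# Toy digital IF processing: sample an IF tone, mix with a complex NCO at
# the same frequency, and average. The double-frequency image averages to
# zero over an integer number of cycles, leaving 0.5 * amplitude * exp(j*phase).
# Frequencies are scaled-down stand-ins.

fs, f_if, n = 1000.0, 100.0, 1000          # sample rate, IF tone (Hz), samples
true_phase = 0.3
samples = [math.cos(2 * math.pi * f_if * k / fs + true_phase) for k in range(n)]

acc = 0j
for k, s in enumerate(samples):
    w = 2 * math.pi * f_if * k / fs
    acc += s * complex(math.cos(w), -math.sin(w))   # mix with the NCO
acc /= n

phase = math.atan2(acc.imag, acc.real)     # recovered carrier phase (rad)
amplitude = 2 * abs(acc)                   # recovered carrier amplitude
```

A real receiver would follow the mixer with proper decimating filters; the averaging here is only the simplest stand-in for that filtering.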
The CUE establishes the base for the Interoperable TT&C. The open bus modular
architecture together with the VME components and the 8x8 switches offer the necessary
flexibility to configure the system in near real time to support chosen frequency bands. The
following sections will describe the functional architecture of the CUE components in
detail.
RF Receiver Pool
The RF Receiver Pool performs satellite ranging and telemetry data receiving functions,
and includes Telemetry Receivers, Range Receivers, Single Board Computer (SBC), and
8x8 VME Switches.
Telemetry Receivers
The telemetry receivers include Fixed Frequency, Tunable, and Phase Modulation (PM)
receivers that support S-band, Ku-band, X-band, and C-band telemetry streams. The
activation, configuration, control, data routing, and status monitoring of the telemetry
receivers will be initiated by the SBC under the control of the TCS processor. The
telemetry receivers accept the down-converted 70 MHz IF signal, PM demodulate it, and
provide the sub-carrier signals for further processing to the Demodulator and to the Bit
Sync in the Modem Pool via the 8x8 Output Switch.
Range Receivers
Several range receivers are required to support the proposed architecture: the PRN/PN
Range Receiver, the Square Wave Sequential Range Receiver, and the Tone Range Receiver.
The PRN/PN code range receivers, as shown in the block diagram, generate PRN code,
and check the valid receipt of the code. The receivers also generate a tracking error signal
for the Antenna Control Unit (ACU) to track the Satellite, and perform the doppler
correction. The TDRSS PN Receiver uses a group of PN codes, while the SGLS PRN
Receiver uses either a long or a short PRN code. The Tone Range Receiver generates 4
unique frequency tones, and measures the phase delay of the received tones to compute the
range of the satellite. The tone range receiver outputs the tracking correction, and performs
the doppler correction. The INTELSAT ranging signal uses 4 tones, while the CCSDS
S-band ranging uses 8 tones.
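The tone-ranging principle described above can be made concrete: each tone's round-trip phase fixes the delay modulo one tone period, and a lower-frequency tone resolves the cycle ambiguity of a higher-frequency one. The tone frequencies below are illustrative, not a specific system's set:

```python
import math

# Tone-ranging sketch: each tone's round-trip phase fixes the delay modulo
# one tone period; a low-frequency tone resolves the cycle ambiguity of a
# high-frequency tone. Tone frequencies are illustrative.

C = 299_792_458.0  # speed of light, m/s

def delay_from_phase(phase, freq):
    """Delay implied by a round-trip phase (rad), ambiguous modulo 1/freq."""
    return phase / (2 * math.pi * freq)

def resolve(coarse_phase, f_coarse, fine_phase, f_fine):
    """Pick the fine-tone cycle count using the unambiguous coarse tone."""
    tau_c = delay_from_phase(coarse_phase, f_coarse)
    tau_f = delay_from_phase(fine_phase, f_fine)
    cycles = round((tau_c - tau_f) * f_fine)
    return tau_f + cycles / f_fine

# Simulated satellite at 1000 km: true round-trip delay and measured phases
tau = 2 * 1_000_000.0 / C
f_coarse, f_fine = 8.0, 100_000.0
ph_c = (2 * math.pi * f_coarse * tau) % (2 * math.pi)
ph_f = (2 * math.pi * f_fine * tau) % (2 * math.pi)
range_m = C * resolve(ph_c, f_coarse, ph_f, f_fine) / 2
```

The fine tone sets the precision and the coarse tone the unambiguous range, which is why the systems above use sets of 4 or 8 tones rather than one.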
Signal Routing
The signal routing for the RF Receiver Pool is controlled by the 8x8 Input switch and by
the 8x8 Output switch. The 8x8 Input switch routes the RF signal stream at first IF (70
MHz) to specific telemetry and range receivers, pre-determined by the type of the space
vehicle. The 8x8 Output switch routes the digital telemetry, the ranging data, and the
tracking error data to a specific demodulator in Modem Pool, to a specific command
modulator in the Uplink Signal generator, and to the specific ACU, respectively.
Modem Pool
The Modem Pool subsystem includes the Demodulator and Bit Sync, Simulators, the
Command Processor, and an SBC.
The Demodulator and Bit Sync group includes B/QPSK and PSK Demods, Bit Sync, and
Viterbi and CCSDS error decoders, and is capable of processing the digital telemetry data
streams for S-band, C-band, Ku-band, and X-band frequencies. The Demodulator and Bit
Sync can either process IRIG-106 PCM telemetry data or CCSDS telemetry data packets.
Table 2.4-3 illustrates the VME module utilization examples for processing the various
telemetry data types.
The CUE for the uplink stream includes the Command Generator in the Modem Pool and
the Uplink Signal Generator. The open bus modular architecture together with VME
components and the 8x8 switches offers the flexibility to configure the system in near real
time to support chosen frequency bands. The following sections describe the functional
architecture of each CUE component in detail.
Command Processor
The proposed architecture shows the command processor as part of the Modem Pool, even
though it generates the uplink commands. The command processor accepts user TT&C
commands, bit synchronizes the commands with the system clock, and modulates the
commands with the sub-carrier frequency as required. The command processor includes a
Command Generator, a B/QPSK Modulator, a Telecommand Unit/Telemetry Simulator
(TCU/TMS), and an 8x8 Switch.
Command Generator
The command generator accepts user command data and clock from the users of S-band,
X-band, and Ku-band frequencies. The user data is bit synchronized with the system
clock. The output data is sent either to the 8x8 switch or to the B/QPSK modulator for
further processing. A data time-tagging facility is also available if required.
B/QPSK Modulator
The B/QPSK Modulator accepts the formatted and bit-synchronized commands from the
Command Generator, and performs PSK modulation of the data with the sub-carrier
frequency. The output of this module is routed to the Uplink Signal Generator via the 8x8
switch. The B/QPSK Modulator processes TDRSS Ku-band, and CCSDS S-band and X-
band commanding signals.
TCU/TMS
The TCU/TMS unit accepts the INTELSAT Ku-band and C-band commanding data from
the user, and generates Frequency Shift Keyed (FSK) command tones for the Base Band
Switch (BBS) in the Uplink Signal Generator.
Uplink Signal Generator
The Uplink Signal Generator accepts the formatted command streams, the BPSK
modulated command streams, or the FSK command tones from the Command Processor in
the Modem Pool; along with the PRN code, the ranging tone, or the square wave
sequential ranging signals from the RF Receiver Pool. The Uplink Signal Generator FM or
PM modulates the commanding and ranging signals onto the IF, and routes the
70 MHz modulated IF signals to an up-converter via the 8x8 switch for conversion to
appropriate RF carrier frequency. The Uplink Signal Generator includes Command
Formatter, SGLS Modulator, Command Modulator, BBS, PM Modulator, FM Modulator,
SBC, and 8x8 Switch.
Command Formatter
The Command Formatter accepts di-bit commands and outputs the ternary 1, 0, S, and
CLK signals to the SGLS modulator. The Command Formatter is also capable of receiving
ternary echo check signals from the SGLS demodulator, and regenerates the command
stream for the test scenarios.
SGLS Modulator
The SGLS Modulator accepts ternary commands and produces 65 kHz, 76 kHz, and 95
kHz FSK tones for SGLS S-band and for DSCS X-band frequencies. The FSK tones are
summed with the 1 Mbps digital PRN code and AM modulated to produce the composite
SGLS waveforms at the base band frequency. The composite output is routed to the PM
modulator via the 8x8 Switch.
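The ternary-to-tone keying can be sketched as a simple lookup. The particular symbol-to-frequency assignment below is an assumption for illustration only; the actual SGLS assignment should be taken from the SGLS specification:

```python
# Ternary-to-tone keying sketch. The symbol-to-frequency assignment below is
# an ASSUMPTION for illustration; take the real mapping from the SGLS spec.

TONE_HZ = {"S": 65_000, "0": 76_000, "1": 95_000}

def ternary_to_tones(symbols):
    """Map the formatter's ternary symbols to FSK tone frequencies (Hz)."""
    return [TONE_HZ[s] for s in symbols]

tones = ternary_to_tones(["S", "1", "0", "1"])
```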
Command Modulator
The Command Modulator accepts B/QPSK modulated command data and PSK modulates
with the forward carrier, and TDRSS PN code. The Command Modulator processes the
Ku-band, S-band, and X-band data streams and outputs the composite signal at 370 MHz
or 70 MHz.
Base Band Switch (BBS)
The BBS accepts the INTELSAT uplink commands from the TCU/TMS via the 8x8
switch and multiplexes the commands with the range tones from the Tone Range Receiver
in the RF Receiver Pool. The BBS outputs the composite signal to the FM Modulator.
FM Modulator
The FM modulator accepts composite uplink signals from the BBS via the 8x8 switch, and
FM modulates them to a 70 MHz IF output.
PM Modulator
The PM modulator accepts composite FSK tones from the SGLS modulator or PSK bit
stream from the Command Modulator via the 8x8 switch and PM modulates the signal to a
70 MHz IF output.
Signal Routing
The 8x8 VME switch routes: (1) the composite FSK tones or PSK bit stream to either the
PM modulator, or to the FM modulator, and (2) the 70 MHz IF, FM or PM signal to a
specific up converter module.
A vendor survey of the TT&C market revealed that various combinations of COTS
components and COTS systems are available. The spectrum ranges from components in
black box or modular open bus architecture configurations to turnkey TT&C systems.
However, our survey did not find a TT&C system from a single vendor that supports the
typical ground station. This section examines the TT&C COTS components and systems
and their applicability to the interoperable TT&C architecture.
The CUE consists of VME products. The vendor survey provided data on available
TT&C VME products. Our analysis indicates that a significant number of telemetry
products, available now, will work in the proposed interoperable TT&C architecture. We
will also identify the components that are not currently available in a VME configuration.
Wherever possible, we provide vendor-supplied NRE costs to convert them into VME
products.
Downlink Component List
Table 2.5-1 presents a matrix of VME modules that can be used to implement the CUE
portion of the downlink architecture as shown in Figure 2-2. Most of the VME modules
listed in Table 2.5-1 are readily available for implementation at the present time. One
exception, however, is the ranging receivers.
Ranging receivers are not used by all TT&C ground stations. In addition, each
organization, such as NASA TDRSS, NASA CCSDS, the Air Force, and INTELSAT, uses
a range receiver with some unique specifications. Because of these differences, the demand
for each receiver type is low, and vendors are discouraged from developing VME-based
range receivers on their own. Four types of ranging receivers are used in the proposed
architecture: 1) the SGLS PRN Receiver, 2) the TDRSS PN Receiver, 3) the Tone Range
Receiver, and 4) the Square Wave Sequential Ranging Receiver. A VME version of the
Tone Range Receiver is readily available (see Table 2.5-1). The TDRSS PN Ranging
Receiver is available in a VME version; however, it is designed to operate in the STGT
environment, and some modifications may be required to meet the requirements of the
proposed architecture. The SGLS PRN Range Receiver is not available in a VME version;
the NRE cost of developing one could be approximately $300K-$400K. Very little
information is available on the Square Wave Sequential Ranging Receiver.
Table 2.5-2 presents a matrix of VME modules that are available to implement the uplink
architecture.
Our market survey indicated that, compared to downlink VME modules, fewer uplink
VME modules are available in the market at present. This is because uplink capability is
not required at every ground station, which reduces the total required quantity. Because of
this low market demand, the transition from black boxes to VME modules for uplink
equipment will be slow, and vendors will probably ask for NRE cost assistance.
3.0 Conclusion
The Interoperable TT&C architecture shown in Figure 2-2 is implementable using the
equipment listed in Table 2.5-1 and Table 2.5-2. The open bus and modular nature of the
architecture offers the flexibility to support other frequency bands as well as protocols not
discussed here. The architecture promotes the use of COTS S/W such as RTOS and OOD
shell as the main core S/W to develop the application specific telemetry S/W and
integrated configuration database.
Most of the components in the RF Receiver Pool, the Modem Pool, and the Uplink Signal
Generator subsystems in the CUE are available in VME COTS hardware. However, the
Command Generator in the Modem Pool, and the Command Modulator and the Command
Formatter in the Uplink Signal Generator subsystem will require some modifications to
meet the Interoperable TT&C requirements. The stand-alone version of the PM Modulator
will require conversion into a VME card.
The Range Receiver function in the RF Receiver Pool requires further analysis to identify
opportunities for standardization that may allow development of a common Range
Receiver product with options usable across various frequency bands.
The FUE cannot be implemented in a VME-based architecture today due to the lack of
available VME H/W. The low market demand and the challenges imposed by the high
frequency requirements of the RF components do not offer vendors enough incentive to
pursue VME productization at this time. However, technologies such as GaAs and MMIC
are available today that can be used to convert the stand-alone boxes in the FUE into VME
modules. The multi-channel Global Positioning System (GPS) receiver on a chip is an
excellent example of what these technologies can do if there is enough demand for these
types of products. Perhaps there will be enough market interest to develop these products
in the near future.
Table 2.5-1. Downlink VME Module Utilization List
Table 2.5-2. Uplink Component List
IMACCS: A PROGRESS REPORT ON NASA/GSFC'S COTS-BASED
GROUND DATA SYSTEMS, AND THEIR EXTENSION INTO NEW
DOMAINS
ABSTRACT
The Integrated Monitoring, Analysis, and Control COTS System (IMACCS), a system
providing real time satellite command and telemetry support, orbit and attitude
determination, events prediction, and data trending, was implemented in 90 days at NASA
Goddard Space Flight Center (GSFC) in 1995. This paper describes upgrades made to the
original commercial, off-the-shelf (COTS)-based prototype. These upgrades include
automation capability and spacecraft Integration and Testing (I&T) capability. A further
extension to the prototype is the establishment of a direct RF interface to a spacecraft. As
with the original prototype, all of these enhancements required lower staffing levels and
reduced schedules compared to custom system development approaches. The team's
approach to system development, including taking advantage of COTS and legacy
software, is also described.
KEYWORDS
INTRODUCTION
Traditionally, ground data systems for NASA missions were built and integrated entirely
by civil servants and contractors. Institutions at NASA centers performed system
development and mission operations. Motivated by the desire to aggressively advance
space science goals despite shrinking budgets, NASA's approach to all aspects of missions
has changed, including ground data systems. Today, missions are conceived and flown in
response to Announcements of Opportunity that make the Principal Investigator (PI)
responsible for the allocation of funds. The PI can choose to get support wherever he
perceives the best value and as a result, NASA centers must compete with each other and
non-NASA institutions to provide satellite ground data systems.
At NASA's Goddard Space Flight Center (GSFC), the Mission Operations and Data
Systems Directorate (MO&DSD) is charged with building and operating ground systems.
Faced with the competitive challenge, MO&DSD sought to reengineer its business and
initiated the RENAISSANCE project to lead the way. At its inception in 1993,
RENAISSANCE had a modest goal: build an operational ground system in less than 1 year
for less than $5 million. Initial studies by the RENAISSANCE team led to an architecture
based on reusable building blocks, garnered from GSFC's legacy systems where possible
and built to be reusable (Stottlemyer et al., 1993). This approach was called the
RENAISSANCE first generation architecture. Shortly thereafter, NASA Administrator
Goldin's exhortation to "faster, better, cheaper" was taken to imply far more substantial changes.
The RENAISSANCE team responded with a second architecture that allowed for
extensive use of COTS hardware and software (Stottlemyer et al., 1996).
Indeed, in recent years, commercial off-the-shelf (COTS) hardware and software for
satellite applications has evolved considerably. COTS tools now surpass the functionality
of many custom-built systems and system components. The Eagle testbed, an outgrowth of
the CIGSS (CSC Integrated Ground Support System) COTS and legacy system integration
project of Computer Sciences Corporation (CSC) provides the experience base for CSC's
COTS integration work (Werking and Kulp, 1993; Pendley et al., June 1994). Several
other testbed projects, including the United States Air Force's (USAF) Center for Research
Support (CERES) (Montfort, 1995), the International Maritime Satellite (INMARSAT)
consortium, and the USAF Phillips Laboratory (Crowley, 1995) have produced successful
prototypes using COTS components. The Extreme Ultraviolet Explorer (EUVE) Science
Operations Center (SOC) at the University of California at Berkeley (Malina, 1994) has
adapted a COTS-based system to automate science instrument operations, resulting in
significant cost reductions.
In 1995 CSC, building on its COTS integration experience, proposed that NASA
Goddard’s RENAISSANCE team build a COTS-based prototype to demonstrate that
significant cost reductions were possible. The Integrated Monitoring, Analysis, and
Control COTS System (IMACCS), had the following goals: integrate a set of COTS tools,
connect them to live tracking and telemetry data, and reproduce the functions of an
operational ground system (Bracken et al., 1995). The target mission for IMACCS was the
Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) mission, one of the
spacecraft in GSFC's Small Explorer (SMEX) series. SAMPEX is a low earth orbiting
satellite in its fourth year of operational support. IMACCS was designed to replicate the
current real time command and telemetry flight and off-line support for SAMPEX. A time
limit of 90 days was imposed, and indeed, proved to be sufficient.
[Figure 1 block diagram: NASCOM acquisition, tracking, and telemetry data feed the
Command Load Processor, Command Management System, LTIS 550, AMCS (frame
sync, CCSDS processing, H&S monitor, control, trending), BBN Probe, and the flight
dynamics tools RDProc, MatLab, STK, and PODS; the legend distinguishes COTS from
legacy tools.]
Figure 1. The IMACCS prototype contains six COTS software tools and three legacy tools.
A simplified block diagram of IMACCS is shown in Figure 1. The COTS hardware
and software have capabilities that exceed SAMPEX operations requirements. One tool,
the Altair Mission Control System (AMCS), used on IMACCS for command and
telemetry, shows substantial promise for automating data monitoring and commanding.
CSC, through its Eagle testbed, had prior experience with the AMCS and was familiar with
its capacity to perform automated operational support. The AMCS provides automation
through finite state modeling and state transitions (Wheal, 1993). State modeling and state
transitions proved to be easy to implement, and a set of initial state models was built.
Other features and capabilities of the IMACCS prototype are detailed in Bracken et al.
(1995).
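State modeling of this kind is straightforward to sketch. The following is a minimal, hypothetical illustration (in Python, not the AMCS modeling language) of telemetry-driven state transitions; the subsystem, state names, and thresholds are invented:

```python
# Hypothetical sketch of finite-state telemetry monitoring in the spirit of
# the AMCS state modeling described above. The state names, telemetry point,
# and thresholds are invented for illustration.

class StateModel:
    """Tracks a subsystem state and fires actions on state transitions."""

    def __init__(self, initial_state, transitions):
        # transitions: {(state, event): (next_state, action)}
        self.state = initial_state
        self.transitions = transitions

    def process(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state, action = self.transitions[key]
            return action
        return None  # no transition defined; remain in the current state


def battery_event(voltage):
    """Classify a telemetry sample into a discrete event."""
    return "low" if voltage < 24.0 else "nominal"


# A toy model: alarm when battery voltage drops, clear when it recovers.
model = StateModel("NOMINAL", {
    ("NOMINAL", "low"): ("ALARM", "notify_operator"),
    ("ALARM", "nominal"): ("NOMINAL", "clear_alarm"),
})

for sample in [27.9, 27.5, 23.8, 23.7, 26.1]:
    action = model.process(battery_event(sample))
    if action:
        print(f"{sample:5.1f} V -> {model.state}: {action}")
```

Operator functions are automated by attaching an action to each transition; telemetry that causes no transition requires no operator attention at all.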
A key characteristic of the initial IMACCS project was the speed with which it was
implemented, being fully functional 90 days after project start. This rapid turnaround on
the original implementation has been repeated for all of the extensions described in this
paper. For both new system development and major enhancement, being able to complete
major system lifecycle phases on a timescale of three months or less is essential for future
ground systems that must be delivered on reduced budgets and schedules.
IMACCS prototypes do not cover the entire traditional waterfall lifecycle. The major
phases of this lifecycle for ground system development are shown in Table 1. The phases
shown in this table are typical of standard methodologies used by NASA, DoD, and other
major institutions. Development of the original IMACCS prototype and its extensions
corresponds to the design phase through the testing phase. Because these prototypes have
been built in parallel with existing systems (or systems under development), they have
started from existing, and therefore stable, requirements. Similarly, because the prototypes
can be evaluated against operational systems in most cases, the testing needed to establish
their full requirements compliance is less than that required for a new system.
Table 1. Major lifecycle phases for ground system development.

LIFECYCLE PHASE             PRINCIPAL ACTIVITIES
Requirements Development    Determine what the system is required to do and analyze
                            requirements for completeness, feasibility, testability, etc.
Requirements Analysis       Determine system functionality and allocate functionality
                            to high-level components.
Design                      Allocate functions to low-level components in detail and
                            specify interfaces in detail.
Implementation              Construct components and integrate them into the system.
Testing                     Verify that the system meets its requirements.
Operational Deployment      Install the system for operations and integrate it with
                            existing systems.
Operations and Maintenance  Support the system as it is used operationally.
Nevertheless, the IMACCS lifecycle of 90 days (or less) for this subset of the traditional
lifecycle is still very favorable compared to the 12 to 24 month periods that have been
typical of the more traditional approach.
The tools used in IMACCS do not require extensive training to use, and enable a user to
directly implement a system function. The benefit of rapid development has been realized
by tools that enable experts in the spacecraft and operations domains to adapt the tools to
their needs without the intervention of experts from the software domain. User-intuitive
interfaces enable spacecraft and operations engineers, unfamiliar with the tool, to rapidly
customize the software. With appropriate COTS tools, individuals can easily develop
mastery of several packages, thus facilitating their integration.
Another factor in the rapid implementation of the IMACCS prototypes is the use of a
small, highly-empowered team of NASA civil servants and CSC engineers that has
working relationships with the COTS product vendors. Team members worked in close
proximity and with a high degree of cooperation. Visible results from the graphical
interfaces of the available tools further accelerated the development pace.
IMACCS EXTENSIONS
Analysis of IMACCS operational functions (Pendley et al., November 1994) showed that
although IMACCS satisfied telemetry and tracking data processing, commanding, mission
planning, archiving and trending, and orbit and attitude determination functions, a number
of other mission operations functions were not addressed. Furthermore, automation in the
first prototype was restricted to real time data monitoring. The next step for IMACCS was
to prove that a COTS-based architecture could expand both functionality and automation.
Construction of the initial set of state models showed that the AMCS had substantial
capability not only to automate operator functions, but also to implement highly
autonomous systems. Methods to automate off-line functions, such as orbit determination,
events computation, and acquisition data generation, have also been discovered. The
IMACCS team also investigated COTS alternatives to the NASCOM serial telemetry and
command interface between the antenna and the system. A COTS RF link, connected
directly to a test antenna at GSFC, was implemented and integrated with the IMACCS
system. Finally, IMACCS was extended to perform spacecraft integration and test (I&T)
functions.
Automation Our basic approach to automation was to take advantage of the capabilities
available in the products or ensembles of products that constituted IMACCS. Working
closely with the SAMPEX flight operations team, the IMACCS team developed five
categories of operations activities:
• Data monitoring
• Routine pass activities
• Known contingencies
• Emergencies
• Product generation
The IMACCS team automated data monitoring, routine pass activities, known
contingencies, and emergencies with the state modeling capability of the AMCS. These
four activities are driven by real time telemetry, and their automation is detailed in Klein et
al. (1996). For non-real time product generation, we needed a way to script the execution
of interactive, X-Windows-based programs, like Satellite Tool Kit (STK) from Analytical
Graphics. Our approach was to utilize a record-and-replay test tool, Xrunner from
Mercury Systems (Lin et al., 1996).
Radio Frequency Interface The original IMACCS received tracking and telemetry data
from, and sent commands to, SAMPEX through ground antennas located at Wallops
Island, Va.; Goldstone, Ca.; Madrid, Spain; and Canberra, Australia. These stations
communicate through the NASA Communications Network (NASCOM) serial data
interfaces for both downlink and uplink. IMACCS used the Loral Test and Information
Systems LTIS550 front end to receive and transmit data via NASCOM.
Driven by the interest of some flight projects to control all their resources, and by the
success of IMACCS, MO&DSD sponsored integration of COTS RF equipment with the
IMACCS prototype (Butler, 1996). This system is shown in Figure 2. [Figure 2. Block
diagram of the RF interface: the antenna subsystem feeds both the SPAR and a modular
COTS receiver (each performing receive, demodulation, bit synchronization, and
decoding), which deliver TTL clock/data (CCSDS frames) to the LTIS 550 and IMACCS;
legacy RF equipment and the NCPS carry commands, with range/Doppler support
planned for the future; an antenna controller and control terminal provide control and
acquisition data.] The system utilizes a 4.3-meter dish at the Greenbelt test facility and a
receiver being developed by Stanford Telecommunications, Inc. under the sponsorship of
Goddard Space Flight
Center. The multi-functional, Software-Programmable Advanced Receiver (SPAR) (Zillig
et al., 1995), couples advanced charge-coupled device (CCD) technology (developed by
MIT/Lincoln Laboratory with NASA sponsorship) and digital signal processor (DSP)
algorithms. SPAR will provide extremely flexible communications support for multiple
modulation formats, including PM, PSK, and FM; PN-spread and non-spread signals;
integrated tone ranging of user spacecraft with 1-meter accuracy; and data rates up to 10
Mbps. For the RF interface test with IMACCS, only the receiver portion was used. The
complete IF-to-baseband data receiver comprises three standard 220 millimeter, 6U VME
cards: the IF module, CCD module, and DSP module. Communications and control
connectivity between each of the modules and a local PC controller is achieved using the
5 Mbps industrial ARCNET local area network standard.
The IF module accepts RF input from 370 to 500 MHz (selected for application both at
NASA's White Sands and GN ground stations) and at an input power level between -75
and -15 dBm. A key feature of the SPAR is an architecture that optimally leverages the use
of new, programmable CCD technology against a powerful, high-speed multiprocessor
arrangement of digital signal processors. Organized as a programmable discrete-time
analog transversal filter, the CCD technology employed alternately serves as the SPAR's
PN-code matched filter for spread spectrum applications and as an IF-symbol-matched
filter for non-spread applications. The resultant signal was passed to the LTIS front end,
bypassing NASCOM altogether. The IMACCS/RF system took SAMPEX passes and
tracked the spacecraft while monitoring states and telemetry. Other COTS products are
available and could have been used in this prototype, as has been demonstrated by JPL.
The integration demonstrated the feasibility of a complete, end-to-end COTS-based system
and generated excitement and interest among demonstration audiences.
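The PN-code matched filtering that the SPAR's CCD performs in hardware can be illustrated in software: a matched filter is a correlator whose taps are the known spreading code, and code-phase acquisition looks for the correlation peak. The sketch below is purely illustrative (the code and sample stream are invented, and the SPAR performs this at IF with discrete-time analog CCD sampling, not in floating point):

```python
# Illustrative sketch (not the SPAR implementation): a PN-code matched
# filter is a correlator whose taps are the known spreading code. The
# correlation peaks when the incoming chip stream aligns with the code,
# which is how code-phase acquisition is achieved.

def matched_filter(samples, code):
    """Slide the PN code over the sample stream and return correlations."""
    n = len(code)
    return [
        sum(s * c for s, c in zip(samples[i:i + n], code))
        for i in range(len(samples) - n + 1)
    ]

# A short +/-1 spreading code, and a received stream containing one
# aligned copy of it starting at offset 3.
pn = [1, 1, 1, -1, -1, 1, -1]
received = [-1, 1, -1] + pn + [1, -1, 1, 1]

corr = matched_filter(received, pn)
peak = max(range(len(corr)), key=lambda i: corr[i])
print(peak, corr[peak])  # the peak at offset 3 reaches the full value 7
```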
Integration and Test (I&T) Spacecraft I&T system functionality substantially overlaps
operational ground system functionality, making it likely that a single system can be
tailored to perform both roles. GSFC’s RENAISSANCE team compared I&T
requirements with those of operational systems, and found this overlap in areas such as
data packing and unpacking, EU conversion, limit checking, and command and telemetry
database ingestion. They also found that I&T systems differ by requiring frequent database
updates and bit level data displays and command construction. I&T systems also derive
little benefit from automation of monitoring or commanding.
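Two of the overlapping functions named above, EU conversion and limit checking, are small enough to sketch. The polynomial coefficients and limit values below are invented for illustration; an operational system would draw them from the command and telemetry database:

```python
# Minimal sketch of two functions shared by I&T and operational ground
# systems: engineering-unit (EU) conversion and limit checking. The
# coefficients and limit values are invented for illustration.

def eu_convert(raw_counts, coefficients):
    """Polynomial EU conversion: value = c0 + c1*x + c2*x^2 + ..."""
    return sum(c * raw_counts ** i for i, c in enumerate(coefficients))

def limit_check(value, red_low, yellow_low, yellow_high, red_high):
    """Classify an EU value against red/yellow limit pairs."""
    if value < red_low or value > red_high:
        return "RED"
    if value < yellow_low or value > yellow_high:
        return "YELLOW"
    return "GREEN"

# Hypothetical battery-voltage point: linear conversion from counts to volts.
volts = eu_convert(200, [0.0, 0.14])       # 200 counts -> 28.0 V
status = limit_check(volts, 22.0, 24.0, 32.0, 34.0)
print(f"{volts:.1f} V {status}")           # 28.0 V GREEN
```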
We identify three systems in the lifecycle: the Spacecraft Component Test System (SCTS),
the I&T System, and the Operational Ground Data System (GDS). The SCTS is a
collection of tools that evolve as satellite components are developed. The I&T system is
used to integrate and test the components into the complete spacecraft. The GDS is used to
fly the satellite. As each system hands off to the next in the lifecycle, the information
developed in the previous phase must be passed along. Traditionally, each hand-off has
required the receiving system to read some database representation of the device
parameters, or have them input manually, and then restructure the information for local
use.
The IMACCS team approached this expanded requirement set with the same COTS-based
architecture used in earlier prototypes. We obtained a spare component from the X-ray
Timing Explorer (XTE) mission and reproduced the functionality of its ground support
equipment using LabVIEW, a graphical instrument driver package made by National
Instruments. Using the same LabVIEW interface to support the integration with the rest of
the hardware, the new prototype populates the telemetry and command database, scripts
test scenarios, and attaches to a CORBA-based network to get data from a variety of data
interfaces (the LTIS550, IP sockets, and direct 1553 connection). The operational system
reverses the database operation and uses its information to decommutate and convert the
data. The obvious advantage to a consistent architecture is that information can be passed
along from one phase to another without manual intervention or reformatting. Moreover,
users throughout the lifecycle are using similar, if not identical, interfaces to interact with
the same spacecraft object.
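A minimal sketch of this "define once, use everywhere" idea, with invented field names and frame layout (a real system would use the CCSDS formats and the project database described above): one shared telemetry definition drives both test-side packing and operations-side decommutation.

```python
# Sketch of the consistent-architecture idea: one telemetry definition
# drives both I&T-side packing and operations-side decommutation, so
# nothing is re-entered by hand between lifecycle phases. Field names
# and layout are invented for illustration.

import struct

# One shared definition: name -> (struct format character, EU slope).
TELEMETRY_DB = {
    "bus_voltage": ("H", 0.14),   # unsigned 16-bit counts, volts per count
    "wheel_speed": ("h", 2.0),    # signed 16-bit counts, rpm per count
}

def pack_frame(values):
    """I&T side: build a frame from EU values using the shared database."""
    out = b""
    for name, (fmt, slope) in TELEMETRY_DB.items():
        out += struct.pack(">" + fmt, round(values[name] / slope))
    return out

def decom_frame(frame):
    """Operations side: decommutate the same frame with the same database."""
    values, offset = {}, 0
    for name, (fmt, slope) in TELEMETRY_DB.items():
        (counts,) = struct.unpack_from(">" + fmt, frame, offset)
        values[name] = counts * slope
        offset += struct.calcsize(fmt)
    return values

frame = pack_frame({"bus_voltage": 28.0, "wheel_speed": -500.0})
print(decom_frame(frame))  # round-trips (within count rounding) to the EU values
```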
This architecture evolves smoothly from SCTS through I&T to GDS and bypasses the
inefficiencies and risks of data restructuring. IMACCS/I&T differs slightly from the
original IMACCS. It is based on PC platforms under Windows NT, because SCTS tools
should be on platforms used by spacecraft engineers. The IMACCS team is in the process
of implementing a CORBA interface to make the network seem transparent to all users
regardless of platform. This extension of IMACCS demonstrates that the benefits of a
common architecture now extend from component testing to end of life.
CONCLUSION
In the past, satellite missions required costly, custom-built systems because each new
mission advanced the state of spaceflight art. Near the end of the fourth decade of
spaceflight, many more satellites are flown, and the domain of knowledge needed to
operate these vehicles is better bounded, allowing development of general purpose tools
and economies of scale. These tools are available as the kinds of COTS hardware and
software used in IMACCS. The use of COTS-based ground systems will expand as the
need for low cost, easily used and automated systems continues to increase. Future
missions must be flown economically, which requires that all phases in the mission life
cycle must be considered for development cost reduction and operational enhancement.
Future extensions of IMACCS will address all phases of the spacecraft lifecycle, from
system concept to end of life (from design to debris).
Within GSFC, efforts are under way to support these goals. The Landsat 7 mission
is now pursuing the use of state modeling to support the automation efforts of the ground
system using COTS tools validated on IMACCS and its extensions. NASA Goddard has
also accepted a proposal to replace the current Upper Atmosphere Research Satellite
(UARS) control center with a COTS-based system. The use of automation will be the
responsibility of the flight operations team. This team, with support from members of the
IMACCS team, will develop the state models to support the monitoring of the UARS
satellite and develop the pre-pass planning scripts that will be used to automate routine
commanding of the UARS satellite.
REFERENCES
Bracken, M. A., Hoge, S. L., Sary, C. W., Rashkin, R. M., Pendley, R. D., & Werking, R.
D., “IMACCS: An Operational, COTS-Based Ground Support System Proof-of-Concept
Project,” 1st International Symposium on Reducing the Cost of Spacecraft Ground
Systems and Operations, Rutherford Appleton Laboratory, Chilton, Oxfordshire, U. K. -
September, 1995.
Butler, M. J. et al., “A Revolutionary Approach for Providing Low-Cost Ground Data
Systems,” 4th International Symposium on Space Mission Operations, Munich, Germany -
September, 1996.
Klein, J. R. et al., “State Modeling and Pass Automation in Spacecraft Control,” 4th
International Symposium on Space Mission Operations, Munich, Germany - September,
1996.
Lin D. C., Klein, J. R., Pendley, R. D., & Hoge, S. L., “Use of Xrunner for Automation,”
4th International Symposium on Space Mission Operations, Munich, Germany -
September, 1996.
Montfort, R., “Center for Research Support, A New Acquisition Management Philosophy
for COTS-Based TT&C Systems,” National Security Industrial Association Symposium,
Sunnyvale, CA - August, 1995.
Pendley, R. D., Scheidker, E. J., & Werking, R. D., “An Integrated Satellite Ground
Support System,” 4th Annual CSC Technology Conference. Atlanta, GA - June, 1994.
Pendley, R. D., Scheidker, E. J., Levitt, D. S., Myers, C. R., & Werking, R. D.,
“Integration of a Satellite Ground Support System Based on Analysis of the Satellite
Ground Support Domain,” 3rd International Symposium on Space Mission Operations and
Ground Data Systems. Greenbelt, MD - November, 1994.
Stottlemyer, A. R., Jaworski, A., & Costa, S. R., “New Approaches to NASA Ground
Data Systems,” Proceedings of the 44th International Astronautical Congress, Q.4.404.
Graz, Austria - October, 1993.
Zillig, D. J. & Land, T., “GN Advanced Receiver Prototype II (GARP II), A Charge-
Coupled Device Programmable Integrated Receiver,” Goddard Space Flight Center - 1995.
THE ROLE OF STANDARDS IN COTS INTEGRATION PROJECTS*
Alan R. Stottlemyer
Kevin M. Hassett
ABSTRACT
We have long used standards to guide the development process of software systems.
Standards such as POSIX, X-Windows, and SQL have become part of the language of
software developers and have guided the coding of systems that are intended to be
portable and interoperable. Standards also have a role to play in the integration of
commercial off-the-shelf (COTS) products. At NASA’s Goddard Space Flight Center, we
have been participating on the Renaissance Team, a reengineering effort that has seen the
focus shift from custom-built systems to the use of COTS to satisfy prime mission
functions. As part of this effort, we developed a process that identified standards that are
applicable to the evaluation and integration of products and assessed how those standards
should be applied. Since the goal is to develop a set of standards that can be used to
instantiate systems of differing sizes and capabilities, the standards selected have been
broken into four areas: global integration standards, global development standards, mission
development standards, and mission integration standards. Each of the areas is less
restrictive than the preceding area in the standards that are allowed. This paper describes
the process that we used to select and categorize the standards to be applied to
Renaissance systems.
KEYWORDS
INTRODUCTION
Renaissance is an effort by the Goddard Space Flight Center (GSFC) Mission Operations
and Data Systems Directorate (MO&DSD) to define a reusable architecture for spacecraft
ground data systems. As discussed in Reference 1 and Reference 2, one of the major goals
of this architecture is to be flexible enough to support missions ranging from very large,
such as the Earth Observing System (EOS), down to very small, such as those in the $8M
to $10M range.
* This paper is declared a work of the U. S. Government and is not subject to copyright protection in the United States.
Existing standards for integration have addressed infrastructure levels and provided little
definition for application-level interfaces and functionality. Past experience demonstrated
that standardization at the application level frequently had the effect of severely limiting
flexibility and use of COTS components because the standards were not widely supported.
To achieve an architecture that is truly open and flexible enough to meet the needs of any
mission requires the freedom to choose computer platforms, information repository and
application products, and commercial mechanisms supporting application interconnection
and data transport. To support this freedom to choose, the Renaissance Team selected a
set of standards and defined their use to meet two purposes: provide requirements for the
selection of COTS products and provide an integration framework.
In a COTS integration project, the use of standards takes on a different role. Standards are
used to assure that the products to be integrated are based on a common framework that
facilitates integration. Standards become especially important in providing ground data
systems where the system design is instantiated multiple times, each time with variations.
In cases like these, judicious application of standards helps to promote reuse of products
across the multiple instantiations. The standards also make it easier to change out
components within the framework to meet cost and/or performance constraints. The
integration team need not be expert in the implementation of the standard. Instead the
implementation of the standard is the responsibility of the COTS vendor. The ground
system developer is responsible for determining which standards are important to the
system and working with the vendors to assure that the selected products meet the
standards. A major factor in achieving integration of COTS products and remaining
independent of any single vendor is to pick standards that are widely supported by
vendors.
Standards play four roles in a COTS integration project:
(1) Framework Selection -- Standards define how systems are to be integrated. For
recurring systems, the lowest level of standards is consistently applied to all
instantiations of the system. For example, the Renaissance Team chose the Internet
Protocol (IP) as the basic standard for integration. All ground systems to be built to
Renaissance Standards will be based on IP.
(2) Product Selection -- Standards are very important in selecting products. Since
the standards primarily define how the different COTS products exchange data, then
the integration team should select products that support the same set of standards.
This will minimize the amount of interface code that needs to be developed.
(3) Development in support of integration -- In any COTS integration effort, some
interface software will need to be developed. The standards to be applied to the
development of this “glueware” are similar to standards for traditional development.
Since most COTS products will provide a programmer’s interface, the role of
standards in this case is primarily limited to the definition of the development
environment (language, target platform, etc.). For this case, the product is likely to
be the implementation of a target standard interface.
(4) Product Line Development -- In some instances, it may be appropriate for a
ground system integrator to develop some products as a specialized line. These
instances include the development of a product that meets a need that is not
provided by any COTS products, or the development of a product using a new
technology. In either case, the development team must adhere to the standards of the
organization, so that it is ensured that the new product will integrate with other
products selected for inclusion in a ground system.
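Role (2), product selection, amounts to checking each candidate product's supported standards against the mission's mandatory set. A toy illustration follows; the product names and standards lists are invented:

```python
# Toy illustration of standards-driven product selection: keep only the
# candidate products that support every mandatory standard. Product
# names and supported-standards sets are invented for illustration.

MANDATORY = {"IP", "POSIX"}

CANDIDATES = {
    "ProductA": {"IP", "POSIX", "SQL"},
    "ProductB": {"IP", "X-Windows"},          # missing POSIX
    "ProductC": {"IP", "POSIX", "X-Windows"},
}

compliant = sorted(
    name for name, supported in CANDIDATES.items()
    if MANDATORY <= supported   # the mandatory set is a subset of supported
)
print(compliant)  # ['ProductA', 'ProductC']
```

Selecting products that share the same standards set in this way minimizes the amount of interface "glueware" that must be developed.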
Standards selection is both critical to success, and a potential obstacle. Standards are
needed to support changes in system configurations, but poorly supported standards may,
in fact, preclude access to off-the-shelf components. Often it is better not to pick a
standard at all than to pick one that severely limits the product choices available to the
integrator.
The approach we followed to select standards was to match potential standards to the
objectives of the architecture, and assess the value and cost of applying a candidate
standard. In a COTS-based ground system architecture, standards serve as enabling
technologies for four critical objectives. The first objective is to maximize reuse of off-the-
shelf components. The second objective is to ensure that the many elements of a system
can be integrated using a minimum set of common interfaces. The third objective is to
support a high level of mission configuration flexibility by allowing free interchange of
both custom and commercial applications in the configuration of a mission system. The
fourth objective is to support development of software elements that address requirements
not met by off-the-shelf components, or that improve technology implementations. Given
the Renaissance approach to mission implementation as discussed in References 1 and 2,
this last objective implies both platform vendor independence and selection of application
interfaces appropriate for wide integration. The complete set of standards selected for the
Renaissance effort is included in Reference 3.
The approach used to establish the standards for the Renaissance Architecture was to
divide standards along two lines: a categorization that defines the role of the standard in
the architectural framework and a level that defines the applicability of the standard.
Figure 1 shows the categorization of standards. As can be seen, this categorization has two
dimensions: Global vs. Mission-Specific and Integration vs. Development.
[Figure 1. Categorization of standards:]

                Global                                   Mission
Integration     Standards to be applied universally      Additional standards that are needed
                for integrating all missions.            to help select products that meet the
                                                         unique needs of a given mission.
Development     Standards to be applied when             Standards to be applied when
                developing software to be part of        developing mission-unique software.
                the MO&DSD product line.
Standards have also been separated into integration and development categories.
Integration standards are those which have the primary effect of standardizing interfaces
for both COTS and any developed components. Development standards are those which
primarily guide the development of mission components, either globally or for a specific
mission. The distinction between integration and development standards is important.
Integration standards are the most important ones to the selection of products. To
overspecify the standards to be applied by vendors in development could unnecessarily
eliminate a product from selection. For example, the ANSI C Language standard is
primarily for developers to ensure that code will compile across a wide variety of
platforms. However, for a COTS product, the real concern is that the product adhere to
standards which ensure that it will operate across a wide variety of platforms and integrate
with other products. Therefore, to insist that vendors only deliver software written in a
particular language would unnecessarily limit the available selection.
The first class of mandatory standards forms the core commonality around which the
architecture is founded. The OSI reference model was a means for identifying these. The
primary approach was centered on the Internet Protocol suite because of the wide support
in the commercial world for all levels of the suite. In particular, the IP (OSI layer 3) was
made mandatory, along with supporting protocols in OSI layers 3 and 4. Similar protocol
standards were identified for system management, and other application areas where
standards with near universal support exist. The overall intent was to keep the number of
mandatory standards limited to those with truly global support, and to select only those for
which there was a clear benefit to establishing a standard. The flexibility within the set has
been given at least preliminary trial in the prototyping environment.
The mandatory options for mission implementation are a list from which selections are
made for a given system, but not all options are supported within the system. The
distinction can be clarified by an example: the choice of OSI layers 1 and 2 for system
implementation. The options are many, e.g., Ethernet and FDDI, and standards exist for
each. A given mission system will not implement all possibilities, but rather those which
provide the needed capabilities for the least cost. Options are important because of
variations in capability between COTS products, but are only feasible where the
alternatives are available in COTS products.
The available options are a list of standards with reasonable support that offer significant
benefits but also significant limitations. This list is intended simply as options, which may
or may not be chosen for areas not covered by the mandatory standards. Of particular
interest are relatively untried standards with limited implementations which might meet a
particular need of a mission.
Figure 2 shows the relationship between the categories and levels of standards. As can be
seen, most of the global integration standards are mandatory, while the lowest level of
applicability is at the mission integration level, where standards must be chosen to meet
the specific needs of the mission.
[Figure 2. Levels of standards by category:]

                Global                                   Mission
Integration     Mostly Mandatory                         Mandatory Options and Available
                                                         Options
Development     Mostly Mandatory Options and             Mandatory Options and Available
                Available Options; however, must         Options
                meet more than one of the options
Given that technology is always advancing, new standards may be added as necessary. To
minimize the risk of adding new standards, the categorization hierarchy will be used to
evolve standards. New standards may be experimented with at a mission level. If the
standard proves to be useful in a wider range of missions, then the standard will evolve to
the global level. Figure 3 shows an evolutionary approach that allows the introduction of
new standards and technology at a mission level and then migrates these standards to the
global level. New standards are always being introduced (1) and experimented with on
specific missions. If successful, these standards may be elevated to the global categories
(2). Finally, if a development standard gains wide favor with COTS vendors, it may be
elevated to an integration standard (3).
[Figure 3. Evolution of standards: new standards (1) enter as mission-level experiments in
either the integration or development category; successful experimental standards are
elevated (2) to global integration or global development standards; a global development
standard that gains wide COTS vendor support may be elevated (3) to an integration
standard.]
CONCLUSION
We have developed an approach for selecting standards that supports our architectural
approach. To select the standards we looked for:
ACKNOWLEDGMENTS
REFERENCES
Timothy B. Bougan
Science Applications International Corporation
ABSTRACT
Testing avionics and military equipment often requires extensive facilities and numerous
operators working in concert. In many cases these facilities are mobile and can be set up at
remote locations. In almost all situations the equipment is loud and makes communication
between the operators difficult if not impossible. Furthermore, many sites must transmit,
receive, relay, and record telemetry signals.
To facilitate communication, most telemetry and test sites incorporate some form of
intercom system. While intercom systems themselves are not a new concept and are
available in many forms, finding one that meets the requirements of the test community (at
a reasonable cost) can be a significant challenge. Specifically, the test director must often
communicate with several manned stations, aircraft, and remote sites, and/or simultaneously
record all or some of the audio traffic. Furthermore, it is often necessary to conference all
or some of the channels (so that all those involved can fully follow the progress of the
test). The needs can be so specialized that they often demand a very expensive “custom”
solution.
This paper describes the philosophy and design of a multi-channel intercom system
specifically intended to support the needs of the telemetry and test community. It discusses
in detail how to use state-of-the-art field programmable gate arrays, relatively inexpensive
computers and digital signal processors, and some other new technologies to design a fully
digital, completely non-blocking intercom system. The system described is radically
different from conventional designs but is much more cost effective (thanks to recent
developments in programmable logic, microprocessor performance, and serial/digital
technologies). This paper presents, as an example, the conception and design of an actual
system purchased by the US government.
KEYWORDS
While upgrading its fleet of E-9A telemetry relay aircraft at Tyndall AFB, Florida, the US
Air Force recognized a requirement to replace the existing intercom system. This system
was inadequate for a number of reasons (most notably its lack of ability to monitor all
available audio sources). The Air Force expressed a desire to migrate to a digital system
for noise immunity and improved reliability. In addition to operator stations in the cockpit
and equipment bay, the new system must tie in twenty-two different radios and aircraft
guidance systems.
A little research soon proved that there was no “off-the-shelf” solution available for this
specific application. Furthermore, those companies that produced a line of digital
intercoms that could be tailored to work quoted $200,000 (and up) for a single system.
This price was significantly out of the USAF’s budget for the intercom task.
Fortunately, thanks to the emergence of inexpensive and very powerful digital signal
processors and the evolution of field programmable gate array technology, it is possible to
design and build a fully digital and completely non-blocking intercom system at a
surprisingly modest cost. This paper discusses the broad design considerations of such a
system (which we called “FlexComm II” or “FCII”).
FlexComm II (hereafter referred to as FCII) is a completely digital system, but the signals
going to and from speakers, radios, and microphones must be analog. FCII has “remote
stations” close to each analog interface. The remote station’s job is to digitize the
incoming signals and transfer them to the Central Control Unit (CCU). The CCU mixes the
signals and returns them (again, digitally) to the remote station which, in turn, converts the
digital data back into an analog signal for output. The remote station may have a keypad or
may interface with a radio.
Data is passed between the remote stations and the CCU over a 1 Mbit/s, point-to-point,
asynchronous bus called the “ComBus”. The CCU has one ComBus interface for each
remote station and acts as the “master” or “bus controller” for each connection. The
overall configuration is a “star” network.
The CCU itself consists of two processors: a high-speed digital signal processor (DSP)
and a general purpose computer. The DSP must do all the audio mixing and volume
adjustment. The general purpose computer communicates with the remote stations to
determine how the audio is to be mixed and scaled, and then passes the processing
information to the DSP.
Keypad data, like voice data, is passed over the ComBus and processed in the CCU. The
CCU will return commands to the remote stations (to illuminate keys or transmit on
radios). Everything is done digitally, including volume controls for the headsets and
speakers.
The CCU represents a single point of failure. The best way to address this weakness is to
make the CCU dual-redundant. The second unit will be operating in a “listen-only” mode.
If it does not receive any ComBus traffic within a timeout period (say, 1 second) it will
automatically assume control and disable the ComBus drivers on the original master.
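In pseudocode (a sketch only; the routine names and exact timeout handling are illustrative assumptions, not the actual implementation), the backup CCU’s watchdog behaves roughly as follows:

```
LAST_TRAFFIC = Current_Time;
while TRUE do
    if ComBus_Traffic_Seen then
        LAST_TRAFFIC = Current_Time;
    if Current_Time - LAST_TRAFFIC > TIMEOUT then
        Disable_Primary_ComBus_Drivers;
        Become_Active_Master;
end do loop;
```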
SYSTEM ARCHITECTURE
One CCU is always required. FCII may have an additional CCU which constantly
monitors ComBus traffic and can “take over” if the primary CCU fails. All other units are
optional and may be added in quantity (as appropriate). For the E-9A, the FCII system
consists of two CCUs (one primary and one redundant), four Operator/Pilot stations, three
Radio Stations (each capable of interfacing to eight radios), and four passive interfaces
(one off the wing, one off the tail, one near the jumpseat, and one next to the passenger
seats).
[Figure: FCII system architecture. A star network in which the Central Control Unit (CCU) and a Backup CCU connect to the Operator Stations, Radio Stations, and Passive Stations.]
OPERATOR/PILOT STATION
The “operator station” is a remote station with a keyboard assembly. The keyboard allows
the operator to pass mode and volume commands to the CCU. The operator station has a
single analog input (for the microphone) and two analog outputs (one for a headset and one
for a speaker). The output stages are identical, but two are provided so the headset and
speaker may have separately controlled volume levels.
Keyboard Assembly
The keyboard assembly consists of a circuit board and a set of four off-the-shelf keypads
designed for rugged applications. Each key contains two LEDs (one red and one green).
The LEDs are controlled by a microcontroller (which also reads the keystrokes). Every
control is digital (even volume control). The DSP (in the CCU) multiplies the digital
output signal by the selected volume scaling factor before transmitting the data back to
the operator station. When the data is converted to analog by the DAC, it is already at the
selected volume level. Keeping everything digital also means the microcontroller does not
process the keypress data... that job is entirely up to the CCU.
[Figure: operator keypad layout. Keys include Choose Talk, Spkr Vol, Hdst Vol, Set Vol,
NET 1, NET 2, NET 3, EMERG, UHF A1, UHF A2, UHF G1, UHF G2, and Hot Mic; the panel is
labeled FlexComm II.]
The keyboard assembly has two National MM5486 LED Display Driver chips (each can
drive up to 33 LEDs). The drivers take a serial, 3-wire interface from the microcontroller
and latch the state of the LEDs. The chips are cascadable so only three lines are required
to control both chips. The brightness control is used to pulse-width modulate the LED
power (to allow the LEDs to be “dimmed” inside the aircraft cockpit).
The microcontroller in the keyboard assembly runs a simple program that scans the keypad
rows and columns, drives the LED display drivers, and exchanges keypress and LED data with
the ComBus FPGA.
[Figure: keyboard assembly block diagram. The microcontroller drives the two cascaded LED
drivers over the three-wire serial interface (Data, Data Clock, and Strobe); a pulse width
modulation circuit, fed by the 5 VAC cockpit lighting supply, dims the LEDs; seven row and
four column lines go to the STACO keypads.]
The CCU will send commands back via the ComBus. The commands control the state of
the LEDs. The microcontroller will read each command and turn the appropriate LED or
LEDs on or off.
The three-wire communication between the microcontroller and the ComBus FPGA is
controlled by the microcontroller. The three lines are DATA_ENABLE, CLOCK, and
DATA. When the microcontroller raises DATA_ENABLE, the ComBus FPGA prepares
to receive a 16 bit command from the microcontroller (to send to the CCU via the
ComBus). The microcontroller will shift each bit into the ComBus FPGA on the DATA
line (each rising edge of the CLOCK line will shift the data in). After all sixteen bits have
been shifted in, the microcontroller will continue to toggle the CLOCK line 16 more times.
During these clocks, the ComBus FPGA will shift 16 bits onto the DATA line (for the
microcontroller to read). This transfer will take place 20 times per second, whether or not
there is any “new” data to transfer. (If there is no valid data to transfer, the VALID bit in
the data word will not be set, and each processor will ignore that word).
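Seen from the microcontroller, one such exchange can be sketched as follows (a hypothetical illustration of the transfer just described; the routine names are not from the original design):

```
Raise(DATA_ENABLE);                     { FPGA prepares to receive a command }
for BIT=1 to 16 do
    Put_Bit(DATA, COMMAND[BIT]);
    Pulse(CLOCK);                       { each rising edge shifts a bit in }
end do loop;
for BIT=1 to 16 do
    Pulse(CLOCK);                       { FPGA shifts its data out }
    RESPONSE[BIT] = Get_Bit(DATA);
end do loop;
Lower(DATA_ENABLE);
if VALID_Bit_Set(RESPONSE) then Process(RESPONSE);
```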
The operator station contains a circuit board that is broken into three sections: an analog
section, a discrete section, and a ComBus section.
[Figure: operator station circuit board. The microphone input (from a microphone or radio,
through a transformer and mic amp) feeds one Analog Interface Circuit; that AIC’s output
drives the headset amplifier (to a headset or radio, through a transformer). A second
Analog Interface Circuit (its analog input unused) feeds the power amp for the speaker.
The radio TX relay control and mic key detect discretes, both AICs, and the keyboard
assembly all connect through the ComBus FPGA (XC4003E) to the Central Control Unit.]
The analog section contains two Texas Instruments TLC320AD55C Analog Interface
Circuits (AICs). These AICs each contain one 16-bit ADC and one 16-bit DAC with built-
in filters and 64X oversampling. One AIC provides both the audio input ADC (for the
microphone or radio) and the audio output DAC (for the headset or radio). The second
AIC provides the audio output for the speaker (its input ADC is not used). The AICs each
have a three wire serial interface connected directly to the ComBus section.
The analog section also contains a microphone pre-amplifier that drives the ADC on one of
the AICs. The ADC converts the microphone input into digital data that is transferred to
the CCU over the ComBus. Microphone “gain” can be adjusted digitally by the CCU to
compensate for weak or strong microphones and variations in the different operators’
speech volumes.
The two DACs receive data from the CCU over the ComBus. The data coming back has
been mixed with all other selected signals (including sidetone from the microphone). One
DAC drives the headset amplifier; the other one drives the speaker amplifier. Keeping the
two separate allows independent digital control of both volume levels and allows the
speaker to have a different level of sidetone (to prevent feedback). The input and output
amplifier circuits have a single gain potentiometer which should only be set during system
calibration. During operation, all volume control is digitally implemented by the DSP.
Discrete Section
The discrete section is very simple and includes a digitally controlled relay used to “key” a
radio transmitter and a simple circuit to detect when a headset key is pressed. The control
bits go to and come from the ComBus interface (that is, the CCU can command the relay
to close, thus “keying” the radio). The CCU also receives the state of the microphone key
over the ComBus and processes the information appropriately.
ComBus Section
RADIO STATION
Since the voice data coming from the radio is already amplified, it does not require the
microphone pre-amp circuitry on the remote station board (and that circuitry is disabled).
Both the input and output signals may optionally pass through a 600 ohm audio
transformer for noise isolation. Also, the radios do not require two output signals (there is
no speaker to drive), so the second output is left unconnected.
The remote station board includes a board-mounted reed relay that is controlled by a bit on
the ComBus. This relay, when closed, causes the radio to transmit.
PASSIVE STATION
A “passive station” is an Operator Station circuit board without a keyboard assembly. Its
purpose is to provide an easy connection to intercom traffic at strategic locations around
the aircraft. Since the Passive Stations do not have keyboard assemblies, the audio
connections can not be set by the user. Each passive station can be either preset in the
FCII configuration file, or can be set to “mirror” an operator station. Passive stations will
not have speakers.
CENTRAL CONTROL UNIT
[Figure: Central Control Unit block diagram. A TI 320C50 Digital Signal Processor and an
embedded Intel 486 based PC with a flash disk connect to a ComBus section providing 32
ComBus ports (one per remote station).]
The Central Control Unit (CCU) is the heart of FCII and contains a high-speed Digital
Signal Processor (DSP) and an embedded general purpose processor. The CCU contains
one or more ComBus adapters to connect to remote stations.
Embedded PC
The embedded computer in FCII is based on an Intel 486 processor and runs embedded
MSDOS (the control software was developed and tested on a desktop PC, and then
transferred to the embedded system). The CCU uses a solid-state “flash memory” disk to
store the control program and configuration data.
The embedded computer maintains an internal table detailing how all voice signals in FCII
are to be combined and distributed. It specifically communicates with each operator station
(over the ComBus) and interprets all keypresses. For instance, if the pilot presses the NET
1 button, the computer knows that all other stations on NET 1 must now be added to the
pilot’s output audio. It modifies the processing table in the DSP’s memory space to enable
the additional signals.
The computer also maintains an intricate array of volume levels (since every signal going
to every station may be independently adjusted).
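Conceptually this is a two-dimensional table, one scaling factor for each source at each destination (a sketch only; the paper does not specify the actual memory layout):

```
{ MIX_VOLUME[DEST, SRC] scales audio from station SRC into the mix for station DEST }
for DEST=FIRST_STATION to LAST_STATION do
    for SRC=FIRST_STATION to LAST_STATION do
        MIX_VOLUME[DEST, SRC] = DEFAULT_VOLUME;
    end do loop;
end do loop;
```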
Embedded PC Software
The code running in the embedded PC needs to constantly monitor the keystrokes from
each Operator Station and respond to those keystrokes in an appropriate manner. Since
keystrokes will only come as fast as they can be pressed by the operator, the embedded
computer will execute the following code at a rate of twenty times per second:
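A sketch of that loop, in the same pseudocode style as the DSP listing in this paper (the routine names match those discussed below, but the listing itself is a reconstruction, not the original source):

```
for STATION=FIRST_STATION to LAST_STATION do
    if Key_Pressed(STATION) then
        KEY = Read_Key_Stroke(STATION);
        Process_Key_Stroke(STATION, KEY);
        Update_DSP_Control_Table(STATION);
        Queue_LED_Command(STATION, KEY);
end do loop;
```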
The “Key_Pressed” function accesses the ComBus Adapter to see if a keypress has been
detected for the particular Operator Station. If so, the “Read_Key_Stroke” routine again
accesses the ComBus Adapter and retrieves the value of the key pressed.
Processing the key stroke (Process_Key_Stroke) is a multifaceted routine and can vary
dramatically depending upon the key that was pressed. For instance, if the “Up_Volume”
key is pressed and the station is in normal mode, that means the operator wants to increase
the volume to his headset. The computer will increment the volume factor (that the DSP
will multiply with the final output signal) and then “stuff” the new value in the DSP’s local
memory table (Update_DSP_Control_Table).
The computer must also tell the operator station how to illuminate the LEDs (in the keypad
keys). Since a single keystroke could cause many keys to change at once, it’s not practical
to try to send all the updates immediately (the remote station couldn’t keep up, and the other
stations would be “locked out” until all the processing was done).
Queue_LED_Command queues commands back to the operator station (telling it how to light each
LED). The computer sends the commands by writing them to the proper address on the
ComBus Adapter. Since this routine will often require multiple commands to be sent to the
station, the commands are “queued” and sent as time permits. The embedded PC will
repeat the above loop for each Operator Station in the system.
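The LED command queuing described above can be sketched as a simple per-station FIFO (the names and structure are illustrative assumptions):

```
Queue_LED_Command(STATION, CMD):
    Append CMD to LED_QUEUE[STATION];

{ executed as time permits, between keystroke polls }
for STATION=FIRST_STATION to LAST_STATION do
    if LED_QUEUE[STATION] is not empty and ComBus_Port_Ready(STATION) then
        Write_ComBus_Command(STATION, Remove_First(LED_QUEUE[STATION]));
end do loop;
```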
DSP Software
Since the DSP has a formidable task, care must be taken when building the main program
loop. The code is all assembly language (for optimum speed). All voice data from each
ComBus interface must first be transferred to on-chip RAM. This is a relatively slow
process, since accessing the ComBus interface requires “wait-state” memory cycles. All
volume control values are transferred from the embedded PC interface (but this process is
completed only four times per second). Once all data is in memory, the DSP can execute at
full speed (with no wait states). At the completion of calculations, the combined voice data
(for the headsets and the speakers) is moved from internal RAM back to the ComBus
interfaces.
Get_All_Digital_Audio;
for X=FIRST_STATION to LAST_STATION do
    HEADSET_AUDIO[X]=0;
    SPEAKER_AUDIO[X]=0;
    for Y=FIRST_STATION to LAST_STATION do
        HEADSET_AUDIO[X]=HEADSET_AUDIO[X]+INPUT_AUDIO[Y]*VOLUME[Y];
        SPEAKER_AUDIO[X]=SPEAKER_AUDIO[X]+INPUT_AUDIO[Y]*VOLUME[Y];
    end do loop;
    HEADSET_AUDIO[X]=HEADSET_AUDIO[X]*HEADSET_VOLUME[X];
    SPEAKER_AUDIO[X]=SPEAKER_AUDIO[X]*SPEAKER_VOLUME[X];
end do loop;
Put_All_Digital_Audio;
ComBus Adapter
The ComBus Adapter is a circuit board containing four Xilinx XC4013E FPGAs. Each
XC4013 has sufficient resources to implement eight individual ComBus interfaces in a
single chip. The four chips share control, address, and data line drivers to buffer signals
between themselves and the DSP. Each ComBus board can communicate with up to
32 remote stations.
COMBUS DEFINITION
The “ComBus” is a 1 MHz, point-to-point, asynchronous, bit-serial data bus between each
remote station and the CCU. The ComBus section in the CCU is actually a ComBus
server/controller... that is, it has a separate ComBus port for each remote station and
controls all traffic on the ComBus. The remote stations are ComBus “slaves”. They do not
initiate any traffic, but listen for a transmission from the CCU and then immediately
respond.
• The remote station constantly monitors the ComBus for a “sync” character. When a
sync character is detected, the remote station receives 56 bits of data, broken down as
such:
Bits Description
0-15 Audio Out
16-31 Speaker Audio Out
32-47 Keyboard Command
48 Radio Transmit Control
49-55 Reserved for Expansion
• After receiving the data, the remote station will transmit a “sync” character followed by
40 bits of data, broken down as such:
Bits Description
0-15 Audio In
16-31 Keyboard Data
32 Microphone Keyed
33-39 Reserved for Expansion
The “sync” character is one and one half bit times long (low for 1/2 bit time, then high for
1/2 bit time, and then low again for 1/2 bit time). The quiescent state of the bus is low.
When the remote station detects the sync, it has enough clock information to receive the
next 56 bits. It takes 57.5 clocks to transmit data from the CCU to the remote station, and
41.5 clocks to transmit data from the remote station to the CCU. Each transfer must take
place 8000 times/second, for a total required bit rate of 792,000 bits/second. This leaves
26 bit times per transfer for “turn-around” time and propagation delay. At 1 MHz, 26 bit
times equals 26 usec.
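The bit budget works out as follows (restating the arithmetic above at 1 usec per bit time):

```
CCU to remote:  1.5 (sync) + 56 bits = 57.5 bit times
Remote to CCU:  1.5 (sync) + 40 bits = 41.5 bit times
Per transfer:   57.5 + 41.5          = 99 bit times
Required rate:  99 * 8000            = 792,000 bits/second
Spare time:     (1,000,000 / 8,000) - 99 = 26 bit times = 26 usec per transfer
```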
Synchronization
The ComBus transfers data on a single twisted pair (differential). That means it is
asynchronous and the data “clock” must be encoded within the data itself. The ComBus
Interface (for both the CCU and remote station) must “synchronize” its own clock with the
incoming data before the data can be recovered.
After the remote station has clocked in all 56 data bits, it must immediately respond by
clocking out its own sync character and 40 data bits.
[Figure: synchronization flowchart. The receiver waits for four successive “low” samples
before accepting the next sync transition.]
Transmission
To achieve noise immunity and reasonable distance, the ComBus should be transmitted
differentially between the CCU and remote station. Tri-stateable drivers are required since
data needs to flow in both directions on the same line (the ComBus is a “half-duplex” bus).
If you allow 5 usec for turn-around logic, the ComBus still has 21 usec for propagation
delay. If you assume a constant propagation delay of 1 nsec per foot of copper wire, that
budget covers a total signal path of 21,000 feet; allowing for travel in both directions,
the cable between the CCU and a remote station could be roughly 10,500 feet, or about two
miles. That greatly exceeds most drivers’ capabilities, so it is reasonable to expect that
distance restrictions will depend upon the differential drivers chosen to implement the
ComBus.
CONCLUSION
Abstract
Shrinking budgets and labor pools have impacted our ability to perform experiments at the
Nevada Test Site (NTS) as we did previously. Specifically, we could no longer run heavy
cables to remote data acquisition sites, so we replaced the cables with RF links that were
transparent to the existing system, as well as being low-cost and easy to deploy. This
paper details how we implemented the system using mostly commercial off-the-shelf
components.
Key Words
Introduction
Changing priorities, shrinking budgets and limited resources have forced us to change the
way we conduct experiments at NTS. Once, we ran cable over ground from a centrally
located van to remote sensor modules. The cables carried power and RS-422 twisted pair
data lines. We no longer have the manpower or money for this. We needed a more
flexible, easily-deployed and reusable mechanism of data exchange, so we looked to
telemetry.
Spectrum Space
The assigned frequencies (MHz) were: 164.84375, 165.80625, 166.41875, 166.65625,
167.19375, and 171.21875.
These are only 10 kHz “splinter” channels, so the data rate was limited to 2400 baud. Our
hard-wired system ran at 9600 baud, but it was possible to operate it at lower baud rates.
Figure 2
The RF link consisted of small, 4 watt 160-170 MHz transceivers interfaced to packet-
switching modems. The modems were originally designed for ham radio use, but the
firmware had been enhanced by the manufacturer for industrial applications. The
technology has long been field proven by amateur radio operators, and the units are
inexpensive ($300) and readily available. These devices are called Terminal Node
Controllers (TNCs).
The TNCs handle most communication functions autonomously - they perform CRC
checking, automatic handshaking and packet resending. They can be programmed to
establish connection on power-up, and appear transparent to the RS-232 channel they are
inserted into (except for retransmit delays). This was an important feature for us. We did
not have the resources to make significant changes to the existing hard-wired system to
accommodate wireless connections.
The transceivers that we used were Motorola R-NET units (see figure 4). They are small,
low-cost ($400), and low-power. Since the system is powered by solar-charged batteries,
low power consumption was important to us. For this reason, we chose Paccomm as the
TNC supplier. They offered CMOS versions of their TNCs which consumed far less
power than their competitors. The TNCs interface to the transceivers through audio
signals.
Figure 4
Figure 3
Packaging
The transceiver, TNC, battery, and solar charger are mounted in a NEMA box. The box is
mounted on a small ‘A’ frame structure made of anodized aluminum (figure 5). It
resembles a small step ladder and it can be folded up for easy transport and compact
storage. An 18 watt solar panel is mounted on the other side of the frame. A whip antenna
is installed on the top. The antenna is removable and the battery is held by a hasp that is
easily unlatched so that the battery can be removed quickly. These items are removed
when the unit is moved or stored.
Solar Power
The solar charging circuit is basically a linear regulator controlled by a simple state
machine, which puts the battery in bulk charge mode, then finishing mode, then float charge
mode. The solar charger was the one item that we did not purchase off-the-shelf. We
combined the RS-422/RS-232 converter circuitry, the solar charger, and a step-down
switching regulator that powered the remote data acquisition unit and put all the circuitry
on one pc board. The charger also has a low-voltage disconnect to keep the battery from
being over-discharged by disconnecting the load if the battery voltage drops below a preset
voltage.
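The charge sequence can be sketched as a three-state controller (the threshold names are illustrative assumptions, not values from the actual design):

```
case CHARGE_STATE of
    BULK:      apply full available current;
               if BATTERY_V >= ABSORB_V then CHARGE_STATE = FINISHING;
    FINISHING: hold ABSORB_V;
               if CHARGE_I <= TAPER_I then CHARGE_STATE = FLOAT;
    FLOAT:     hold FLOAT_V;
end case;
if BATTERY_V < DISCONNECT_V then Disconnect_Load;   { low-voltage disconnect }
```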
There are plenty of sunny days in the Nevada desert, but the first time we fielded the
system, it was unusually cloudy for several days. The batteries ran out at the end of the
third day. It seems that battery power is always less than the calculated amount. We are
currently working on a more efficient switching regulator design for the battery charger.
Where there is lots of sun, there is lots of heat. We had to paint the NEMA boxes with
reflective paint to reduce the heat in the box. Battery life is shortened by heat. The battery
charger must respond to changes in temperature and reduce the charge termination voltage
or the batteries will be “cooked”. Also, solar output voltage drops as temperature
increases.
The battery holder was mechanically designed so that the battery could be removed easily.
After the experiment was over, we collected the batteries and stored them in a shelter with
a “smart” battery charger to keep them at a float charge. Although the batteries are sealed,
they are lead-acid, and regulations dictate that we keep all lead-acid batteries in an open
shelter so that hydrogen buildup can not occur. So we were kicked out of our
air-conditioned warehouse and had to settle for a three-sided shelter. Now we will need to
get a charger that can sense temperature, and adjust the float voltage accordingly. Our
temporary solution is to lower the float voltage so that high temperatures will not cause the
batteries to dry out.
We were able to test our configuration of the TNCs before we had our RF transceivers and
licenses by coupling the audio input of one TNC to the audio output of the other, and
turning up the modulation adjustment so that we had plenty of audio signal. Then we tied
them to two laptop computers and simulated the data exchange between a remote data
acquisition unit and the central logging computer.
When the transceivers were connected to the system, we used an old Cushman
communication monitor to set the TNC modulation levels. This instrument was very
helpful in observing the activity of the communication link between remote units and the
base station. Our ears learned to recognize the demodulated audio signatures of an
attempted startup of a link, a retry, etc.
Several problems arose when we fielded the system. First, the solar chargers would burn
out when there was full sunlight and the local transmitters were being tuned up. They did
not burn up if there was low light, or if the transmissions were short bursts. We suspected
that RF energy was being absorbed by the solar panels and somehow causing the
maximum voltage level of the battery charger chip to be exceeded. We installed RF
filtering on the solar input lines to the charger circuit, and the problem has not recurred.
Secondly, we had problems with the software configuration of the TNCs. These devices
have dozens of parameters that can be modified, and programming errors can cause
puzzling results. In our case, the devices were dropping out of transparent mode. The
result was that certain control characters, such as line feeds, were being filtered out of the
data stream. This caused problems for our existing system software. This sounds like a
simple problem, but it led us in circles for a long time, since we suspected dropped
characters, buffer overruns, modulation level problems, etc.
Thirdly, we had adjacent channel interference. Although we always sent commands to the
remote units sequentially so that the remote units would reply sequentially and avoid
colliding with each other, the TNCs perform automatic retry on error, and the retry from
one unit might occur when another unit is transmitting. The problem results from lack of
spacing between the antennae at the van. We investigated the solution that repeater
stations employ: resonators. At the frequencies we are using, the resonators are huge tanks
that would scarcely fit into our van, much less stay in tune while traversing the rough roads
of the desert. We alleviated the problem by intelligent deployment - the remote units that
were closest in frequency were placed farthest apart, so that the corresponding antennae at
the van were pointing 180 degrees apart. The three-element beams used at the van have a
17 dB gain differential, front to back. Ultimately, we will have to work out a scheme for
moving the antennae to different masts, but we have yet to come up with one that does not
impact ease of deployment.
Unlike the heavy, metal-jacketed cables that we use in the hard-wired systems, the light
weight wires from the solar panel to the NEMA box on the remote unit can not be run near
the ground. Rodents will gnaw through them.
Crows and solar panels do not go together. The crows like to sit on the top of the A frame
of the remote units and soil the solar panels. We alleviated the problems by drilling small
holes along the upper solar panel support bar, and putting tie-wraps through the holes. The
long tails of the tie-wraps were not trimmed, but were allowed to protrude out at a 45
degree angle along the top of the A frame. A crow would attempt to land on one of these,
which would bend under its weight, and it would fly away (sometimes).
Conclusion
Dwight M. Peterson
Naval Warfare Assessment Division
Instrumentation Systems
Corona, CA 91718-5000
ABSTRACT
The assessment of weapons and combat system performance requires the collection and
networking of data from shipboard and land based locations. New programs being
introduced and tested, such as the Cooperative Engagement Capability, Theater Ballistic
Missile Defense, and All Service Combat Identification Evaluation Team, generate
gigabytes of data which must be reduced, transferred, and analyzed. Test conductors,
headquarters personnel, and military commanders require analysis results in near real time
to evaluate system performance during a test or exercise. This paper will discuss
communication system applications for shipboard data collection and networking to
collect, reduce, and transfer the large amounts of data generated during current and
planned Navy and Joint exercises. Examples of using 56 Kbit/Second International
Maritime Satellite, range based line-of-sight networking, and integrated workstation
applications will be addressed and lessons learned shared from actual installation and use.
INTRODUCTION
Naval Warfare Assessment Division (NWAD) is tasked as the analysis agent for all Navy
tactical missiles and the AEGIS combat system. Analysis agent responsibilities include
providing performance assessment to program managers and the Fleet during all
weapons/combat system life cycle phases. Key areas of concentration include developing
test plan scenarios, data management plans, providing on-site analysis during Development
Testing (DT)/Operational Testing (OT), and providing in-depth analysis of DT/OT test
results. Information obtained from this analysis is used to insure test plan objectives are
met and deficiencies are corrected during missile/combat systems development,
production, and in-service phases.
Acquiring test data from ships at sea is an integral part of the analysis process.
Traditionally, Time Space Position Information (TSPI), missile/target telemetry, and
weapons/combat system data sources are recorded on-range and aboard ship. NWAD
analysts are present on-range or aboard ship during Test and Evaluation (T&E) or Fleet
training exercises to perform a “quick look” assessment of exercise results. “Quick look”
analysis results are provided to exercise participants within 24 hours after each test via
Naval message. A final report is provided within 4-8 weeks after a test which reflects the
results of detailed analysis.
The timeframe for providing analysis results of modern weapon/combat systems has
dramatically decreased over the last few years. Program managers and Fleet commanders
require “quick look” assessment results immediately after a test or exercise and a final
report is needed in days vice weeks. NWAD has responded to these challenges by
developing analytical tools and using communication technology to improve our analysis
processes.
ANALYTICAL TOOLS
Three analytical tools have been developed to meet the demands of AEGIS combat system,
air/surface missile system, and battlegroup exercise reconstruction. These tools provide the
analyst an ability to assess surveillance, C4I, and missile firing performance. They are
designed to operate on Silicon Graphics and TAC-3/4 workstation hardware.
COMMUNICATION TECHNOLOGY
INMARSAT
Since 1992, NWAD has been developing a wireless LAN technology in the 900 MHz band
to provide secure, Line-of-Sight (LOS) connectivity from ship-to-shore at data rates of 448
kbps or higher. This capability was originally developed and fielded in support of the
Theater Air Defense (TAD) program for the Self-Defense Test Ship (SDTS). The SDTS
operates near San Nicolas Island on the Southern California range. The coverage area for
the SDTS LOS system is shown in figure 1.
During the summer of 1995, the LOS wireless LAN technology was used again to support
the All-Service Combat Identification Evaluation Team. Once per annum, the All-Service
Combat Identification Evaluation Team, located at Eglin AFB, FL, issues a call to the four
services to participate in a two-week joint exercise. The primary goal of the exercise is to
evaluate tactics, techniques, and procedures associated with combat identification in order
to eliminate blue-on-blue engagements. These include surface-to-surface, surface-to-air,
air-to-air, and air-to-surface engagements. Secondary results of the exercise include joint
training opportunities, data link evaluation, evaluation of new combat systems and
equipment, and development of improved joint operation techniques. The AEGIS Program
Office took a proactive role in planning for ASCIET 95. As the AEGIS Air Defense
Analysis Agent, NWAD was selected by the AEGIS Program Office to develop and
implement a plan to support the two AEGIS ships participating in the exercise. A team was
assembled at NWAD, drawing from existing expertise in combat systems analysis and
information management, to support collection and analysis of the AEGIS combat system
data. Existing data collection and analysis processes were improved to accomplish the
following in support of ASCIET 95:
• developing a new process to integrate Fleet combat units, both surface and air, into a
shore based information network;
• establishing high speed data networking between Eglin AFB, Camp Shelby, MS and
Gulfport, MS for data transfer and debrief;
• developing a common, integrated display tool to debrief all exercise and joint service
participants.
ASCIET 95 provided an opportunity for individual units from each service to train together
in a joint littoral-like exercise. Unlike current training exercises, ASCIET 95 test
objectives required that all participants talk together during each mission, that individual
unit tactical pictures be integrated so all participants could see the “big picture”, and
that a rapid debrief capability be provided for all exercise participants. The scope and
magnitude of this
exercise involved state-of-the-art systems, equipment, and personnel from all services.
These systems, equipment, and personnel were required to have face-to-face interactions
during large scale missions conducted twice daily on an exercise area comprising
thousands of miles over land, air, and sea. ASCIET 95 was conducted on training ranges in
and around the Gulf of Mexico and on training ranges in Florida and Mississippi.
Previous years' data collection and analysis efforts involved the physical transfer of
shipboard and land based data to a central site for processing and analysis. On-site
feedback and analysis were performed as an overnight or multi-day effort, with final
analysis results documented in a report published weeks after the exercise. These
traditional reporting methods did not meet the ASCIET 95 requirement for a rapid debrief
capability after each mission to show how the mission events actually happened. The
tactical displays from each participating unit needed to be integrated
together to permit evaluation by the exercise coordinator. New technology innovations
were developed to allow for (1) the integration of independent tactical displays from each
service located hundreds of miles apart, (2) a common networking architecture to conduct
live video/teleconferencing between all exercise participants, and (3) the ability to collect
and relay data throughout the theater to debrief what happened during the mission.
During ASCIET 95, emerging technologies within the Navy’s Cooperative Engagement
Capability (CEC), Joint Tactical Information Distribution System (JTIDS), and Situational
Awareness Beacon with Reply (SABER) were tested in addition to existing deployed joint
service systems. The massive amounts of data generated by these systems combined with
the need for rapid feedback and analysis within hours, instead of days after an exercise,
called for radical changes in these processes. NWAD met this challenge and provided near
real time data collection, reduction, data transfer, and debriefing capabilities which
successfully met and exceeded the ASCIET 95 test objectives.
For ASCIET 95, the LOS communication capability was further developed for the 2.4 GHz
band and fielded in the Gulf of Mexico. Various relay towers were instrumented along
with the two AEGIS ships participating in the exercise. Network coverage is shown in
figure 2.
[Coverage map graphic: relay towers R1-R7 and the master tower along the Mississippi
Sound, Mobile Bay and Chandeleur Islands, with 25-mile LOS ranges to the R6 and R3
towers, the ACTS 4xT1 link, and a new T1 link; base map © 1993 DeLorme Mapping]
Figure 2 - ASCIET Debrief Network
LOS Communication System Coverage Map
ASCIET 96 SUPPORT
Secure data communications network connections will be provided to eleven sites which
will participate in ASCIET 96. Secure network connections will be provided between four
sites at the Gulfport, MS Combat Readiness Training Center (CRTC), two AEGIS cruisers
(USS Anzio (CG 66) and USS Cape St. George (CG 71)), and the following five facilities:
a) Camp Shelby, MS, b) the Navy Construction Battalion Center (CBC) in Gulfport MS,
c) Eglin AFB, FL, d) Fort Bliss, TX, and e) Tinker AFB, OK. The two AEGIS cruisers
will operate off-shore in an area bounded by two 25 mile Line-of-Sight (LOS) radii from
the Gulfport R6 and R3 relay towers as shown in figure 2. The purpose of the secure data
communications network is to support event preparations, real-time operations, and debrief
during the entire test evolution. The following functionality will be supported.
Multiple terrestrial T1 circuits will be ‘hubbed’ from the Gulfport site. The T1’s will be
encrypted using KG-194 hardware. Ships will be connected using wireless LAN radio
transceivers installed on ACMI ocean relay towers. These units will provide
communication at data rates of 672 kbps or greater to each ship within 25 miles of each
tower. The Gulfport Bldg 154 Connectivity Diagram is shown in figure 3. The CG-71
Connectivity Diagram is shown in figure 4.
[Figure 3 - Gulfport Bldg 154 Connectivity Diagram and Figure 4 - CG-71 Connectivity
Diagram: equipment interconnect graphics (ADNET T1 circuits with KG-194 encryptors,
multiport routers and hubs, STU-III units, SGI and TAC workstations, the WAM/VTC
audio-video system, and shipboard CEC/DDS equipment) not reproduced in text]
Reliable File Transfer - Reliable file transfers must also be supported during ASCIET-96.
This functionality will be provided on a point-to-point basis by the computers connected to
the ASCIET network via Ethernet using the reliable File Transfer Protocol (FTP). This
utility will be used immediately following each mission event to relay Command and
Decision (C&D) track data from the AEGIS cruisers over the wireless LAN to the CRTC
at Gulfport, MS. At times when bandwidth is not a limiting factor (e.g., during pre-mission
and post-debrief time frames), multiple simultaneous FTP sessions among all sites can be
supported. All resources which utilize this functionality use the Transmission Control
Protocol (TCP)/Internet Protocol (IP) connection-oriented communication mechanism which
assures errorless file transfers over the ASCIET network.
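The post-mission relay described above can be sketched with the standard library's FTP client. The host name, credentials, and file names below are illustrative only, not details of the actual ASCIET network; the transfer-time helper simply applies the 672 kbps wireless LAN rate quoted earlier.

```python
# Hypothetical sketch of the post-mission C&D track data relay over TCP/FTP.
# Host, credentials, and file names are illustrative, not from the ASCIET network.
import io
from ftplib import FTP

def relay_track_data(host, user, password, payload, remote_name):
    """Push one C&D track data file to the shore site over FTP (TCP-based,
    so delivery is error-checked end to end)."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.storbinary(f"STOR {remote_name}", io.BytesIO(payload))

def transfer_seconds(file_bytes, link_kbps=672):
    """Rough transfer time over the 672 kbps ship-to-shore wireless LAN link."""
    return (file_bytes * 8) / (link_kbps * 1000)

# A 10 MB post-mission file over the 672 kbps link takes roughly two minutes:
t = transfer_seconds(10 * 1024 * 1024)
```

Because FTP runs over TCP, the "errorless transfer" property comes from the transport layer rather than the application.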
Kill Removal - ACMI/RAJPO real-time kill removal will be aided by two Cooperative
Engagement Capability (CEC) workstations in Bldg 100 at Gulfport. The connection
between the CEC LAN on either AEGIS ship and these workstations will be made using
the CEC Data Distribution System (DDS) and, as a back-up, the wireless LAN system.
CEC data relay requires approximately 128-384 kbps throughput, well within that
provided by the wireless LAN. ACMI data will be extracted by the CEC workstations in
real-time and integrated with CEC data into a single composite picture for display to the
CRTC-based kill removal operator. During non-CEC mode, kill removal assistance will be
accomplished via the full-duplex audio capability of the VTC over the wireless LAN. This
service will allow tactical display operators on the AEGIS cruisers to confirm/deny kill
removal decisions via voice to the CRTC-based kill removal operator in Gulfport, MS.
Computer Data Debrief (CDD) - The composite picture of all mission activity which is
gathered at the CRTC in Gulfport is required to be shared simultaneously with all
participating sites within three hours of the completion of each mission. One morning
presentation and one afternoon presentation are scheduled daily during the ten day
exercise. This Computer Data Debrief (CDD) functionality is depicted in Figure 3. Along
with CDD, full-duplex, multipoint audio must be provided to allow those being debriefed
to fully interact with the person conducting the debrief. This functionality will be
accomplished using an X-Windows client/server set-up, in which an SGI X-client
(executing the Warfare Assessment Model (WAM) Debrief software) drives a PC running
X-server software for Windows 95 at the CRTC shore site. This CRTC-based PC will also
be network linked to multiple similarly configured PCs at all remote shore sites and the
ships. The PCs will run the Remotely Possible screen-sharing application software.
This product interposes itself between Windows 95 applications and the
Kybd/Mouse/Display drivers and converts the information into IP packets that are sent
over multiple TCP/IP connections between the host and remote PC’s. This method
provides images that are clearer than a video codec conversion, requires less bandwidth,
and the TCP/IP transport assures errorless transmission to all remote sites. Full-duplex, multipoint
audio will be provided by the VTC facility in an audio-only mode. Anticipated data rate
for CDD and audio functions is less than 512 kbps.
Video Teleconferencing (VTC) - Motion video with full-duplex multipoint audio between
single and multiple sites is required to support interactive coordination and debrief for
personnel during non-operations time periods. The method used to provide this
functionality is based on the Communique! for Windows 95 VTC software product for
PC’s. When coupled with a digital video capture board and a full duplex sound card, full
VTC functionality is achieved using PC’s. Communique! uses the UDP/IP connectionless
communications method and combines received audio packets from all sites to provide full
duplex, multipoint audio (i.e. a party line). The digital video capture cards use the Indeo
compression algorithm, at user specified quality factors, and provide direct image sizes of
320x240, 240x180, and 160x120 pixels at varying frame rates as the bandwidth allows.
Pixel doubling can also be enabled within Communique! to provide full-screen image sizes
of 640x480 pixels. Anticipated data rate for VTC and audio functions is less than 672
kbps.
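The "party line" combining of received audio packets can be illustrated with a simple mixer: each site's most recent 16-bit PCM frame is summed sample by sample, with saturation at the 16-bit limits. This is a generic sketch of multipoint audio mixing, not the actual Communique! implementation.

```python
# Sketch of party-line audio mixing for a connectionless (UDP-style) VTC:
# sum one 16-bit PCM frame from each site, clipping to the int16 range.

def mix_frames(frames):
    """Sum one PCM frame (list of int16 samples) per site, saturating to int16."""
    mixed = []
    for samples in zip(*frames):                 # walk sample positions in lockstep
        s = sum(samples)
        mixed.append(max(-32768, min(32767, s)))  # saturate instead of wrapping
    return mixed

# Two sites talking at once while a third is silent; the loud third sample clips:
frame = mix_frames([[1000, -2000, 30000], [500, 3000, 10000], [0, 0, 0]])
```

Saturating rather than wrapping is what keeps simultaneous loud talkers from producing harsh digital artifacts.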
Pacific Missile Range Facility (PMRF)/Atlantic Fleet Weapons Training Facility (AFWTF)
LOS networks are also planned for installation at PMRF and AFWTF to support AEGIS
Combat Ship System Qualification Trial (CSSQT) testing. Projected LOS coverage for
PMRF and AFWTF are shown in figures 5 and 6 respectively. LOS networking will permit
the transfer of combat system data from on-range AEGIS ships immediately after the
conclusion of a CSSQT test event. This will reduce the time required to provide analysis
feedback to range, program office, and Fleet personnel during a CSSQT. It will also
provide the file transfer and VTC capabilities to participating on-range ships.
The quantum increase in the amount of data required to analyze the performance of
modern weapons and combat systems poses considerable challenges to our existing
range instrumentation and analysis methods. This paper has discussed initiatives
implemented within the Naval Warfare Assessment Division to improve our existing data
collection, processing, and analysis processes. I would like to acknowledge the many
talented and resourceful individuals within our Command who take state-of-the-art
technologies and apply them to solve today's problems. I look forward to
continued success as we help program office and Fleet personnel meet the challenges of
tomorrow.
ABSTRACT
Several different kinds of signals are transmitted over a single carrier in the microwave
unified telemetry tracking and control system (MUTTCS), which has replaced separate
systems for accomplishing all TT&C functions and is now widely used. This paper
analyses the advantages and disadvantages of the general subcarrier frequency-division
MUTTCS, then discusses in detail the principle and performance of an advanced spread
spectrum MUTTCS (SS-MUTTCS). The inherent ranging ability of the PN code and the
properties of spread spectrum modulation permit complete unification of range, velocity
and angle measurement as well as the telemetry, telecontrol and communication
functions. At the same time, the contradiction between range and velocity measurement
in precision, resolution and measuring range can be resolved. With CDMA technology,
the signals and equipment of multi-target or multi-station TT&C can be unified easily.
SS-MUTTCS operates at low S/N, low threshold, and low power spectral density over a
wide spectrum, so it meets the requirements of electronic warfare and ECM, with high
performance in safety, security, anti-intercept and anti-interference. Therefore, SS-
MUTTCS is becoming an important trend in modern vehicle TT&C systems.
KEY WORDS
INTRODUCTION
Vehicle Telemetry Tracking & Control (TT&C) technology is developing very rapidly to
meet the requirements of modern complex flight tests. The objectives of a TT&C system
are to measure the internal parameters and external trajectory of vehicles (such as
missiles, satellites, unmanned aircraft, etc.) in real time; to process the telemetry, range,
angle and doppler data; and hence to correct the flight trajectory and control internal
equipment. A TT&C system is a comprehensive electronic system consisting of three
parts: telemetering, positioning & tracking, and telecontrol. In the past these were
separate and independent of each other, with different radio frequencies and equipment.
A separate TT&C system has large size, poor electromagnetic compatibility, and
vulnerability to jamming, so it is seldom used now. Since the Unified S-Band system was
employed in the Apollo project, the microwave unified TT&C system (MUTTCS) has
been used widely.
Spread spectrum (SS) technology was used in military and secure communications in the
1970s, but it was not used more widely because of its complexity and the limitations of
the electronic devices of that time. At the end of the 1980s, owing to its advantages in
electronic warfare (EW) and electronic countermeasures (ECM), a new era of SS
technology began. Developments in VLSI, low-power transmission and correlation
despreading techniques have made high-performance, small-size SS systems possible.
Code-division multiple access (CDMA) is becoming an important trend in the next
generation of mobile communications. The pseudo-noise (PN) ranging technique is
widely used; GPS and TDRSS are typical examples. In the following, we discuss the
principle and performance of a new kind of MUTTCS, the spread spectrum MUTTCS
(SS-MUTTCS). First, we introduce the constitution and limitations of the general
subcarrier frequency-division MUTTCS.
[Block diagram of the general subcarrier frequency-division MUTTCS: TC instructions
and up data modulate subcarriers fu1 and fu2, are combined with the fu3 interrogating
signal, and modulate the up carrier fu for transmission; at the TT&C ground station the
received down signal fd is demodulated and decomposed by BPFs into the R&V terminal
(fu3), telemetry terminal (fd1) and down data terminal (fd2), with PLL carrier tracking
driving the antenna servo]
The down-link procedure is similar to the up-link, but more complicated. First, the
received signal is demodulated, then it is decomposed by BPFs. Telemetry and down-
communication data are sent to the corresponding terminals. Responding and
interrogating signals are processed in the range and velocity measurement terminal,
where the doppler shift (from the carrier or subcarrier) is estimated to calculate velocity,
and the time delay is estimated to calculate distance (the doppler shift should be
compensated before time delay estimation). The down-link carrier is also used for
antenna tracking and angle measurement.
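The range and velocity computations performed in the R&V terminal reduce to two simple relations for a two-way (responding) link: the round-trip delay gives range, and the two-way doppler shift of the carrier gives radial velocity. The carrier frequency in the example is illustrative only.

```python
# Sketch of the R&V terminal arithmetic for a two-way (responding) link:
# range R = c*tau/2 from round-trip delay, velocity v = c*fd/(2*f0) from doppler.
C = 299_792_458.0          # speed of light, m/s

def range_from_delay(tau_s):
    """Round-trip delay (s) -> one-way range (m)."""
    return C * tau_s / 2.0

def velocity_from_doppler(fd_hz, f0_hz):
    """Two-way doppler shift (Hz) -> radial velocity (m/s), positive = closing."""
    return C * fd_hz / (2.0 * f0_hz)

# A target at 50 km closing at 300 m/s, on an illustrative 2.2 GHz carrier:
tau = 2 * 50_000 / C
fd = 2 * 300 * 2.2e9 / C
```

This also shows why doppler must be removed before delay estimation: the same physics that yields the velocity estimate also shifts the code clock used for ranging.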
In Fig. 2, PN3 (PNd3 & PNu3) is used only for range measurement. Actually, it can be
carried by data. In particular, if only the three functions of ranging, air-to-ground
telemetry and ground-to-air telecontrol are needed, the telecontrol data is first spread by
PN3, and the data-bearing PN3 signal is taken as the up interrogating signal. On the
vehicle, a locally synchronized PN3 code is established first and then used to despread
the telecontrol data. At the same time, telemetry data is spread by the synchronized PN3
code and the data-bearing PN3 signal is taken as the down responding signal. So only one
PN code is needed to realize the unification of the three functions, without multi-access
interference.
In the ground station, the doppler shift in the carrier or in the synchronized PN code
clock is estimated to determine velocity. Owing to the doppler sensitivity of the PN code,
the local synchronized PN code clock should be compensated for doppler shift to
measure range exactly.
In SS-MUTTCS, as a result of the low power density of the unified signal, a wideband
analog image signal can be added to the TT&C unified signal to produce a compound
signal (CS):

Sdc(t) = K1·Sd(t) + K2·Sa(t)    (3)

where Sd(t) is the TT&C unified signal, Sa(t) is the image signal, and Sdc(t) is the down
compound signal.
[Block diagrams of the SS-MUTTCS on-vehicle system and TT&C ground station (the
Fig. 2 referenced in the text): telemetry and down data are spread (PNd1, PNd2), PSK-
modulated, and up-converted onto the down unified signal fd; telecontrol instructions
and up data are spread (PNu1, PNu2) and combined with the PNu3 interrogating signal
onto the up unified signal fu; the R&V terminal in the ground station uses a locally
synchronized PNu3, and the servo tracks the down signal]
When the compound signal is received by the ground station, the TT&C signals can still
be extracted reliably because of the high processing gain. The extracted TT&C signals
are spread by the PN codes again and then subtracted from the compound signal, so that
the analog image signal is recovered. Thus SS-MUTTCS realizes the unification of
TT&C and wideband image transmission, which is difficult to achieve in a general
MUTTCS.
Since there is no carrier component in a balanced PSK signal, the down compound signal
is taken as the tracking signal for angle measurement. Ordinarily, a baseline-variable
interferometer with two unit antennas is used, which provides both searching and
precision tracking.
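The compound-signal procedure of Eq. (3) and the extract-respread-subtract recovery step can be demonstrated with a toy baseband model: one data bit is spread by a PN code, a wideband "image" signal is added, the receiver despreads to detect the bit, re-spreads the detected bit, and subtracts it to recover the image. The code length and the gains K1, K2 are illustrative choices, not system parameters from the paper.

```python
# Toy baseband sketch of the compound signal and its separation at the receiver.
import random

rng = random.Random(7)
N = 128
pn = [rng.choice((-1, 1)) for _ in range(N)]          # PN chips, +/-1
image = [rng.uniform(-1, 1) for _ in range(N)]        # wideband analog "image" samples
d = -1                                                # one TT&C data bit
K1, K2 = 1.0, 0.5
cs = [K1 * d * pn[i] + K2 * image[i] for i in range(N)]   # Eq. (3): compound signal

# Despread: correlate with the local PN code. High processing gain makes the
# image contribution to the correlator output small relative to the data term.
corr = sum(cs[i] * pn[i] for i in range(N)) / N
d_hat = 1 if corr > 0 else -1

# Re-spread the detected bit and subtract it to recover the image signal.
image_rec = [(cs[i] - K1 * d_hat * pn[i]) / K2 for i in range(N)]
```

When the bit decision is correct, the subtraction is exact and the image samples come back unchanged, which is the essence of the unification of TT&C and wideband image transmission.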
From the analysis above, we can see that PN code radar, CDMA and spread spectrum
communication are the key techniques of SS-MUTTCS.
PERFORMANCE OF PN RADAR
PN code radar is a very important part of SS-MUTTCS, implementing range, velocity
and angle measurement & tracking. According to maximum likelihood estimation
theory, the theoretical accuracy of time delay estimation (the Cramer-Rao limit of rms
error) is

σ_τ = [1/√(2E/N₀)] · 1/[W·√(1 − γ²/(W²T²))]    (4)
and the theoretical accuracy of doppler shift estimation is

σ_ξ = [1/√(2E/N₀)] · 1/[T·√(1 − γ²/(W²T²))]    (5)

where W is the rms bandwidth, T is the rms time duration, γ is the time-FM coupling
constant, and E/N₀ is the signal-to-noise ratio.
For a general radar signal, such as a rectangular or Gaussian pulse, the product of the rms
bandwidth W and rms time duration T is

WT ≈ π    (7)

So there is an uncertainty relationship between range and velocity measurement; we
cannot obtain high precision in both range and velocity simultaneously.
But the situation is different for a PN code. Assuming its chip duration is Tc and its
period is N, then

W = 1/Tc,  T = N·Tc,  γ = 0

The product of rms bandwidth and time duration is WT = N ≫ 1, so high accuracy in
both range and velocity can be obtained simultaneously. Different values of Tc and N can
be selected to meet different precision requirements for range and velocity, respectively.
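The PN-code advantage in Eqs. (4), (5) and (7) can be checked numerically: with W = 1/Tc and T = N·Tc the time-bandwidth product equals the code period N, so both Cramer-Rao bounds can be driven down at once. The chip rate, period, and E/N₀ values below are illustrative, not design values from the paper.

```python
# Numerical check of Eqs. (4), (5), (7) for an illustrative PN code.
import math

Tc = 1e-7            # chip duration (10 Mchip/s), illustrative
N = 1023             # PN code period, illustrative
snr = 100.0          # E/N0, illustrative
gamma = 0.0          # no time-FM coupling for a PN code

W = 1.0 / Tc         # rms bandwidth
T = N * Tc           # rms time duration
coupling = math.sqrt(1.0 - gamma**2 / (W**2 * T**2))   # = 1 when gamma = 0

sigma_tau = 1.0 / (math.sqrt(2.0 * snr) * W * coupling)   # Eq. (4): delay accuracy
sigma_xi = 1.0 / (math.sqrt(2.0 * snr) * T * coupling)    # Eq. (5): doppler accuracy

WT = W * T           # = N >> pi: both bounds are small simultaneously
```

A pulse with WT ≈ π must trade σ_τ against σ_ξ; here WT = 1023, so sub-chip delay accuracy and a doppler accuracy far below 1/T are obtained at the same time.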
According to target resolution theory, the closer the ambiguity function along the τ
(delay) axis is to the δ(τ) function, the higher the range resolution; the closer the
ambiguity function along the ξ (doppler) axis is to the δ(ξ) function, the higher the
velocity resolution. Because the rms time duration T and bandwidth W of a PN code are
both very large, its ambiguity function is close to a δ function in both the time and
frequency coordinates, giving high range and velocity resolution.
PN radar can operate in a responding mode with a continuous signal or in a reflecting
mode with a pulsed signal. In the reflecting mode, because of the difficulty of isolating
the transmitted and received signals (a continuous waveform needs two independent
antennas), pulsed PN radar is superior. In SS-MUTTCS, continuous waveform PN radar
is selected because the system operates in the responding mode in most cases.
If the distances from the target to three different points are obtained, we can locate the
target's position. This is the typical example of a multi-station system. Furthermore,
multiple stations may be needed to implement diversity reception in a wait-receiving
system. Multi-target TT&C is an important subject of modern TT&C which has not yet
found a good solution. Generally FDMA and TDMA technology are used to realize
multi-station or multi-target TT&C. Recent research and practice show that the CDMA
technique can realize this kind of system more easily, with larger capacity and good
unification performance. The diagrams of CDMA multi-station and multi-target unified
TT&C systems are shown in Figs. 4 & 5.
The orthogonality and autocorrelation features of the PN code are the basis of the
CDMA technique. The unified signal of a CDMA system is

S(t) = Σ_{k=1}^{n} A_k d_k(t − τ_k) PN_k(t − τ_k) cos(2πft + ϕ_k)    (9)

A local PN code synchronized with PN_k in the received signal must be obtained first; it
is compared with the transmitted signal to estimate range and velocity, then multiplied
and integrated with S(t) to perform the correlation operation and decision, and at last d_k
is obtained.
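The correlation decision of Eq. (9) can be demonstrated with a toy baseband CDMA model: each user's bit is spread by its own PN code, the signals are summed, and user 1 is recovered by correlating with PN1. The carrier, delays and doppler are omitted, and the code length and user count are illustrative assumptions.

```python
# Toy baseband CDMA sketch: spread three users' bits, sum them (Eq. 9 with
# the carrier and delays omitted), and despread user 1 by correlation.
import random

N = 256
users = 3
rngs = [random.Random(seed) for seed in (1, 2, 3)]
pn = [[r.choice((-1, 1)) for _ in range(N)] for r in rngs]  # one PN code per user
bits = [1, -1, 1]                                           # one data bit per user

# Unified CDMA signal: sum of the spread user signals.
s = [sum(bits[k] * pn[k][i] for k in range(users)) for i in range(N)]

# Despread user 1: multiply by PN1 and integrate. The other users survive only
# as small multi-access interference because the codes are nearly orthogonal.
corr = sum(s[i] * pn[0][i] for i in range(N)) / N
d1_hat = 1 if corr > 0 else -1
```

The correlator output is the desired bit plus small cross-correlation terms, which is exactly the decomposition the next paragraph describes.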
Because the orthogonality of the PN codes is not completely ideal, there exists
multi-access interference. When the CDMA unified signal S(t) is correlated with PN₁
(assume τ₁ = 0, ϕ₁ = 0),

S(t)·PN₁(t) = A₁ d₁(t) cos(2πft) + Σ_{k=2}^{n} A_k d_k(t − τ_k) PN_k(t − τ_k) PN₁(t) cos(2πft + ϕ_k) + n(t)·PN₁(t)    (10)

The first term is the desired signal, the second is multi-access interference, and the third
is noise. Multi-access interference is the primary factor influencing system capacity,
which is determined by input sensitivity, spread spectrum processing gain and the bit
error rate requirement.
As for noise and jamming, the relationship between the (S/N)ᵢ ratio at the despreader
input and the (S/N)ₒ ratio at the despreader output is

(S/N)ₒ / (S/N)ᵢ = Gp    (12)
The energy of the SS unified signal is distributed over a very wide band, so its power
density is reduced by a factor of Gp under the condition of equal power with a general
TT&C signal. The unified signal is buried in noise, giving good safety. The acquisition
and tracking of the PN code is the basis for despreading the SS unified signal. As the
number of selectable PN codes is very large, if we have no knowledge of the chip
duration Tc and period N of the PN code, the detection and estimation of the unknown
PN code is almost impossible. So in SS-MUTTCS, decomposition of the unified signal is
a very complicated procedure, with high anti-intercept ability, which depends on Gp,
CDMA PN address coding and unified signal design.
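The processing gain in Eq. (12) is the bandwidth-spreading ratio, which for a PN code spanning one data bit equals the code period N; the same factor describes how far the transmitted power spectral density drops. The numbers below are illustrative.

```python
# Processing gain arithmetic for Eq. (12). Gp = W_ss / W_data = N chips per bit;
# the values of N and the input SNR are illustrative.
import math

N = 1023                       # chips per data bit, illustrative
Gp = N                         # processing gain (linear)
Gp_dB = 10 * math.log10(Gp)    # about 30 dB

# The transmitted power spectral density drops by the same factor Gp,
# which is what buries the unified signal below the noise floor.
snr_in = 0.01                  # (S/N)i at despreader input: -20 dB
snr_out = snr_in * Gp          # (S/N)o at despreader output, per Eq. (12)
```

A signal received 20 dB below the noise is thus lifted to roughly +10 dB after despreading, which is the quantitative basis of the anti-intercept and anti-jamming claims.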
CONCLUSION
From the above analysis we see that, with PN radar and spread spectrum communication
technology, a new kind of unified TT&C system (SS-MUTTCS) is achieved, which can
realize the unification of range, velocity and angle measurement as well as telemetry,
telecontrol, digital communication and wideband analog image communication. The
unification of multi-target and multi-station TT&C can be realized too. The foundation
of this unification is to utilize the orthogonality of the PN code to design a CDMA
system. Not only is good unification of signal and equipment achieved in SS-MUTTCS,
but a noise-like radar with high precision (and resolution) in both range and velocity is
also obtained. Furthermore, SS-MUTTCS has good anti-interference and anti-intercept
performance to meet the requirements of EW and ECM. Spread spectrum is an essential
technology of modern military electronic systems. As a result, SS-MUTTCS will
undoubtedly represent an important trend in vehicle TT&C systems.
Many technical problems still exist in SS-MUTTCS. A PN sequence of high chip rate
and long period is required for long-distance, high-precision and high-resolution
measurement, but fast acquisition of a long sequence at low S/N ratio is still a difficult
problem. Large-capacity telemetry and communication conflict with large SS processing
gain. The number of CDMA accesses is limited by the interference threshold and
receiver sensitivity. Parallel correlation and matched filtering techniques are now being
studied to realize fast acquisition of the PN code. Further research on adaptive SS
processing gain control and advanced low-threshold reception at low S/N ratio is making
great progress. SS systems are expected to be used in many fields.
We would like to thank Professor Yang Shizhong, who is the director of Communication
and TT&C Institute of Chongqing University, for his much help.
John Kilmer
Instrumentation Development Directorate
White Sands Missile Range, New Mexico
Abstract
The White Sands Missile Range (WSMR) Water Supply Control System (WSCS) controls
and monitors the water wells, tanks and booster pumps located at the southern end of the
missile range. Figure 1 is an overview of the WSMR water supply system. The WSCS
provides water for approximately 90 square miles of the 3,700 square mile missile range.
The WSCS was designed and installed in 1990 and was in need of upgrading and repair. The
system was evaluated and found to be only moderately functional.
The WSCS consists of an IBM compatible personal computer (PC) based user interface,
located at the WSMR Water Plant and Fire Dept. and industrial-type computers called
Programmable Logic Controller (PLC) based stations at the Water Plant, water wells and
tanks. The stations communicate over a 400 MHz radio half-duplex link. The serial
message utilizes the Cyclic Redundancy Check (CRC) and Block Check Character (BCC)
type of error checking. The Master station controls pumping by downloading pump
settings to the slave stations. The slave stations upload data to the master such as tank
level, pump status, energy usage, gallons of water pumped and various alarms.
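The two error checks named above can be sketched as they are commonly defined: a CRC-16 (here the reflected 0x8005 polynomial, often called CRC-16/ARC) and a Block Check Character computed as the XOR of all message bytes. The WSCS's exact polynomial and parameters are not given in the text, so these are illustrative choices.

```python
# Illustrative implementations of the serial-message error checks (CRC and BCC).
# The specific CRC variant (CRC-16/ARC) is an assumption, not a WSCS detail.

def crc16(data: bytes) -> int:
    """CRC-16/ARC: polynomial 0x8005 reflected (0xA001), initial value 0x0000."""
    crc = 0x0000
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def bcc(data: bytes) -> int:
    """Block Check Character: XOR of every byte in the message."""
    result = 0
    for byte in data:
        result ^= byte
    return result

msg = b"123456789"          # conventional CRC check string
```

A message whose received CRC or BCC does not match is simply discarded, and the master re-polls the slave, as described in the radio communication section below.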
The system was analyzed and the design was found to be sound. The system did require
improvements. These improvements include adding surge suppressors, software upgrades,
absolute reading flow rate sensors, and providing adequate environmental cooling for the
control system.
Procedures for periodic maintenance and calibration of the sensors and schedules for radio
equipment maintenance were also developed.
Software modifications to reduce WSMR energy usage by reducing pumping during peak
energy demand times are being integrated into the WSCS. The peak energy demand times
are determined by historical energy usage data.
KEY WORDS
Water Supply Control System.
INTRODUCTION
The Instrumentation Development Directorate (IDD), repaired and upgraded the WSCS by
re-engineering the system, identifying and replacing non-functioning hardware and making
improvements to the software. IDD also produced software and maintenance
documentation.
DESCRIPTION OF SYSTEM
The WSCS collects data about the water supply system and displays this information on
the PC. The WSCS uses sensors which measure the Kilowatt hours (KWH) used and
gallons pumped by the pumps. The system also uses pressure sensors which measure the
tank levels and the depth of the water table in the wells which is called the well
drawdown. The elapsed time that each pump has been operated is measured by the PC.
The data described above is also saved to the hard disk on the PC. The WSCS also has a
remote display at the WSMR Fire Dept. for monitoring alarms after regular working hours.
The WSCS also monitors several safety sensors and reports alarms to the operator on the
PC. These alarms are also printed out and saved on the hard drive. The alarms generated
by the WSCS include a "no communication" alarm which informs the operator the radio
link to a particular site is non-functioning. Another alarm is the "pump outlet pressure"
alarm which occurs when the pump outlet pressure is beyond safe limits. The pump outlet
pressure alarm is also used by the local slave PLC to switch the pump off to prevent
damage to the pump. Other alarms include tank alarms, pump bearing temperature alarms,
PLC cabinet door open alarms, PLC cabinet high temperature alarms, radio battery low
alarms and Chlorine leak alarms.
The WSCS monitors and displays data on the PC screen in graphical as well as numerical
format. The main screen is a summary of all alarms which gives a quick overview of the
system status. The lower level screens give more detailed information about a particular
well or tank. The tank and well drawdown levels are shown in bar-graph form and an
alarm level is indicated by a flashing red bar. These levels are also displayed numerically as
a percentage of full. Other numerically displayed data include pump outlet pressure, pump
outlet flow rate in gallons per minute (GPM), total gallons pumped and total KWH used.
The pump status is displayed graphically by changing the pump symbol to green for pump
running, red for pump off and flashing yellow for pump failure. Other alarms are displayed
by using flashing text.
RADIO COMMUNICATION SYSTEM
The WSCS communication system consists of UHF FM radios and radio modems to
interface the radios to the PLC computers. The PLC computers use networking protocol
with the base station as the network master and all other sites as slaves. The master polls
each site for data. Any slave site which does not respond is made a lower priority by the
master and polled less often.
These polling functions require very little main processor control to operate. The
programmer needs only to specify the data addresses at the slave and master PLC and set a
bit to start the data transfer. The program must then monitor another bit to determine when
the data transfer has been completed by the PLC communication module. When the data
transfer is completed, the program can then start another transfer. All error checking is
handled automatically in the PLC communication module. Any data transfers with errors
are discarded and the slave site is polled again.
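The master's polling policy can be sketched as a simple scheduler: every slave starts at the highest priority, a slave that fails to respond is polled less often, and a successful response restores it. The demotion rule used here (interval doubling, capped at every eighth cycle) is an illustrative assumption; the text says only that non-responding slaves are polled less often.

```python
# Sketch of the master's poll scheduling with demotion of silent slaves.
# The doubling/cap policy is an assumed example, not the WSCS algorithm.

class PollScheduler:
    def __init__(self, slaves):
        # interval[s] = poll slave s on every Nth polling cycle
        self.interval = {s: 1 for s in slaves}

    def due(self, cycle):
        """Return the slaves to poll on this cycle number."""
        return [s for s, n in self.interval.items() if cycle % n == 0]

    def record(self, slave, responded):
        """Update a slave's priority based on whether it answered its poll."""
        if responded:
            self.interval[slave] = 1                          # full priority again
        else:
            self.interval[slave] = min(self.interval[slave] * 2, 8)

sched = PollScheduler(["well_1", "well_2", "tank_A"])
sched.record("well_2", responded=False)   # missed poll: demoted to every 2nd cycle
sched.record("well_2", responded=False)   # still silent: every 4th cycle
```

Demoting rather than dropping a silent site keeps radio time available for healthy stations while still detecting when a failed site comes back.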
HARDWARE IMPROVEMENTS
Hardware improvements to the system include adding surge suppression and providing
adequate cooling to the computer equipment. Also, an analog output capability was added
to existing water flow rate meters (FRM). Older versions had only a digital pulse output
which pulsed once for a factory preset number of gallons. The flow rate was calculated by
counting the number of pulses for 3 minutes and then dividing by 3 to obtain the flow rate
in GPM. In some cases the error using this method was as much as 1000 GPM. The new
FRMs also output a 4-20 mA signal proportional to the flow rate. This analog output
provides an accurate and instantaneous reading of the flow rate.
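The 4-20 mA scaling works the same way for any loop-powered sensor: 4 mA maps to zero flow and 20 mA to the meter's full-scale rating, with currents outside that window indicating a wiring or sensor fault. The 2000 GPM full scale below is an illustrative value, not a WSCS specification.

```python
# Sketch of 4-20 mA loop scaling for the new flow rate meters.
# The full-scale rating is an assumed example value.

def flow_gpm(current_ma, full_scale_gpm=2000.0):
    """Convert a 4-20 mA loop current to flow in GPM."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("loop current out of range (broken wire or bad sensor)")
    return (current_ma - 4.0) / 16.0 * full_scale_gpm
```

The live-zero at 4 mA is the reason this reading is both instantaneous and self-diagnosing, unlike the old pulse-counting method, which had to average over minutes and could not distinguish zero flow from a dead sensor.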
SOFTWARE IMPROVEMENTS
Several software upgrades and improvements are currently being performed on the WSCS.
The upgrades include adding audio capability to the WSCS. The audio capability allows
the playback of prerecorded voice messages over regular phone lines. The PC can
automatically call personnel over regular phone lines and deliver alarm information in
human speech. The PC will also have the capability of paging personnel. Also, personnel
may call the WSCS and receive preselected information such as tank levels.
Energy saving improvements are also being added to the WSCS to reduce the amount of
pumping during peak energy usage times. This will be accomplished by allowing tank
levels to drop during peak demand times and pumping mostly at night. Current plans are to
use a time-of-day based approach to reduce pumping. Future improvements will include
monitoring the WSMR main post electrical substation for energy usage and reducing
pumping based on this information.
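The time-of-day pumping logic described above can be sketched as a small decision function: avoid pumping in the peak-demand window and let tank levels drop, but never below a safety minimum. The peak window and level thresholds are illustrative assumptions, not WSCS settings.

```python
# Sketch of time-of-day based pump scheduling. The peak window and the
# level thresholds are assumed example values.

PEAK_HOURS = range(12, 18)      # assumed peak energy demand window (noon-6pm)

def should_pump(hour, tank_pct, min_pct=40.0, target_pct=90.0):
    """Decide whether a well pump should run for a given hour and tank level."""
    if tank_pct < min_pct:
        return True                   # safety: always refill a low tank
    if hour in PEAK_HOURS:
        return False                  # ride through the peak on stored water
    return tank_pct < target_pct      # off-peak: pump back up toward target
```

The future substation-monitoring improvement would replace the fixed `PEAK_HOURS` window with a measured-demand signal while keeping the same decision structure.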
CONCLUSION
The WSCS is a very useful tool for collecting and storing data from many different
well and tank locations. This data can be accessed and analyzed by Water Plant operators
at the Water Plant main office. The operators can also monitor and quickly respond to
alarm conditions by using the WSCS.
ADVANCED RANGE TELEMETRY (ARTM)
ABSTRACT
At open air test and training ranges, telemetry is beset by two opposing forces. One is the
inexorable demand to deliver more information to users who must make decisions in ever
shorter time frames. The other is the reduced availability of radio frequency spectrum,
driven by its increased economic value to society as a whole. ARTM is planned to assure
that test and training programs of the next several decades can meet their data quantity and
quality objectives in the face of these challenges. ARTM expects to improve the
efficiency of spectrum usage by changing historical methods of acquiring telemetry data
and transmitting it from systems under test to range customers. The program is initiating
advances in coding, compression, data channel assignment, and modulation. Due to the
strong interactions of these four dimensions, the effort is integrated in a single focused
program. Because these problems are common throughout the test and training
community, ARTM is a tri-service program embodying the DoD’s Common Test and
Training Range Architecture and Reliance principles in its management and organization.
This paper will discuss the driving forces, the initial study areas, the organizational
structure, and the program goals.
KEYWORDS
INTRODUCTION
There is a new initiative within the Department of Defense (DoD) that promises to make
fundamental changes in the way that test and training programs requiring radio frequency
telemetry are carried out. This program, Advanced Range Telemetry (ARTM), is at the
beginning of a ten year program cycle. Because of its importance to the community, we are
making an initial report and plan to continue to communicate our progress through the
whole cycle. This paper discusses the causative factors behind ARTM, the program’s
organization and goals, and provides a peek into our present implementation initiatives.
The rate at which a typical test program produces telemetry data is increasing
exponentially. A recent study determined that PCM bit rates have increased 100-fold over
the past 25 years, roughly a factor of ten every 12.5 years. This growth is the result of new
sensors, higher speed platforms, and an increased user desire for real time information.
There is increasing use of video sensors, generating wide bandwidth information. As test
vehicles have become more automated, the amount and rate of digital computer data
needed to fully characterize operational results has increased. The increased speed of test
platforms, especially tactical aircraft, has increased the rate at which data from
conventional sensors must be sampled. Finally, the availability of real time data on the
ground has been shown to save lives, money, and test time. Many programs that would
have used on-board recording and post-flight analysis now insist upon radiated telemetry
and real time analysis.
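The growth figures above can be checked with a couple of lines:

```python
import math

# 100-fold growth in PCM bit rates over 25 years (the figures cited above)
growth, years = 100.0, 25.0

annual_rate = growth ** (1.0 / years)          # compound annual growth factor
years_per_decade = years / math.log10(growth)  # years per factor-of-ten step

print(f"annual growth factor: {annual_rate:.3f}")                    # ~1.202 (about 20%/yr)
print(f"one order of magnitude every {years_per_decade:.1f} years")  # 12.5 years
```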
Acting to reduce the supply of spectrum available for these purposes is the newly
discovered value of radio spectrum when used for commercial purposes such as personal
communications and the distribution of music and video content. Pressed by commercial
interests, the World Administrative Radio Conference of 1992 allocated primary usage of
the 1452-1492 MHz Band to Digital Audio Broadcasting (DAB) in Canada and Mexico,
and allocated primary usage of the 2310-2360 band to DAB in the United States. Primary
usage of the 1525-1535 MHz band was also allocated to the Mobile Satellite Service. Although
telemetry may still use these bands as a secondary user, a recent FCC study concluded that
the two uses are not compatible as the commercial services will cause unacceptable
interference to telemetry users of those bands. Spurred on by Congressional mandate and
the success of recent spectrum auctions, these bands previously available for telemetry
have either been lost or will be as soon as commercial use actually begins.
WHAT IS ARTM?
The impact of the confluence of these two factors was realized at the Air Force Flight Test
Center (AFFTC) during the fly-off testing and planning for the development testing of the
Advanced Technology Fighter (ATF), now the F-22. This test program indicated early-on
that increased bandwidth telemetry spectrum would be required in performance of the
development test program. At the same time we learned of the loss of TM bands to
commercial uses. We realized that in order to serve our customers we had to improve our
usage of the remaining TM spectrum.
ARTM started at AFFTC as “TM 2000,” with a goal of providing a common 15 Mbps
telemetry acquisition system that would be usable at all the open air test and training
ranges. It was envisioned as a complement to the CAIS program for a common vehicle
suite and an effort was made to obtain funding from the Central Test and Evaluation
Investment Program (CTEIP). As the program matured, it evolved from an Air Force
Program with tri-service application to a fully tri-service program and was renamed.
The ARTM Joint Program Office (JPO) has an Air Force Program Manager with Army
and Navy Deputies. It reports through a tri-service Executive Steering Group to the office
of the Assistant Secretary of Defense for Acquisition and Technology (A&T). The JPO is
the leader of an Integrated Product Team. Three Segment Implementation Teams (the
Platform Segment Team, the Ground Segment Team, and the System Integration Team) are
responsible for the individual implementation projects. The JPO receives advisory inputs
from a Tri-Service Technical Group and a Tri-Service Requirements Group. These two
groups help the Program Managers to keep abreast of customer needs and the latest
technology while they focus on coordinating the efforts of the three Segment
Implementation Teams.
The ARTM technical goals are to push PCM telemetry to the limits of current technology
leveraging, where possible, commercial off-the-shelf (COTS) and other existing
technologies. Specific goals are to:
In pursuing these objectives, the program office has developed a positive business
strategy. At the head of the list, we want to develop win-win solutions that satisfy the
diverse needs of the customer community. These include accommodating both airframe
and aircraft systems testers and addressing both test and training scenarios. Technically, we
want to make our progress within constraints of limited cost and risk. We plan to do this
through maximum use of open architecture, industry standards and where possible COTS
implementations. Finally, we want to identify and select the alternatives that offer the
highest payoff to the customer base and involve all of the open air ranges in participating
in the program.
Schedule-wise we are on a ten year time frame before full operational deployment of the
resulting system. At the present time we are exploring the concepts necessary to achieve
our goals. We are presently funding studies of available techniques through SE/TA
contractors and Small Business Innovative Research (SBIR) grants. Our status so far is
discussed below. We hope to expand the present Phase I (exploratory) SBIR programs
into Phase II (prototype development) programs if they yield useful results. Using the
prototypes developed under SBIR funding and commercially available techniques, we
expect to construct a prototype system to be tested and validated in the 1999-2000 time
frame. Based upon the success of the prototype, we will award production contracts with a
goal of initial operational capability (IOC) for a production system in October of 2001.
The deployment of production systems will then occur over the next four years.
PRESENT INITIATIVES
While we wait for full CTEIP funding (expected in FY ‘97), we are starting preliminary
studies using funds from the SBIR program and a small amount of internal funding. These
initiatives include studies in compression, improved modulation efficiency, and coding.
An SBIR effort entitled “Avionics Bus Data Compression,” was advertised in SBIR 96.1.
This research addresses the increasing numbers of digital devices in modern aircraft
avionics and data acquisition systems which are interconnected with high speed busses.
The data transferred on these busses are critical to the performance of the vehicle and must
be monitored during any test program. However, most test-critical data are not generated
continuously, but in bursts. The intent of this research is to determine methods that will
reduce the radio frequency bandwidth required by this data and determine characteristics
of the data that may be used for discrimination. The goal of this effort is to reduce avionics
bus bandwidth requirements by a factor of two to four while maintaining data fidelity and
time correlation. As a result of the SBIR procurement, four proposals were received of
which two have been funded.
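As a toy illustration only (the data format and method below are hypothetical, not the funded SBIR designs), bursty bus traffic compresses readily because the idle gaps between bursts are redundant. A run-length encoder over idle bus words:

```python
# Illustrative sketch: run-length encoding of repeated (idle) bus words,
# showing how bursty avionics bus traffic compresses. The word values and
# framing here are hypothetical.
def rle_encode(words):
    """Encode a word stream as (word, run_length) pairs."""
    out = []
    for w in words:
        if out and out[-1][0] == w:
            out[-1] = (w, out[-1][1] + 1)
        else:
            out.append((w, 1))
    return out

# A bursty stream: long idle periods (0x0000) around a short three-word burst.
stream = [0x0000] * 40 + [0x1A2B, 0x3C4D, 0x5E6F] + [0x0000] * 37
encoded = rle_encode(stream)
ratio = len(stream) / (2 * len(encoded))   # each pair costs two words
print(f"compression ratio: {ratio:.1f}x")  # prints: compression ratio: 8.0x
```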
A second effort, entitled “Optimal Utilization of Telemetry Spectrum,” was also advertised
in SBIR 96.1. This research focused upon more efficient assignment of the telemetry
spectrum through development of a demand assignment multiple access scheduling
capability. Such a method would replace the present practice of assigning large segments
of the telemetry spectrum to a single user for the duration of a test program. This research
would look at ways to reduce the frequency allocation required such as improved filtering
and smaller inter-channel spacing. Other approaches would include tunable transmitters
and a bi-directional data link to allow dynamic control of the spectrum allocation based
upon the phase of the present test and the needs of other users.
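The demand-assignment idea can be sketched with a simple greedy interval-assignment routine; the mission names and times below are hypothetical:

```python
# A minimal sketch of demand-assigned channel scheduling: a mission holds a
# channel only for its test window, instead of holding a band for the whole
# program. Greedy interval assignment; mission data are hypothetical.
def assign_channels(requests):
    """requests: list of (name, start, end); returns ({name: channel}, count)."""
    channels = []          # per-channel time at which it becomes free again
    assignment = {}
    for name, start, end in sorted(requests, key=lambda r: r[1]):
        for i, free_at in enumerate(channels):
            if start >= free_at:       # channel idle again: reuse it
                channels[i] = end
                assignment[name] = i
                break
        else:
            channels.append(end)       # no idle channel: open a new one
            assignment[name] = len(channels) - 1
    return assignment, len(channels)

missions = [("F-22 sortie", 8, 10), ("tanker test", 9, 11),
            ("UAV flight", 10, 12), ("weapons drop", 11, 13)]
plan, channels_needed = assign_channels(missions)
print(plan, channels_needed)   # four missions fit in two channels
```

Static assignment would have dedicated four channels to these four programs; demand assignment packs them into two.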
The third initiative is in the area of forward error correction (FEC) coding. We have
already sponsored an internal study to determine the present state of the art and the
direction we should look to find our technology. Coding is important to the achievement of
the ARTM objectives. While coding by itself increases spectrum usage, intelligently
applied it produces the more robust link necessary for the reliable transmission of
compressed data. Overall, we expect both improved data quality and
reduced spectrum utilization through the use of coding. The space exploration community,
particularly NASA, is the leader in the application of coding to the telemetry problem.
Because of their long experience, they have developed techniques which appear to provide
a link quality in excess of our present needs. To fill their needs, they have also developed
an infrastructure to produce the necessary components at a very high quality level. In order
to use this available technology, we must learn more about the quantitative aspects of the
open air test and training scenario, which differs from that of space exploration. In
furtherance of this goal, we are proposing an additional SBIR task for inclusion in the FY
‘97 announcement.
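As a rough illustration of the trade that coding introduces (a textbook example, not the ARTM design), the sketch below compares uncoded BPSK against hard-decision (7,4) Hamming coding at the same energy per information bit: the code expands bandwidth by 7/4 but delivers data words more reliably.

```python
import math

# Sketch only (not the ARTM design): uncoded BPSK vs. hard-decision
# (7,4) Hamming coding at equal energy per information bit.
def ber_bpsk(ebno_db):
    """Bit error rate of coherent BPSK in AWGN."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebno))

def word_error_uncoded(ebno_db):
    """Probability that a 4-bit data word contains any error, uncoded."""
    return 1 - (1 - ber_bpsk(ebno_db)) ** 4

def word_error_hamming74(ebno_db):
    # The rate-4/7 code spends only 4/7 of the energy per channel bit,
    # but then corrects any single error in each 7-bit block.
    p = ber_bpsk(ebno_db + 10 * math.log10(4 / 7))
    return 1 - (1 - p) ** 7 - 7 * p * (1 - p) ** 6

for ebno_db in (6.0, 8.0):
    print(ebno_db, word_error_uncoded(ebno_db), word_error_hamming74(ebno_db))
```

Even this weak code improves word delivery at moderate signal-to-noise ratios; the concatenated codes in use by NASA do far better, which is why the space community's experience is of such interest here.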
PREDICTING FAILURES AND ESTIMATING DURATION OF
REMAINING SERVICE LIFE FROM SATELLITE TELEMETRY
ABSTRACT
This paper addresses research completed for predicting hardware failures and estimating
remaining service life for satellite components using a Failure Prediction Process (FPP). It
is a joint paper, presenting initial research completed at the University of California,
Berkeley, Center for Extreme Ultraviolet (EUV) Astrophysics using telemetry from the
EUV EXPLORER (EUVE) satellite and statistical computation analysis completed by
Lockheed Martin. This work was used in identifying suspect “failure precursors.”
Lockheed Martin completed an exploration into the application of statistical pattern
recognition methods to identify FPP events observed visually by the human expert. Both
visual and statistical methods were successful in detecting suspect failure precursors. An
estimate for remaining service life for each unit was made from the time the suspect failure
precursor was identified. It was compared with the actual time the equipment remained
operable. The long-term objective of this research is to develop a resident software module
which can provide information on FPP events automatically, economically, and with high
reliability for long-term management of spacecraft, aircraft, and ground equipment. Based
on the detection of a Failure Prediction Process event, an estimate of remaining service life
for the unit can be calculated and used as a basis to manage the failure.
Key Words
Telemetry, Analysis, Anomaly, Failure Prediction, Anomaly Prediction, Statistical Pattern
Recognition.
The U.C. Berkeley Center for EUV Astrophysics (CEA) operates NASA's EUVE science
payload. EUVE telemetry is received by Tracking and Data Relay System (TDRS)
satellites and is forwarded to the CEA in Berkeley using NASA communications systems.
The EUVE primary mission was completed in 1994 and the EUVE scientific and
engineering community proposed ideas to extend the mission. The CEA collaborated with
the NASA Ames Research Center, the Jet Propulsion Laboratory, and Goddard Space
Flight Center (GSFC) in selecting projects that would have low investment costs and
significant project cost reductions. One idea proposed was to utilize the EUVE satellite as
a testbed for providing critical flight experience to technologies that would lower satellite
mission costs. Members of NASA's science team approved and participated in this activity
and named the project the EUVE Testbed program [1]. Another project proposed to validate
suspect failure precursors occurring prior to a hardware failure. The EUVE satellite was
experiencing failures that threatened to end the extended mission. If a failure precursor
could be identified for single thread hardware, it was hoped that an impending failure
could be “managed” to prolong the mission instead of bringing the mission to an end.
The CEA engineering staff began using analysis techniques developed and used
successfully on the Air Force GPS Block I program [2]. Author Losik led the contractor team
supporting the Air Force task to identify satellite telemetry behavior and GPS user range
errors on GPS Block I satellites between 1978 and 1984. It was his intent to augment the
same techniques used previously and modify them for identifying suspect failure
precursors in telemetry. Table 1 is a comparison between GPS and EUVE factors related
to the FPP events. The comparison is provided to identify the similarities and differences
between GPS Block I satellites and the EUVE satellite on which this work is based.
Table 1. Comparison Between GPS and EUVE Factors Related to the FPP
Software was created at the CEA that automated EUVE telemetry decommutation and
extraction of monitor values at the desired quantity and location around the EUVE satellite
orbit. This data was sent to separate files for importing to a commercial plotting tool,
MATLAB, used to identify suspect failure precursors. MATLAB displayed the processed
telemetry in a format suitable for visually detecting suspect failure precursors. The process
of selecting specific data samples, taking samples at specific times, determining duration
between samples, extracting data, identifying specific data display requirements, and
evaluating data is known as the Failure Prediction Process (FPP). Unique uncorrelatable
telemetry behaviors are identified as suspect failure precursors and are known as FPP
events.
Background
Suspect failure precursors are found well within the normal operating range of analog
measurements. The ability to identify these hardware failure precursors in analog
measurements offers the following advantages:
An estimate was made after the first suspect failure precursor was identified for
each of the hardware failures on the EUVE satellite. The U.C. Berkeley results
included one failure for which the unit outlasted its remaining-service-life estimate:
the estimate was for six months, but the unit operated for nine months (see
summary Table 3).
Both Transmitter A and B telemetry for identical time periods were analyzed and are
described here. Transmitter A telemetry showed no suspect failure precursors. Transmitter
B FPP processed telemetry showed a suspect failure precursor present in early December
1993. This was approximately four months prior to complete Transmitter B failure. The
duration over which the suspect failure precursor can be observed in the MATLAB output
was approximately 20 days. The suspect failure precursor was observable in the Forward
RF Power monitor only. After results were obtained by the FPP at the CEA on Transmitter
B, the Engineering Statistical Analysis Group at the Lockheed Martin Advanced
Technology Center in Palo Alto, California, was contacted to discuss the results and
determine their interest and capability in developing software that could replicate the FPP
visual evaluation of suspect failure precursors. Lockheed Martin agreed to analyze
Transmitter B telemetry using standard industry statistical techniques and proprietary
statistical pattern recognition analysis tools. Their results demonstrated that suspect failure
precursors behaved very differently from normal telemetry, and that FPP events could be
isolated and identified using their approach.
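The Lockheed Martin tools are proprietary, so the sketch below (synthetic data, with a simple sliding-window z-test standing in for their statistical pattern recognition methods) only illustrates the core idea: flagging a sustained mean shift that stays well inside specification limits.

```python
import statistics

# Illustrative only: a sliding-window z-test standing in for the proprietary
# pattern recognition tools. Telemetry here is synthetic, not EUVE data.
def mean_shift_events(samples, window=20, threshold=3.0):
    """Return indices where the windowed mean departs from the baseline."""
    baseline = samples[:window]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    events = []
    for i in range(window, len(samples) - window):
        w_mean = statistics.mean(samples[i:i + window])
        z = (w_mean - mu) / (sigma / window ** 0.5)   # z-score of window mean
        if abs(z) > threshold:
            events.append(i)
    return events

# Synthetic "Forward RF Power" trace: in-spec throughout, but with a small
# sustained upward shift partway through (the FPP-style precursor signature).
trace = [10.0 + 0.1 * ((i * 7919) % 13 - 6) / 6 for i in range(200)]
trace = trace[:120] + [x + 0.15 for x in trace[120:]]
print(mean_shift_events(trace))   # indices fall inside the shifted region
```

The shift is far smaller than the channel's normal excursions, so conventional limit checking would never flag it; the windowed statistic does.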
Figure 1 contains a set of FPP graphics for Transmitter B Forward RF Power which
illustrates suspect failure precursor events in the data stream. Transmitter A shows no
failure precursors. The mean and standard deviation are included for both.
Table 3 shows the EUVE hardware and telemetry analyzed by the FPP and the results for
failed Explorer hardware. Lockheed Martin analysis applied only to Transmitter B. The
analysis using the FPP uncovered a surprise gyro anomaly that was under investigation by
another satellite mission. Gyro C performance data was in use and the EUVE satellite
integrator was duly advised.
Figure 1. EUVE Transmitter A and B Forward RF Power FPP Plots
Table 3. Summary of All Results from FPP Analysis on Failed Explorer Platform
Hardware
EUVE Failures Analyzed with FPP | FPP Event Expected? | FPP Event Shown? | Date of FPP Event | Date of Hardware Failure | Time Between Suspect Failure Precursor and Failure | Estimate for Remaining Service Life | FPP % Accuracy to Date
Xmtr. A | No | No | None | None | None | > 6 months | 100%
Xmtr. B | Yes | Yes | 12/93 | 4/94 | 4.5 months | < 6 months | 100%
GYRO A | No | No | None | None | N/A | > 6 months | 100%
GYRO B | Yes | Yes | 1/93 | 5/93 | 4 months | < 6 months | 100%
GYRO C | No | Yes | 6/92 | operating | operating | > 6 months | 100%
T/R A | Yes | Yes | 3/94 | 12/94 | 9 months | < 6 months | 100%
T/R B | Yes | Yes | 4/94 | 9/94 | 5 months | < 6 months | 100%
Part 2 by S. Wahl and L. Owen
Introduction
After reviewing the research by the U.C. Berkeley, CEA on the Failure Prediction Process
(FPP), the Lockheed Martin Advanced Technology Center authors proposed to explore a
unique methodology for detecting FPP events using statistical pattern recognition analysis.
Their intent was to determine if the statistical approach could detect the FPP events found
visually by the author Losik. For the data provided, the statistical analysis not only
detected the FPP events identified by the CEA approach, but also isolated additional
behavior patterns which, when reviewed by author Losik, were also determined to be
suspect failure precursors. Following are the methodology and results of that exploration.
Statistical Approach
The primary use of RECOGNIZE is to create a 2-class multivariate statistical model for
use in identifying and quantifying contributors to a problem. The model isolates those
variables that are the distinctions between acceptable and non-acceptable performance in a
tested population (learning set), and then classifies untested objects in terms of probability
of success. Another application is to create zero-time models based on product
manufacturing variables to serve as a baseline in product service-life studies.
RECOGNIZE was used to create multivariate “windows-in-time” for the FPP project. It is
the first application of the program to telemetry analysis.
Windows-in-Time Vectors
The computed vectors are projected into higher-dimensional space, and subsequently
projected to a 2-dimensional plot for visual analysis. Six vectors project to 6-dimensional
space. When plotted on a nonlinear map, the location of the computed vectors will
establish regions of normal component behavior and regions of probable component
failure. From this analysis, the time course to probable component failure will be
understood. Figure 3 is a conceptual nonlinear map showing plotted vectors related to
transmitter operation. The vector points are periodically plotted and represent a state-of-
health index for the transmitter. “Operational limits” refers to a predetermined operational
band that represents the vector limits for normal operation. The operational limits and the
vectors for known operational failures need to be established for actual spacecraft
components. Evaluation of similar components from multiple spacecraft will be required to
determine if there are common normal operational limits for specific components from the
same vendor used on similar spacecraft.
Figure 3. Conceptual Nonlinear Map: Normal Transmitter Vectors
Degrading to Failure
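RECOGNIZE itself is proprietary. As an illustration only, the sketch below forms "windows-in-time" feature vectors from synthetic telemetry and projects them to two dimensions, with a linear PCA projection standing in for the paper's nonlinear map:

```python
import numpy as np

# Stand-in sketch (RECOGNIZE is proprietary): build windows-in-time feature
# vectors from synthetic telemetry, then project to 2-D via PCA as a linear
# surrogate for the paper's nonlinear map.
rng = np.random.default_rng(0)

def window_vectors(channels, window):
    """Stack per-window mean and std of each channel into one vector."""
    n = channels.shape[1] // window
    vecs = []
    for k in range(n):
        w = channels[:, k * window:(k + 1) * window]
        vecs.append(np.concatenate([w.mean(axis=1), w.std(axis=1)]))
    return np.array(vecs)

# Three monitor channels, 1000 samples; the last quarter drifts (degradation).
data = rng.normal(0.0, 1.0, size=(3, 1000))
data[:, 750:] += np.linspace(0, 4, 250)        # slow drift toward failure

vecs = window_vectors(data, window=50)         # 20 vectors in 6-D space
centered = vecs - vecs.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
xy = centered @ vt[:2].T                       # 2-D points for plotting
print(xy.shape)                                # (20, 2)
```

On the resulting plot, the healthy windows cluster together while the drifting windows walk away from the cluster, which is the "region of probable component failure" behavior described above.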
Transmitter B Analysis
Data was sampled from the EUVE satellite from September 1993 to April 1994 when
Transmitter B failed. The CEA data, shown in Figure 1, is based on the first data point
from each major frame in the selected data set. Data used in the Lockheed Martin
Advanced Technology Center exploration was the mean and standard deviation of each
major frame from the same data set. (A major frame is defined as the data from a 45-
minute sample; Reference 5 further defines the sampling method used.) Figure 4,
containing a set of univariate plots from the same sample time period, shows an example
of a suspect FPP event as identified by author Losik. This event, not detected on the
original CEA plots, shows an upward mean shift for an isolated period of time. This
pattern occurred within the specified operating parameters and would not have been
flagged by conventional specification limits monitoring.
For ease of viewing, a subset of the means data sample from EUVE Transmitter B was
created spanning the entire data period. As shown in Figure 5, extreme outliers were
included and vectors were generated using four of the five variables available. Two of the
variables in the data set were highly correlated; only one was selected for the windows-in-
time vector.
Figure 4. EUVE Transmitter B Variables Showing a Pattern Identified
as an FPP Event
Summary
As an exploratory data analysis effort, a vector profile based on sample EUVE Transmitter
B telemetry was mapped on a 2-dimensional plot. The vector components were selected
transmitter variables. The resulting vector profile established a region in hyperspace that
represented probable transmitter failure. The failure region correlated to the precursors
detected by the human expert. Several FPP events not seen by the expert in prior
evaluations were also detected.
It therefore appears that a pattern recognition analysis approach, using the RECOGNIZE
program, could provide an effective method for establishing time courses to component
failure. Data from additional satellites will be required to determine if the FPP events
observed prior to component failures on EUVE occur similarly on other systems.
Assuming such similarities are found and allowing for minor variations due to payload,
mission, and spacecraft bus configurations, it would appear probable that a generic
software package can be developed to automate the research methodology. Used at a
ground station, such a system would provide early warning of component degradation,
providing informed decisions on component management. Loaded on board a spacecraft,
an FPP software module could provide automated responses to known situations, thus
freeing ground station operators for other essential tasks.
It is author Losik’s belief that the FPP sampling frequency, sample selection, and
quantity used allow engineers to identify suspect failure precursors independent of unit
operations, spacecraft attitude, attitude change rates, orbit shape, period, or any other
characteristic that affects the internal spacecraft environment. The FPP allows the visual
analysis of otherwise unobservable quantities (3 to 5 months’ worth) of telemetry on a
single display. It is because of the sampling frequency, size of samples, location of
samples, and display characteristics that the FPP can be used on equipment aboard any
orbiting spacecraft or aircraft, and on ground-based equipment. A white paper is available
that includes a complete description of the work completed at the U.C. Berkeley CEA and
Lockheed Martin Advanced Development [5].
REFERENCES
1. Hughes, Peter M.; Durrani, Sajjad H.; Eller, Evan; Bruegman, Otto; “NASA's
Innovative Mission Operations Technology Development Program,” International
Symposium on Reducing the Cost of Spacecraft Ground Data Systems and Operations,
Rutherford Appleton Laboratory, Chilton, Oxfordshire, U.K., 1995.
2. “GPS Orbital Test Report,” CDRL Item AOOA, Contract F04701-74-C-0527, 1978
through 1985.
3. Larson, Wiley J.; Wertz, James R. (editors); Space Missions Analysis and Design,
Second Edition, Microcosm, Inc., Torrance, California; Kluwer Academic Publishers,
Boston, MA.
4. “CEA/EUVE Monthly Engineering Report for February 1996,” Center for EUV
Astrophysics, University of California, Berkeley.
5. Losik, L.; Wahl, Sheila; Owen, Lewis; “Predicting Failures and Estimating Duration of
Service Life from Satellite Telemetry (expanded white paper),” Lockheed Martin
Telemetry & Instrumentation, San Diego, CA and Lockheed Martin Missiles and Space,
Sunnyvale, CA, 1996.
6. Owen, Lewis and Wahl, Sheila, "Pattern Recognition Techniques for Aging Process
Assessment of Elastomeric Products," SAE, Lake Buena Vista, Florida.
7. Owen, Lewis and Wahl, Sheila, "Pattern Recognition Classification Techniques for
Product Age Assessment," ASME, 2nd International Computer Engineering
Conference, San Diego, CA.
GETTING THE TELEMETRY HOME:
HOW DO YOU GET DATA BACK FROM TITAN?
B. J. Mitchell
The Johns Hopkins University/Applied Physics Laboratory
ABSTRACT:
KEY WORDS
WHY TITAN?
Titan is the most interesting satellite in the solar system from a number of perspectives. It
is the largest moon of Saturn; although it is a satellite, it is larger than the planets Mercury
and Pluto. It has a dense atmosphere with a surface pressure about 1.5 times that of the
Earth. Most intriguing is the possibility that Titan may possess large lakes or seas of
cryogenic hydrocarbons such as ethane and methane [Lunine, et. al., 1983; Lunine, 1992;
Lunine, 1993]. Recent narrow band imagery of the surface of Titan reveals a very
non-uniform surface that may include lakes [Hubble Space Telescope Institute and
University of Arizona, 1996]. Given some proof that Titan possesses large bodies of fluid
cryogenic hydrocarbons, it could be considered a gigantic analog model of the Earth’s
climate system complete with land masses, moderately thick atmosphere, and large bodies
of liquid. By studying the climate of Titan, we could gain further understanding of the
processes and mechanisms that shape the Earth’s climate. A comparison of physical
parameters for both Titan and Earth is given below.
Table 1: COMPARISON OF PHYSICAL PROPERTIES OF TITAN AND EARTH
[Banaskiewicz, 1993; Lebreton, 1992; Lorenz, 1993; Lunine, 1983; Srokosz, M., et. al.,
1992; Zarnecki, et. al., 1992]
The Cassini/Huygens mission is due to be launched October 1997, with arrival at Saturn
during June 2004. The Cassini orbiter (built by NASA) will then study Saturn and its
moons. The Huygens probe (built by ESA) will enter the atmosphere of Titan during
September 2004, take measurements of various atmospheric parameters during the hours
long descent phase, and land a telemetered instrument package on the surface by use of a
parachute [Lebreton, op. cit.]. The possibility of hydrocarbon lakes or seas has been given
sufficient credence that the surface science package for Huygens has been designed
for operations in cryogenic liquids [Zarnecki, et. al., op. cit.]. The expected lifetime of the
probe after landing is on the order of minutes [Lebreton, op. cit.]. The Cassini orbiter will
continue to provide limited aperiodic data on Titan during its multi-year mission around
Saturn. Despite the data that will emerge from these efforts, it is certain that Huygens and
Cassini will raise many more questions that will require further exploration.
FUTURE MISSIONS
If Titan turns out to be as interesting as expected, there will be an impetus to conduct fully
global exploration of the surface, seas, and atmosphere. For example, on Earth one must
explore the tropics, the temperate zones, and the poles in order to build any sort of realistic
understanding of atmospheric and oceanic circulation. Future Titan exploration has been
discussed extensively by Lorenz [1994]. Among the various ideas are small “boats” with
expendable probes for the exploration of Titan’s seas [Mitchell, 1994] and balloon-borne
instruments [Lorenz and Nock, 1996] for global exploration of the surface and lower
atmosphere of Titan. (It should be noted that land “rovers” are not currently being
considered for the exploration of Titan. This is due to current models predicting that the
surface of Titan will be covered with organic sludge [Lunine, op. cit.; Lorenz, op. cit.] that
would be impassible to small vehicles.) In all cases, the surface exploration payload would
probably be on the order of tens of kilograms. This would mean that the power source
would be limited, creating problems with communicating data back to Earth.
With respect to cryogenic operations of explorers, the various weapons and space
programs have built up a large body of knowledge in the field through the use of cryogenic
propellants and operations in space. However, this would have to be melded with
modifications of current meteorological and oceanographic instruments. Huygens is a good
example of some of the adaptations and their potential. It should be noted that cryogenic
conditions could be an aid in the field of power management. The temperatures are low
enough that high temperature superconducting materials could be used, thereby drastically
lowering power consumption and wastage.
Reliability is a major concern. Missions to the outer planets are not only expensive from a
launch vehicle point of view, but also due to the expense of a long cruise to the objective.
The Cassini/Huygens mission will take about seven years to reach Saturn even with the
use of gravity boosts from Venus, Earth, and Jupiter. Any follow-up exploratory missions
will need to be able to operate on site for years if continuous coverage of Titan is to be
attempted. This will require the use of redundant and fault tolerant systems as well as
multiple “rovers” equipped with at least some AI programming for autonomous operations.
GETTING THE DATA BACK: Radio Propagation
The bottom line of any space mission is the data returned to the scientists and engineers.
Succinctly, the equation that describes some of the physical limits of this data flow is the
familiar link equation (in dB form):

Pr = Pt + Gt + Gr - Lpath - Lmisc    Eq. (1.1)

where Pr is the received power, Pt the transmitted power, Gt and Gr the transmit and
receive antenna gains, Lpath the free space path loss, and Lmisc the miscellaneous system
losses.
One of the physical limits of any space mission is the power system. This translates
directly into a limit on the amount of data that can be forwarded via the telemetry system.
At the current time, power available on deep space missions is on the order of tens of
watts.
The free space path loss is determined by the operating frequency and the distance
traversed as:

Lpath = 20 log10(4πd/λ)    Eq. (1.2)

where d is the distance traversed and λ is the wavelength of the operating frequency.
Given a range of potential operating frequencies in S, X, and Ka bands and the extrema of
the Earth-Titan distance, the free space path loss is as follows:
Table 2: FREE SPACE PATH LOSS FOR TITAN-EARTH LINK

Frequency (MHz) | Maximum distance (10^9 km) | Minimum distance (10^9 km) | Maximum loss (dB) | Minimum loss (dB)
2000 | 1.659 | 1.499 | 282.8 | 281.9
8000 | 1.659 | 1.499 | 294.9 | 294.0
10000 | 1.659 | 1.499 | 296.8 | 295.9
30000 | 1.659 | 1.499 | 306.3 | 305.5
(Minimum distance occurs when Saturn is at perihelion and the Earth is at aphelion and
both are on the same side of the Sun. Maximum distance occurs when both Saturn and
Earth are at aphelion and are on opposite sides of the Sun.)
As can be seen above, the difference between the maximum and minimum free space loss
due to orbital position is about 0.9 dB, a factor of roughly 1.2 in linear terms. More
“harmful” is the geometric spreading loss difference between 2 GHz and 30 GHz: a factor
of about 23.5 dB.
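The entries in Table 2 can be checked directly against the free space path loss formula; a short sketch:

```python
import math

C = 299_792_458.0   # speed of light, m/s

def fspl_db(freq_hz, dist_m):
    """Free space path loss: 20*log10(4*pi*d/lambda) in dB."""
    wavelength = C / freq_hz
    return 20 * math.log10(4 * math.pi * dist_m / wavelength)

# Maximum Earth-Titan distance of 1.659e9 km at S-band, per Table 2.
print(round(fspl_db(2e9, 1.659e12), 1))   # ~282.9 dB, matching Table 2 to rounding
```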
To some extent, the geometric spreading loss difference between high and low frequencies
can be offset by the aperture size. From the formula:
G = 10 log10(4πAη/λ^2)    Eq. (1.3)

where:
G is the gain in dB
A is the area of the antenna aperture
η is the aperture efficiency
λ is the wavelength of the signal
Antenna gain (dB)

Frequency (MHz) | 1 m radius | 2 m radius | 5 m radius
2000 | 32.44 | 38.46 | 46.42
8000 | 44.48 | 50.50 | 58.46
10000 | 46.42 | 52.44 | 60.40
30000 | 55.96 | 61.98 | 69.94
One can see that higher frequencies will give a higher gain for a fixed antenna size and that
larger antennas give higher gains for a given frequency. This works for both the spacecraft
transmit and ground receive antennas.
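The gain table can be reproduced from Eq. (1.3); the sketch below assumes, as the tabulated values evidently do, 100% aperture efficiency and c ≈ 3×10^8 m/s:

```python
import math

C = 3.0e8   # the tabulated gains appear to assume c = 3e8 m/s

def gain_db(freq_hz, radius_m, efficiency=1.0):
    """G = 10*log10(4*pi*A*eta/lambda^2), A = pi*r^2 (Eq. 1.3)."""
    wavelength = C / freq_hz
    area = math.pi * radius_m ** 2
    return 10 * math.log10(4 * math.pi * area * efficiency / wavelength ** 2)

# Reproduce the gain table's corner entries.
print(round(gain_db(2e9, 1.0), 2))    # 32.44 dB
print(round(gain_db(30e9, 5.0), 2))   # 69.94 dB
```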
The final component of the link equation is the losses due to less than perfect efficiency in
the system, such as polarization mismatch, line losses, and other effects. These are
minimized by careful attention to the engineering details such as reducing receiver noise,
using high quality connectors, and other methods.
One of these other losses is atmospheric absorption. This is particularly acute above
20 GHz and is a major problem near the molecular oxygen and the water vapor absorption
frequencies (~ 60 GHz and ~ 22 GHz, respectively). A potential solution to this and rain
attenuation is to use orbiting antennas to receive the data which would be relayed to the
ground. This is not currently feasible or cost effective compared to the Deep Space
Network (DSN) in the radio regime. However, when the TDRS (Tracking and Data Relay
Satellite) H, I, and J satellites are launched starting in 1999 [JPL, 1996], they could
possibly support very low data rate experiments at Ka-band. Once orbiting or lunar radio telescopes
go online, they could provide an edge over reception through the atmosphere on Earth at
Ka and higher frequencies.
There are two types of data forwarding to consider - direct links from the surface of Titan
to Earth and relays from Titan’s surface to an orbiter which forwards the data to Earth.
Direct Propagation From The Surface
For the case of telemetry propagated directly from the surface of Titan, power will be
severely limited. Radioisotope Thermal Generators (RTGs) are probably the only
reasonable choice for a surface power system since batteries alone would have too short a
mission lifetime. Solar cells would be grossly inefficient given the distance from the Sun
plus the very optically thick atmosphere of Titan.
In the case of data from an undersea probe, acoustic links to a surface boat may be most
advantageous. These expendable probes would have lifetimes on the order of minutes and
could be powered by batteries. This was previously covered by Mitchell [1994].
Given the use of an RTG for a boat or flyer, the size of the generator will govern the
communications practices of the mission. For constant communications, the RTG scales
with the required power. If only aperiodic data dumps are needed, the RTG can be
somewhat smaller by accumulating “excess” power in a rechargeable battery and using
this excess during communications sessions. The size of the power system then becomes a
function of the RTG output, the storage capacity of the battery, and the communications
requirements.
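The trade described above can be sketched as a simple daily energy balance. The numbers below are purely illustrative, not mission values:

```python
def min_rtg_power_w(comm_power_w: float, comm_seconds_per_day: float,
                    idle_power_w: float) -> float:
    """Smallest constant RTG output that, with a battery buffering the excess,
    covers one daily communications session plus a continuous idle load."""
    day = 86_400.0
    comm_energy = comm_power_w * comm_seconds_per_day          # J during the dump
    idle_energy = idle_power_w * (day - comm_seconds_per_day)  # J the rest of the day
    return (comm_energy + idle_energy) / day

# Illustrative: 40 W transmit for a 1-hour daily dump, 2 W idle load
rtg = min_rtg_power_w(40.0, 3600.0, 2.0)
battery_wh = (40.0 - rtg) * 1.0  # energy the battery must supply during the 1-hour dump, in Wh
```

A smaller RTG thus trades directly against battery storage capacity, as the text describes.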
Note that for this type of mission, a drawback is that every surface craft will require at
least this minimum power system mass to communicate directly to the Earth. This
translates into either a larger launch vehicle or a longer cruise time for the mission.
Any sort of mobile probe such as a rover, balloon, or boat will be in constant motion in six
degrees of freedom (x, y, z, roll, pitch, and yaw). With so many degrees of freedom and
the chance for rapid motion within many of them, some care must be taken that the proper
polarization for the transmit/receive unit is used. Circular polarization and helical antennas
are likely choices. In addition, circular polarization of the telemetry results in nullifying the
effects of Faraday rotation.
An alternative to direct propagation from the surface of Titan is to have the surface and
atmospheric probes relay their data back to a Titan orbiter which would then relay the data
to Earth. This has the advantage that each surface probe would need a much lower power
to propagate a signal from Titan to orbit. For an orbiter at 300 km, the free space path
loss is about 135 dB lower than for propagation directly to Earth. The relay orbiter
would be equipped with a heavy duty power supply to send the
data all the way back to Earth.
One advantage of this approach is that the mass of an individual probe is reduced since
less power is required to get the data to an orbiting relay. This could translate into more
surface probes for a given launch.
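Since free space loss scales as 20 log10 of distance, the relay “savings” is simply the ratio of the two path lengths expressed in dB (a sketch):

```python
import math

def loss_difference_db(d_far_km: float, d_near_km: float) -> float:
    """Difference in free space path loss between two link distances."""
    return 20 * math.log10(d_far_km / d_near_km)

# Earth at maximum range (1.659e9 km) vs. a 300 km Titan orbiter
savings = loss_difference_db(1.659e9, 300.0)
```

The result is about 135 dB, matching the figure quoted above.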
Another advantage of a Titan orbiter is its possible use as a navigational aid for surface
probes. Mitchell [1996] discusses the importance of “geo” location of a surface probe to
provenience its data. Without proper location readings, the value of data is greatly
diminished. (For example, a measurement that the surface wind or sea current is
3 km/hr is much more useful when paired with the place where it occurred, so that it
can be compared to other data from different regions or times.) The optimum way to do
geolocation is via a satellite navigation system similar to TRANSIT or GPS/NAVSTAR.
The TRANSIT type of system requires only one spacecraft but takes several minutes to
give relatively low accuracy geolocation data. A NAVSTAR type system takes on the
order of seconds to provide locations to within meters; however, this is at the cost of the
need for at least four orbiters.
With respect to polarization and antenna type, circular polarization is probably the best
choice. A small, low gain antenna could be enough to get the telemetry from the surface to
the Titan orbiter.
As in the direct propagation case, power would likely be in the form of RTGs on the
surface/atmospheric craft. However, the need to only get the data to orbit vice all the way
back to Earth makes the power requirements orders of magnitude lower. This would lower
the mass for the power system for each craft, resulting in more surface explorers for a
given launch mass.
An interesting aspect of using an orbiting relay satellite(s) is that the relay could rely upon
solar cells rather than RTG power. JPL has current plans [Penzo, 1996] for the use of
large, efficient solar cell arrays for outer planet missions. Drawbacks of the use of such an
array include size (remember that the solar flux is about 15 W/m² at Titan vice 1370 W/m²
at Earth’s distance), gravitational torques due to Titan and Saturn, and aerodynamic drag
on a large array in orbit around Titan. In any case, the size of the power generator can be
made smaller by the use of storage batteries for times of peak consumption.
A clever variation of the orbiting relay approach was put forth by Lorenz and Nock
[op. cit.]. They propose using the Cassini Saturn orbiter for a relay back to Earth. Recall
that Cassini will act as a relay for data from the Huygens Titan probe. Using Cassini would
require aperiodic data transfers when the Saturn orbiter was within range of the surface
probe. This would cut the cost for an orbiting relay, but increase the cost for RTGs since
higher powers would likely be required for the surface craft to contact Cassini. This power
would be related to the maximum distance for data relay. Another potential problem is that
by the time a follow-on mission to Titan arrived, it is likely that a substantial part of the
lifetime of the Cassini orbiter would be over.
                       Transmitter Radius
Optical Wavelength    0.5 m    1.0 m    1.5 m
(in nanometers)
  500                  119      122      125
 1000                  113      116      119
 1500                  109      113      115
                     (Gain in dB)
The mass of a laser telemetry system is comparable to that of an RF system and is
expected to decrease further in the near future. The laser system also requires less power
than an RF system for a given bit rate [Ibid.]. The reduction in power translates into a
further reduction in mass. In theory, the use of a laser link from Titan orbit to the Earth
would lower the mass of the system, resulting in either more instruments delivered to
Titan, the use of a smaller (cheaper) launch vehicle, or a shorter cruise time to Titan.
The receiver could be any given telescope in orbit around the Earth, such as the Hubble
Space Telescope and its successors. The advantage of an orbiting telescope is the lack of
atmospheric absorption and distortion of the signal. Ground based telescopes with adaptive
optics and artificial “guide star” capability could also be used as laser receivers, possibly
more economically than space based ones.
The use of laser links for deep space missions was conceptually proven by using the
Galileo probe’s camera to receive laser pulses from Earth based transmitters at distances
of up to six million kilometers [Ibid.] The remarkable thing about this test is that Galileo
was not built to use optical data links nor was the emitter built specifically for this test.
The Europeans and Japanese are planning to launch data relay satellites in the 1998
timeframe that will be equipped with the capability to use laser crosslinks to low orbiting
spacecraft such as SPOT-4 [Faup, Planche, and Nielsen, 1996; Jono, Nakagawa, Suzuki,
and Yamamoto, 1996]. This will provide practical experience with tracking and
transmission challenges and could lay the groundwork for later missions.
Initial plans for the exploration of Titan are being executed. If Titan is as interesting as
current models and data indicate, follow-ups to the Cassini/Huygens mission will become a
reality. One of the key systems for such a mission is the telemetry system, which in turn
depends on power.
There are a number of possibilities for such systems. After comparing direct transmission
from Titan’s surface to the Earth vice relay of telemetry through a Titan orbiter, the relay
concept holds many advantages: power flux at the Earth (which translates into bit rate),
power required for surface explorers, and the capability to do data proveniencing.
Investigation of transmission modes indicates that radio links from Titan’s surface to the
orbiting relay and laser links from the relay to Earth would be the most advantageous. All
these technologies are mature and should be cost effective in the early 21st century.
Although the whole concept of large scale exploration of Titan may sound esoteric, it
could provide a very large gain in our knowledge of Earth’s climate. This gain could
directly translate into lives and money saved through better climate and weather forecasts.
ACKNOWLEDGMENTS:
The author wishes to thank Ralph Lorenz of the University of Arizona, and David Porter
of the Johns Hopkins University Applied Physics Laboratory for their discussions and
feedback on possible Huygens follow on missions to study Titan’s oceanography. The
author also wishes to thank J. L. Mitchell for editorial comments.
REFERENCES:
Faup, M., Planche, G., and Nielsen, T. “Experience Gained in the Frame of SILEX
Programme Development and Future Trends,” Proceedings of the 16th International
Communications Satellite Systems Conference, American Institute of Aeronautics
and Astronautics, Washington, DC, USA, 1996, pp 779 - 792.
Jono, T., Nakagawa, K., Suzuki, Y., Yamamoto, A. “Optical Communications System of
OICETS,” Proceedings of the 16th International Communications Satellite Systems
Conference, American Institute of Aeronautics and Astronautics, Washington, DC,
USA, 1996, pp 793 - 801.
Lambert, S. and Casey, W., Laser Communications in Space. Artech House, Boston, 1995.
Lebreton, J. P., “The Huygens Probe,” Proceedings of the Symposium on Titan. 1992,
287-292.
Lorenz, R. D., “The Surface of Titan in the Context of ESA’s HUYGENS Probe,” ESA
Journal. 93/4, 275-292.
Lorenz, R. D., Exploring the Surface of Titan, University of Kent at Canterbury, 1994.
Lorenz, R. D. and Nock, K. T., “BETA - Balloon Experiment at Titan,” Proceedings of the
Second IAA International Conference on Low-Cost Planetary Missions. 1996,
Paper IAA-L-0606.
Lunine, J. I., Stevenson, D. J., and Yung, Y. L., “Ethane Ocean on Titan,” Science. 222,
1983, 1229-1230.
Lunine, J. I., “Plausible Surface Models for Titan,” Proceedings of the Symposium on
Titan. 1992, 233-239.
Lunine, J. I., “Does Titan have an Ocean? A Review of Current Understanding of Titan’s
Surface,” Reviews of Geophysics. 31,2, 1993, 133-149.
Mitchell, B. J., “Conceptual Design of a Modified XBT for Use in the Oceans of Titan,”
Proceedings of the International Telemetering Conference. 1994, 104-111.
Mitchell, B. J., “Global Exploration of Titan’s Climate: Off The Shelf Technology and
Methods as an Enabler,” Proceedings of the European Telemetering Conference.
1996, XVII - XXVII.
Penzo, P., “The Outer Planets: Getting There Fast,” Proceedings of the Second IAA
International Conference on Low-Cost Planetary Missions. 1996, Paper IAA-L-
0601.
Srokosz, M., Challenor, P. G., Zarnecki, J. C., and Green, S. F., "Waves on Titan,"
Proceedings of the Symposium on Titan, 1992, 321-324.
Zarnecki, J. C., McDonnell, J. A. M., Green, S. F., Stevenson, T. J., Birchley, P. N. W.,
Daniell, P. M., Niblett, D. H., Ratcliff, P. R., Shaw, H. A., Wright, M. J., Cruise, A.
M., Delderfield, J., Parker, D. J., Douglas, G., Peskett, S., Morris, N., Geake, J. E.,
Mill, C. S., Challenor, P. G., Fulchignoni, M., Grard, R. J. L., Svedhem, H., Clark,
B., Boynton, W. V., “A Surface Science Package for the Huygens Titan Probe,”
Proceedings of the Symposium on Titan. 1992, 407-409.
A SMALL TELEMETRY SYSTEM
ABSTRACT
A small PCM telemetry system designed for the flight test telemetry task of a new
rotorcraft is introduced in this paper. It provides a flexible frame format, set up
entirely by the user in advance, to meet the requirements of different flight testing
phases. In this telemetry system, the data are low in rate and volume but very valuable,
with stringent quality and transmission accuracy requirements. Data encrypting and
channel encoding techniques are employed to guarantee the quality and security of the
data. A microprocessor-based system architecture is adopted in order to process the data
flexibly. Real-time data processing, monitoring, and post-flight analysis are performed
by PC-type computers. All key components of the system may be programmed, which keeps
the cost of total system integration relatively low.
KEY WORDS
INTRODUCTION
The flight test experiment program of a new rotorcraft is being conducted by the Beijing
University of Aero. & Astro. As an important part of the program, a small telemetry
system has been developed by Division 202, Beijing University of Aero. & Astro. The
system is responsible for collecting the different kinds of telemetry raw data used for the
checking and controlling of the craft. It will be utilized to support the development and
qualification test of the craft. The system is critical to the successful conduct of test and
evaluation in the experiment program.
The telemetry system is comprised of the airborne data acquisition system and the small
PCM ground station. It can provide a flexible frame format to meet the requirements of
the different flight testing phases. Through its various data interfaces, the system can
collect flight testing data in real time and perform both real-time and post-flight data
processing. Most parts of the system are under the control of microprocessors, which
carry out much of the data processing work in addition to managing and monitoring other
parts. This means most functions can be implemented in software instead of hardware,
enhancing the flexibility and adaptability of the system. All of the system parameters
can be easily modified in software in advance, and several software packages for
different applications have been developed.
AIRBORNE SYSTEM
As shown in Figure 1, the airborne system gathers the flight test data and information
by means of one analog interface and two digital interfaces. Digital interface 1 is
connected to the flight control computer, from which the system gathers important flight
parameters such as attitude, velocity, and altitude. These parameters may be digital or
on-off signals. Digital interface 2 is connected to the airborne command system, through
which each instruction decoded by the command decoder can be gathered and transmitted
back to the ground. The ground operators can therefore verify whether each instruction
was correctly sent out and carried out, guaranteeing high reliability of the total
command system. The primary function of the analog interface is to collect the various
kinds of flight-state data of the craft. The analog outputs from the sensors enter the
interface for amplification, level shifting, and filtering, as needed to provide a
0-5 Volt level for input to a microprocessor-controlled A/D conversion device. Under the
control of the microprocessor and timing logic, and according to a frame format, each
kind of signal is collected in turn; the microprocessor then performs the data
encrypting and channel encoding functions to generate the final PCM data stream,
followed by signal translation, which is sent to the transmitter.
The frame format is actually realized by a special data table embedded in the software,
which is easily modified by users in advance to meet the different requirements of
flight test. The secret keys are stored in an EPROM and can be generated by means of
random functions. The secret keys can be changed at any time simply by installing a
different EPROM, so that the encrypted telemetry data cannot be deciphered within a
limited period.
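The table-driven frame format described above can be sketched as follows. The channel names and table contents here are hypothetical; only the mechanism follows the text:

```python
# Hypothetical frame table: each entry names the source sampled for that word slot.
FRAME_TABLE = ["SYNC", "ALTITUDE", "VELOCITY", "CMD_ECHO", "ANALOG_01", "ANALOG_02"]

def build_frame(sample):
    """Commutate one frame by sampling each source in table order.
    `sample` maps a source name to its current 16-bit word."""
    return [sample(name) for name in FRAME_TABLE]

# Changing the flight-test configuration means editing FRAME_TABLE, not the code.
frame = build_frame(lambda name: hash(name) & 0xFFFF)
```

This is why the format can be reconfigured between flight testing phases without touching the acquisition software itself.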
The airborne data acquisition system provides data gathering for analog signals, digital
signals, on-off signals, and command signals. The frame length can be changed from 16 to
128 words. All channels are allocated a 16-bit word size; initial digitizations are 4 to
8 bits, and subsequent data merging, encrypting, and channel encoding expand the useful
data to 16 bits. The analog interface can provide an input configuration of 16
differential channels or 32 single-ended channels, with random channel addressing. The
accuracy of A/D conversion is 8 bits. The digital interface is bi-directional and
programmable. The frame pattern is the 16-bit optimum pattern EB90 recommended by IRIG,
the bit rate is 9.6 kbit/s, and the PCM stream output is in randomized NRZ.
GROUND STATION
The downlink telemetry signal is demodulated to baseband by an S-band receiver. Through
the signal equalizer, the baseband signal is amplified and translated to a standard TTL
voltage level signal, from which the bit synchronizer employs an all-digital
phase-locked loop to extract the bit clock and a digital in-phase integrate-and-dump
circuit followed by a threshold hard decision to recover the desired bits. Utilizing the
recovered data and the related in-phase clock, the microprocessor performs the functions
of frame pattern detection, establishment of frame and word synchronization, data stream
decommutation, channel decoding, and data deciphering. The final processed data are sent
into a dual-port communication buffer, through which the computer reads out all needed
raw data and implements a series of data processing functions, such as engineering unit
conversion, data analysis, storage, and real-time monitoring.
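The integrate-and-dump plus hard-decision stage can be sketched as below, assuming NRZ samples with a known integer number of samples per bit:

```python
def integrate_and_dump(samples, samples_per_bit, threshold=0.0):
    """Recover bits from baseband NRZ samples: sum (integrate) each bit
    interval, then make a hard decision against the threshold."""
    bits = []
    for i in range(0, len(samples) - samples_per_bit + 1, samples_per_bit):
        total = sum(samples[i:i + samples_per_bit])  # integrate over one bit
        bits.append(1 if total > threshold else 0)   # dump + hard decision
    return bits

# Noisy NRZ: +1 levels for '1', -1 levels for '0', 4 samples per bit
rx = [1.1, 0.8, 1.2, 0.9, -0.7, -1.3, -0.9, -1.1]
print(integrate_and_dump(rx, 4))  # [1, 0]
```

Averaging over the bit interval is what gives the stage its noise rejection before the threshold decision.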
Under the control of the microprocessor, the data output parts can send out data in the
special format, parallel or serial, required by other related ground support systems.
This has two primary functions:
(1) The GPS data reflecting the current position of the rotorcraft are
continuously sent to the ground antenna tracking and controlling system so
that the ground antenna can be collimated and directed toward the
rotorcraft, allowing the ground radio system to work under better conditions.
(2) The working state parameters of the airborne radio instruments are provided
to the ground control room, so the operators can monitor their state at any
time and take measures to control them in time.
In the ground telemetry system, the microprocessor acts as a frame synchronizer; it can
establish and maintain frame and word synchronization by employing a changeable
error-bit decision threshold and three-state synchronization protection. The frame
format is realized by a data table, just as in the airborne system. The parameters of
the three-state protection and the decision thresholds in the various states can be set
by the computer. In addition, a timer of the microprocessor serves as a simple time code
generator, which may be reset by an external timing signal or by the computer itself.
The bit synchronizer has a loop bandwidth of 120 Hz and can maintain synchronization
even when 64 consecutive “0”s or “1”s occur. The frame pattern decision threshold may be
set from 0 to 3, and the check and protect frame numbers of the three-state protection
may be changed from 0 to 7. The accuracy of the time code generator is 0.1 ms, with a
maximum timing length of up to 4 hours. The raw telemetry data are stored on the hard
disk of the computer; the maximum data storage time is up to 4 hours. The computer
provides real-time display of selected important data, in raw or engineering units, in
digital or graphical form.
Considering the simplicity of data processing, a linear block code is adopted for
channel encoding. Since the bit error rate of the data output from the receiver is
10^-4, a (16,8) block code which can correct two-bit errors is selected to ensure that
the equivalent mean bit error rate is less than 10^-7. A fast table-lookup method is
employed to implement the data encoding and decoding, greatly reducing the time
previously spent by the microprocessor and freeing it to perform more functions. The two
data tables corresponding to encoding and decoding are both generated by computer
simulation in advance.
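The table-lookup approach can be illustrated with a smaller code. The sketch below uses an (8,4) extended Hamming code (single-error-correcting) rather than the paper's (16,8) two-error-correcting code, but the encode and decode tables are built and used in the same way:

```python
# Generator rows of an (8,4) extended Hamming code (4 data bits + 4 parity bits).
G = [0b10000111, 0b01001011, 0b00101101, 0b00011110]

def encode(data4):
    """XOR together the generator rows selected by the 4 data bits."""
    word = 0
    for i, row in enumerate(G):
        if data4 & (1 << (3 - i)):
            word ^= row
    return word

# Encode table (16 entries) and decode table (256 entries), both generated
# "by computer simulation in advance", as in the paper.
ENC = [encode(d) for d in range(16)]
DEC = [min(range(16), key=lambda d: bin(w ^ ENC[d]).count("1"))
       for w in range(256)]

# Round trip with a single-bit channel error
tx = ENC[0b1011]
rx = tx ^ (1 << 5)          # corrupt one bit
assert DEC[rx] == 0b1011    # a single table lookup corrects it
```

At run time, encoding and decoding each cost one array index, which is why the method frees the microprocessor for other work.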
CONCLUSION
The prototype has been completed, and the overall system has been tested. The results
show that the design of the system is reasonable and its performance is satisfactory.
The system’s reliability, flexibility, and simplicity, as well as the transmission
accuracy of the data, were significantly improved by the application of microprocessors.
This system will certainly be put into practice.
ACKNOWLEDGMENTS
The author would like to thank professor Zhang Mingrui and senior engineer Yan
Shanpeng of department of electronics engineering, Beijing University of Aero. & Astro.
for their useful suggestion and cooperation.
REFERENCES
[Figure 1: Airborne system block diagram — A/D converter, microprocessor and control circuits, signal translator]
[Ground station block diagram — signal equalizer, microprocessor and control circuits, output parts]
Monica M. Sanchez
Center for Space Telemetering and Telecommunications Systems
Klipsch School of Electrical and Computer Engineering
New Mexico State University
Las Cruces, NM 88003-001
ABSTRACT
NASA’s Space Network (SN) provides both single access (SA) and multiple access (MA)
services through a pre-scheduling system. Currently, a user’s spacecraft is incapable of
receiving service unless prior scheduling occurred with the control center. NASA is
interested in efficiently utilizing the time between scheduled services. Thus, a demand
assignment multiple access (DAMA) service study was conducted to provide a solution.
The DAMA service would allow the user’s spacecraft to initiate a service request. The
control center could then schedule the next available time slot upon owner approval. In this
paper, the basic DAMA service request design and integration is presented.
KEY WORDS
NASA’s Space Network (SN), Demand Assignment, Multiple Access, Doppler Shift,
Spread Spectrum
INTRODUCTION
NASA’s Space Network (SN) provides both single access (SA) and multiple access (MA)
services on the Tracking and Data Relay Satellites (TDRS) through a pre-scheduling
system. As technology advances, the spacecraft being deployed today have the ability to
perform self diagnostic tests. In the event of an emergency on board the spacecraft, the
ability to relay the emergency to the spacecraft’s ground station via a demand assignment
multiple access (DAMA) service would prove to be advantageous. The control center
could then contact the spacecraft owner with the service request information and next
available time slot. The spacecraft owner could then take the necessary actions to remedy
the emergency.
The DAMA service would be an order wire type of service in that only identification of
spacecraft and type of service required would be transmitted [1]. The transmitted signal
would then have a low data rate. The DAMA service would have a unique Pseudo
Random Noise (PN) code assigned to distinguish between a scheduled MA user and
DAMA user. The proposed DAMA service requires that the current TDRSS receiver be
modified in such a way that the spacecraft has the capability of initiating an MA service
request without pre-scheduling.
Since the DAMA service is a demand assignment service, the SN does not know the
position of the spacecraft prior to reception of the incoming signal. Therefore, the DAMA
service requires a new method of using the phased array antenna configuration to provide
global coverage. With the global coverage, the problem of frequency shifting due to the
Doppler effect arises. The expected worst case frequency shift exceeds the current SN
constraint. Thus, a modification to the current SN receiver in the form of a DAMA service
processor is required. Figure 1 displays the integration of the DAMA service processor
with the current system.
The main obstacle in the design of the DAMA service processor is to recover the carrier
frequency of the incoming signal under severe Doppler shifts. This paper discusses how
the implementation of a bank of sub-band Fast Fourier Transforms (FFTs) which span the
Doppler shift frequency range are used to recover the carrier frequency in each sub-band.
Real-time signal processing is then performed to determine which carrier frequency
corresponds to a potential DAMA service user. The DAMA service processor can then
process the signal at a known carrier frequency.
ANTENNA CONFIGURATION
The MA system uses a 30 element phased array antenna. As a result, a spot beacon in the
direction of the desired spacecraft is achieved by appropriately weighting the array. For the
DAMA service, the location of the spacecraft and time of transmission is not known.
Therefore, a global beacon is required to assure that at any time or location TDRS is
capable of receiving the signal from an unknown spacecraft. Previous work was done in
determining whether or not the phased array antenna could be configured in such a way to
provide global coverage [2]. It was found that the use of only a single element in the array
could achieve a global beacon.
With the new single element configuration of the antenna, the antenna gain is 16 dB. Since
the DAMA service is of order wire type, the DAMA service request will be at a low data
rate of 1 Kbps. Due to the lower data rate and antenna gain, the link power must be
examined to ensure enough power is seen at the TDRS MA antenna. A link budget was
performed to calculate the required and expected received power. The link budget
evaluations appear in Table 1. From the link budget, a large link margin can be achieved
with the low data rate of the DAMA service and the gain of the single element of the
phased array.
Both MA and DAMA users will be present in the single element of the phased array
antenna. This brings about the question as to whether the MA signal has higher gain than
the DAMA user. TDRS forms the incoming signal by multiplexing the 30 elements of the
phased array antenna. Therefore, each signal in each element in the array has the same
gain. The DAMA user and any MA users that happen to be in the single element chosen for
the DAMA service will have the same gain acting on each respective signal. The defining
factor as to whether or not the MA users signal strength will out power the DAMA user
signal strength is the power flux density of each signal. The power flux density, S, is
determined by

S = EIRP / (4 π d²)    (1)
where EIRP is the effective isotropic radiated power and d is the slant range [3]. Using
equation (1), the power flux densities for the DAMA and MA user in the single element
were calculated to be -142.48 dBW/m² and -140.98 dBW/m² respectively. It can be seen
that the power flux densities are very similar. Therefore, the MA user and DAMA user
have the same relative power in a single element. By using a single element of the
phased array antenna, a global beacon can be achieved to provide both link closure and
an appropriate power flux density.
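Equation (1) in dB form can be sketched as below. The EIRP and slant range values here are hypothetical placeholders; the paper does not list its link parameters at this point:

```python
import math

def flux_density_dbw_m2(eirp_dbw: float, slant_range_m: float) -> float:
    """Power flux density S = EIRP / (4*pi*d^2), expressed in dBW/m^2."""
    spreading_db = 10 * math.log10(4 * math.pi * slant_range_m ** 2)
    return eirp_dbw - spreading_db

# Hypothetical user: 21 dBW EIRP at a 44,000 km slant range to TDRS
s = flux_density_dbw_m2(21.0, 4.4e7)
```

Note that doubling the slant range lowers S by about 6 dB, which is why the DAMA and MA users at similar ranges show similar flux densities.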
DOPPLER SHIFT
When the White Sands ground station configures the phased array antenna, the ground
station knows the incoming nominal frequency of the spacecraft to within ±3 kHz. So, the
ground station receivers have a constraint of ±3 kHz of uncompensated Doppler shift that
can be accommodated. Therefore, the amount of Doppler shift to be expected due to the
global beacon must be examined. It must be determined whether or not the expected worst
case Doppler shift is within the constraint of the ground station receivers. The shifted
frequency due to Doppler, fd, can be found by using

fd = ft (1 ± Vs / c)    (2)
where ft is the transmitting frequency, c is the speed of light, and Vs is the velocity of the
spacecraft [4]. The sign of the velocity depends on whether the spacecraft is moving
toward or away from the detector, respectively. Using equation (2), the worst case
Doppler shift to be expected with a global beacon was calculated to be ±64 kHz. The
ground station receivers would be incapable of receiving a signal coming from a
spacecraft with
the expected worst case Doppler shift. Thus, a method must be implemented to
accommodate the expected worst case Doppler shift.
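Equation (2) gives the shift magnitude directly. The sketch below assumes an S-band nominal frequency near 2287.5 MHz (the TDRSS MA return band) and an assumed worst-case line-of-sight velocity of about 8.4 km/s; together these reproduce a shift on the order of ±64 kHz:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(f_t_hz: float, v_los_m_s: float) -> float:
    """Doppler shift fd - ft = ft * v/c for line-of-sight velocity v
    (positive toward the receiver)."""
    return f_t_hz * v_los_m_s / C

# Assumed values: TDRSS MA return frequency, ~8.4 km/s worst-case LEO velocity
shift = doppler_shift_hz(2287.5e6, 8.4e3)
```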
The FFT is a signal processing tool widely used to determine the carrier frequency of a
signal. The carrier frequency of a DAMA user within a spectrum of scheduled MA users
can be found by combining the power of the FFT and the characteristics of a spread
spectrum system. By making the chip rate of the DAMA user much less than the scheduled
MA user chip rate, the DAMA user’s spectrum should be the prominent one with the MA
user looking like noise. The ratio between the DAMA user chip rate and the MA user chip
rate would be determined by the current constraints of the SN. The DAMA service should
not interfere with the current operation of the SN. The goal of the addition of a DAMA
service to the current SN is to broaden the capabilities of the SN which efficiently utilizes
the time of the SN.
The MA system has a specific chip rate for spreading which does not present a problem.
The problem arises with the data rate restrictions which correspond to a specific power
flux density. Currently, spacecraft must adhere to power flux density limits to avoid
interference with other users operating at the same frequency. Thus, new constraints on the
data rates that an MA user is able to transmit would defeat the purpose of the addition of a
DAMA service. Therefore, further research into the lowest chip rate that a DAMA user is
able to operate at must be done. Figure 3 shows the results of a simulation where an MA
user and a DAMA user are present in the spectrum.
The parameters of the simulation run to achieve the results in Figure 3 were an MA user
operating at 10 Hz and a DAMA user at 30 Hz, which corresponds to a Doppler shift of
20 Hz. The data rates of the DAMA user and MA user were 1 Kbps and 10 Kbps respectively.
The results of Figure 3 assume that only one MA user was present at 10 Hz. If more than
one MA user were present at 10 Hz, the power of each signal would be combined. Thus, the
MA users’ spectrum would become the dominant one, and the DAMA user would look like
noise as the number of MA users increased. The FFT would then be incapable of detecting
the carrier frequency of a potential DAMA user.
Since all scheduled MA users are received by the SN at the same nominal frequency to
within ±3 kHz, simply taking an FFT of the incoming signal would not be sufficient to
determine the carrier frequency of a DAMA user. By filtering the incoming signal into
sub-bands spanning the Doppler shift range, the carrier frequency within each respective
sub-band could be determined by an FFT. The region over which the scheduled MA users
should appear would be assigned a separate MA sub-band. As a result, the spectrum of a
DAMA user at a lower or higher frequency would not be affected by the scheduled MA
users, and the carrier frequency could then be determined. Since the position of the
spacecraft is unknown, the DAMA user could have a Doppler shift which results in a
frequency equal to that of an MA user. Therefore, the signal power in the MA sub-band
would need to be compared against a threshold value equal to the signal power due to the
number of expected MA users. If the signal power is greater than the threshold, then a
potential DAMA user may be contained within that sub-band. Figure 4 is the block diagram
of the carrier recovery of the DAMA processor. Once the carrier frequency of the DAMA
user is determined, the signal is then demodulated and despread [5]. The control center
would then verify the spacecraft's SN authorization and contact the owner for further
action.
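The sub-band search described above can be sketched in a few lines. This is a minimal sketch: the sample rate, FFT size, sub-band count, and threshold rule are illustrative choices, not values from the SN design.

```python
import numpy as np

# Illustrative sub-band FFT carrier search: FFT the record, restrict to
# the Doppler uncertainty region, and flag any sub-band whose peak bin
# power exceeds a threshold over the estimated noise floor.  All
# parameters below are assumptions for this sketch.

FS = 8000.0          # complex-baseband sample rate (Hz), illustrative
N = 4096             # FFT length
DOPPLER_SPAN = 3000  # search +/- 3 kHz of Doppler uncertainty

def detect_carrier(x, n_subbands=6, threshold=10.0):
    """Return (sub-band index, estimated carrier in Hz) for the strongest
    sub-band peak exceeding `threshold` times the median bin power, or
    None when no sub-band clears it (no DAMA user present)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft(x, N))) ** 2
    freqs = np.fft.fftshift(np.fft.fftfreq(N, d=1.0 / FS))
    keep = np.abs(freqs) <= DOPPLER_SPAN       # Doppler uncertainty region
    spec, freqs = spec[keep], freqs[keep]
    floor = np.median(spec)                    # crude noise-floor estimate
    best_bin = best_band = None
    for i, band in enumerate(np.array_split(np.arange(len(spec)), n_subbands)):
        k = band[np.argmax(spec[band])]
        if spec[k] > threshold * floor and (best_bin is None or spec[k] > spec[best_bin]):
            best_bin, best_band = k, i
    return None if best_bin is None else (best_band, freqs[best_bin])

rng = np.random.default_rng(0)
t = np.arange(N) / FS
x = np.exp(2j * np.pi * 1200.0 * t)            # simulated carrier at +1200 Hz
x = x + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
hit = detect_carrier(x)
```

A real implementation would also subtract the expected MA-user power from the threshold in the MA sub-band, as described above.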
ACKNOWLEDGMENTS
This research was supported through NASA Grant NAG 5-1491 to New Mexico State
University. I would like to thank Dr. Stephen Horan for his advisement on this research
project. Also, I would like to thank Chris Long, research assistant at New Mexico State
University.
REFERENCES
[1] Horan, Stephen, “An Operational Concept for a Demand Assignment Multiple Access
System for the Space Network,” NMSU-ECE-95-007, Las Cruces, NM, May 1996.
[3] Ha, T., Digital Satellite Communications, Second Edition, McGraw-Hill Publishing
Company, New York, 1990.
[4] Halliday, D. and Resnick, R., Fundamentals of Physics, Third Edition, John Wiley &
Sons, Inc., New York, 1988.
[5] Long, Christopher, “Data Processing for NASA’s TDRSS DAMA Channel,” NMSU-
ECE-96-005, Las Cruces, NM, May, 1996.
[6] Gardner, James, “Non-Coherent FFT Based Receiver for the GVLS,” NMSU-ECE-
95-009, Las Cruces, NM, November 1995.
[7] Dixon, R.C., Spread Spectrum Systems, Second Edition, John Wiley & Sons, Inc.,
New York, 1984.
[8] Ludeman, Lonnie, Fundamentals of Digital Signal Processing, John Wiley & Sons,
Inc., New York, 1986.
[9] Ziemer, R. and Peterson, R., Digital Communications and Spread Spectrum Systems,
Macmillan Publishing Company, New York, 1985.
[10] National Aeronautics and Space Administration, “Space Network (SN) User’s
Guide," STDN No. 101.2, Goddard Space Flight Center, Greenbelt, Maryland,
September, 1988.
DATA PROCESSING FOR NASA’S TDRSS DAMA CHANNEL
Christopher C. Long
Center for Space Telemetry and Telecommunications Systems
The Klipsch School of Electrical and Computer Engineering
New Mexico State University, Las Cruces, NM 88003-0001
ABSTRACT
Presently, NASA's Space Network (SN) does not have the ability to receive random
messages from satellites using the system. Scheduling of the service must be done by the
owner of the spacecraft through Goddard Space Flight Center (GSFC). The goal of NASA
is to improve the current system so that random messages generated on board the
satellite can be received by the SN. The messages will be requests for service that the
satellite's control system deems necessary. These messages will then be sent to the owner
of the spacecraft where appropriate action and scheduling can take place. This new service
is known as the Demand Assignment Multiple Access system (DAMA).
KEY WORDS
INTRODUCTION
The goals of the new service include:
1. Design a system that can truly detect random requests with no prior knowledge
of a request.
The current system has two basic types of service. The first type of service is the single
access (SA) system. This is simply a time division multiple access (TDMA) system. The
ground station will gimbal antennas on the relay satellite to point to the desired user
satellite. The satellite will then have a fixed, pre-defined amount of time on the SA
channel. When the time has expired the system will set up service to another satellite. This
type of service is generally scheduled well in advance, and is very well defined. The
second type of service is the multiple access (MA) system. This is a code division multiple
access (CDMA) system. Each user in this service shares a common carrier frequency (i.e.,
a single channel) but spreads its data with a unique pseudo-random noise (PN) code.
The PN codes are orthogonal to each other and therefore allow each signal to be deciphered
in the receiver based on its particular PN code.
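The shared-channel behavior described above can be illustrated with toy orthogonal codes. The short Walsh-style codes and data lengths below are illustrative stand-ins, not the actual SN PN codes (which are long PN sequences).

```python
import numpy as np

# Toy CDMA sketch: two users share one channel, each spreading +/-1 data
# with its own code; orthogonality lets the receiver separate them by
# correlation.  Codes and sizes are illustrative, not real SN PN codes.

rng = np.random.default_rng(1)
CHIPS_PER_BIT = 8

code_a = np.array([1, 1, 1, 1, -1, -1, -1, -1])   # two exactly orthogonal
code_b = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # +/-1 codes
assert code_a @ code_b == 0

def spread(bits, code):
    # each bit is multiplied chip-by-chip with the user's code
    return np.repeat(bits, CHIPS_PER_BIT) * np.tile(code, len(bits))

def despread(chips, code):
    # correlate each bit-length block against the user's own code
    blocks = chips.reshape(-1, CHIPS_PER_BIT)
    return np.sign(blocks @ code)

bits_a = rng.choice([-1, 1], size=16)
bits_b = rng.choice([-1, 1], size=16)
channel = spread(bits_a, code_a) + spread(bits_b, code_b)  # shared carrier

recovered_a = despread(channel, code_a)
recovered_b = despread(channel, code_b)
```

Because the codes are orthogonal, each user's correlation rejects the other user's signal entirely in this noiseless sketch.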
The system currently knows, a priori, the position of the various satellites using the MA
system. Therefore the ground station will electronically configure a 30 element phased
array antenna located on the relay satellite to receive the various MA signals. The ground
station will also predict the signal’s Doppler shift, which is caused by the user satellite
moving relative to the relay satellite. This, along with knowledge of the satellite's orbital
parameters and its location relative to the relay satellite, will allow the system to
accurately predict the signal’s carrier frequency. This information is then used to configure
the MA receiver to the proper carrier frequencies for receiving the MA signals. Prior
knowledge of the Doppler offset, and thus the carrier frequency, of the incoming signal is
fundamental in recovering the desired data in the present system.
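The Doppler prediction step rests on the first-order Doppler relation. A back-of-envelope sketch follows; the carrier value (the nominal TDRSS S-band MA return frequency) and the relative velocity are assumed here for illustration only.

```python
# First-order Doppler offset sketch.  C is the speed of light; the
# carrier frequency and relative velocity are illustrative assumptions,
# not values taken from an actual SN link budget.

C = 2.998e8            # speed of light, m/s
F_CARRIER = 2.2875e9   # assumed nominal S-band MA return carrier, Hz

def doppler_shift(f_carrier, v_relative):
    """First-order Doppler offset (Hz) for relative velocity v (m/s),
    positive when the user closes on the relay satellite."""
    return f_carrier * v_relative / C

# an assumed 350 m/s line-of-sight relative velocity gives an offset
# of roughly 2.7 kHz, inside the +/- 3 kHz uncertainty quoted later
shift = doppler_shift(F_CARRIER, 350.0)
```

This is the quantity the ground station precomputes for scheduled MA users, and the quantity the DAMA processor must instead estimate blindly.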
The goal of the new system is to allow satellites to use the MA channel to broadcast
random messages requesting service by the SN [1]. This service will be called the Demand
Assignment Multiple Access or the DAMA system. Once a request has been received by
the system it will be verified and sent to the satellite’s owner. The owner can then ignore
the request or take appropriate action such as scheduling time on the SA or MA channel.
The single access (SA) portion of the network will not be affected in the new system. It is
important to note that in this case the ground station will not know when, where, or from
whom the service request is generated. This problem is addressed later in the paper. The
basic flow of this DAMA system is shown in Figure 1.
As was discussed above the MA channel is a CDMA system. In the new system satellites
will generate service requests using a common PN code dedicated for service requests
only. This will enable other scheduled MA traffic and the random service request traffic to
share the common channel. A single element of the 30-element phased array antenna will
be used to service DAMA requests. The evaluation of the single element antenna
performance as well as link performances are contained in separate reports [3] and [4].
Finally, an outline of data processing, once the raw data is recovered, must be examined.
IMPLEMENTATION OF CHANGES
As was discussed above a common PN code will be used for all DAMA requests. A single
element on the phased array will also be used to receive DAMA requests. However,
because this is a random system the SN does not know the satellite’s location when a
service request is generated, and for that matter if any satellite is even broadcasting service
requests at all. This leads to the problem of DAMA signal detection and then prediction of
the DAMA user's carrier frequency. The problem is really twofold. First, since the signal is
random, it may or may not be present, depending on whether requests are being made.
Second, because of the lack of knowledge of the satellite's position when the request is
generated, the carrier frequency (affected by Doppler shift) is not known. An innovative
implementation of the Fast Fourier Transform (FFT) provides a solution to both
problems [1] and [4]. In short, because the DAMA channel is basically an orderwire
service (only requests are carried on it), its data rate does not have to be as high as that of
the other MA users, who are transmitting data. Slowing the data rate of the DAMA user
will facilitate the detection of a DAMA user in the MA frequency spectrum, as shown in
Figure 2. Once the signal has been detected and determined to be a DAMA user, analysis
of its spectral components can be performed and its carrier frequency can be
determined [4]. The correct carrier frequency will be used by
the receiver processing the signal.
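The benefit of slowing the DAMA data rate can be shown numerically: at equal power, a lower bit rate concentrates energy into a narrower main lobe, so the carrier region stands further above the per-bin noise floor. The rates, record length, and noise level below are illustrative choices, not the actual channel parameters.

```python
import numpy as np

# Compare spectral "visibility" of random NRZ/BPSK data at two bit
# rates with equal power in additive noise.  The lower rate should
# yield a larger peak-to-floor ratio, i.e. it is easier to detect.

FS = 100_000          # sample rate (Hz), illustrative
N = 1 << 14           # record length
rng = np.random.default_rng(2)

def peak_to_floor(bit_rate):
    """Peak FFT-bin power over the median bin power for random
    unit-power NRZ data at `bit_rate`, observed in unit-variance noise."""
    sps = FS // bit_rate                       # samples per bit
    bits = rng.choice([-1.0, 1.0], size=N // sps + 1)
    bb = np.repeat(bits, sps)[:N]              # unit-power NRZ baseband
    bb = bb + rng.standard_normal(N)           # noisy channel
    spec = np.abs(np.fft.fft(bb)) ** 2
    return spec.max() / np.median(spec)

slow = peak_to_floor(1_000)    # DAMA-like (orderwire) rate
fast = peak_to_floor(10_000)   # scheduled-MA-like rate
```

With the same transmitted power, the 1 kbps signal's main lobe is a tenth the width of the 10 kbps signal's, so its peak bins rise correspondingly higher above the floor.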
The proposed receiver configuration of the system is shown below in Figure 3. It consists
of the new Doppler predictor which basically determines if there is a DAMA user and then
determines the carrier frequency of the DAMA user. Once the carrier frequency has been
determined it is sent into a carrier recovery system, which is basically a carrier tracking
system. Finally the carrier and phase information obtained by the carrier recovery loop is
then passed to a demodulator for processing. The design of the carrier recovery loop is
shown in Figure 4 and the demodulator design is shown in Figure 5. The carrier tracking
loop and the demodulator are standard and can be found in almost any communications
book.
To this point the emphasis has been paid to the reception and demodulation of the DAMA
users data. To complete this study the handling and use of the received data must be
discussed. Once the data is recovered it will go through following procedures.
1. The user must have valid authorization to use the SN. That is, the user must have a
valid electronic identification to enter the system.
3. Determine if the request is valid for that particular user. That is, the satellite's
capabilities must match its request.
4. The next step is to predict the orbital parameters of the satellite to determine if
service is possible (i.e., the satellite must be in view of a relay satellite at the time of
desired service).
6. Coordinate this information with the satellite's owner. The owner can then request
the service, decline the service or, if applicable, explore alternative options.
7. If the owner requests the service, the service will be scheduled in the computer and
the SN will then use the Single Multiple Access (SMA) forward service to inform
the satellite of the service.
8. If the owner does not desire service, the SMA service will inform the satellite that
no service will be scheduled.
The majority of these steps can be easily accomplished with a PC and COTS software.
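The post-demodulation steps above can be sketched as a simple validation pipeline. Everything here is a hypothetical stand-in: the identifiers, field names, and checks do not correspond to real SN records or any particular COTS package.

```python
# Hypothetical sketch of the request-handling steps: authorization
# check, then capability check, then (in a real system) orbit
# prediction and owner coordination.  All data is made up.

AUTHORIZED = {"SAT-42"}                       # hypothetical valid SN user IDs
CAPABILITIES = {"SAT-42": {"MA", "SMA"}}      # services each user supports

def process_request(user_id, service):
    """Run a decoded DAMA request through the validation steps."""
    if user_id not in AUTHORIZED:
        return "rejected: unknown user"
    if service not in CAPABILITIES.get(user_id, set()):
        return "rejected: capability mismatch"
    # orbit visibility check and owner coordination would follow here
    return "forwarded to owner"

result = process_request("SAT-42", "MA")
```

The point of the sketch is that each step is a simple table lookup or comparison, which is why a PC with COTS database software suffices.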
CONCLUSION
This design addresses the two main goals of the system. First, the use of a single antenna
element with global coverage eliminates the need for antenna pointing to receive DAMA
requests, so the SN requires no prior knowledge of a request. Combined with the use of a
dedicated PN code for the DAMA service, this makes it a truly random access service.
Second, with the adjustment of data rates for the DAMA service and the use of front end
signal processing to detect DAMA users and predict their carrier frequency, a standard
receiver design can be employed to process the DAMA requests. Once the data is
recovered it can then be processed using standard commercial off-the-shelf (COTS)
database and orbital prediction software. This makes the system simple to design while still being
effective.
ACKNOWLEDGMENTS
This research was supported via NASA Grant NAG 5-1491 to New Mexico State
University.
I would also like to thank Dr. Stephen Horan (Associate Professor Klipsch College of
Electrical Engineering Department of Telemetry Research, New Mexico State University)
for his advice, guidance, motivation, and patience in this project.
Finally, I would like to thank my research partner Monica Sanchez (Research Assistant
Klipsch College of Electrical Engineering Department of Telemetry Research, New
Mexico State University).
REFERENCES
[1] Horan, Stephen, “An Operational Concept for Demand Assignment Multiple Access
System for the Space Network”, NMSU-ECE-96-007, NASA, Las Cruces, New Mexico,
May, 1996
[2] Simon, Marvin; Omura, Jim; Scholtz, Robert; Levitt, Barry, Spread Spectrum
Communications, Volume 3, Computer Science Press, Inc., Rockville, Maryland.
[4] Sanchez, Monica, “Demand Assignment Multiple Access Service For TDRSS”
NMSU-ECE-96-006, NASA, Las Cruces, New Mexico, May, 1996
[5] TRW, “Tracking and Data Relay Satellite System Operational Overview,” Report 1,
NASA, Las Cruces New Mexico, May, 1992
[6] Gardner, James, “Non-Coherent FFT Based Receiver for GVLS.” NMSU-ECE-95-
009, Sandia National Laboratories, Las Cruces, New Mexico, November 1995
[7] NASA, “WSGT Decoder Training Class Encoding/Decoding Principles and MA, SSA,
KSA Decoding Equipment,” NASA, Las Cruces, New Mexico.
0-Hz-IF FSK/AM SUB-CARRIER DEMODULATOR
on a 6U-VME-CARD
Jonathan M. Weitzman
GDP Space Systems
300 Welsh Rd. Bld. 3
Horsham, PA 19044
215-657-5242 (Ext 36)
ABSTRACT
Aerospace Report No. TOR-0059(6110-01)-3, section 1.3.3 outlines the design and
performance requirements of SGLS (Space Ground Link Subsystem) services. GDP Space
Systems has developed a single card slot FSK (Frequency Shift Keying)/AM (Amplitude
Modulation) demodulator. An application of this service is the US Air Force Satellite
Command and Ranging System. The SGLS signal is tri-tone-FSK, amplitude modulated by
a modified triangle wave at half the data rate.
First generation FSK/AM demodulators had poor noise performance because the signal
tones were filtered and processed at IF frequencies (65, 76 and 95 kHz). Second
generation demodulators suffer from “threshold” due to non-linear devices in the signal
path before the primary noise filtering. The GDP Space Systems demodulator uses a 0-Hz-
IF topology and avoids both of these shortcomings. In this approach, the signal is first non-
coherently down converted to baseband by linear devices, then it is filtered and processed.
This paper will discuss the GDP 0-Hz-IF FSK/AM (SGLS) demodulator.
KEY WORDS
INTRODUCTION
SGLS has been around since the early 1960's. The first generation SGLS demodulators
(Figure 1) did all their filtering and signal processing at IF. This approach – minimal in
hardware – exhibited poor performance even in moderate noise. The second generation
SGLS demodulators (Figure 2) used non-linear devices (typically rectifiers) for down
converting the signal from IF to baseband where the signal was filtered and processed.
These non-linear devices worked well in moderate noise but exhibited threshold (and the
resultant poor performance) in noisy (Eb/No < 10 dB) environments. The 0-Hz-IF
demodulator described in this paper avoids both shortcomings.
[Figure 1. First generation SGLS demodulator (1 kbps data): bandpass filters at 65, 76
and 95 ±2 kHz with envelope detectors at IF, feeding a biggest picker (S, 0 and 1
outputs), digital logic and clock recovery.]
[Figure 2. Second generation SGLS demodulator (1 kbps data): bandpass filters at 65, 76
and 95 ±3 kHz with full-wave rectifiers and low pass filters, feeding a biggest picker and
clock recovery.]
An ideal demodulator can filter its input signal at either IF or baseband with equal levels of
performance. However, the filter at IF requires twice the hardware of a baseband filter.
The sin(x)/x frequency spectrum has a peak at dc, nulls spaced at odd integer multiples of
the data rate and peaks of decreasing amplitude at even integer multiples of the data rate.
When a baseband sin(x)/x signal is used to modulate an IF carrier (Fc), this same frequency
spectrum is translated up by Fc Hz. In addition, a mirror image of the spectrum also
starts at Fc but continues downward toward 0 Hz. The complexity of the IF signal is twice
that of its non-modulated baseband signal. For sin(x)/x signals, the baseband filter requires
imaginary-axis zeroes at odd integer multiples of the bit rate, while the IF filter requires its
zeroes at the carrier frequency plus and minus odd integer multiples of the bit rate; an
equivalent IF filter therefore requires twice the number of zeroes (and attendant
hardware). Another disadvantage of IF filtering is that component tolerances affect the
filter response as a percentage of carrier frequency, while at baseband the error is a
percentage of the bit rate, a much smaller value.
The SGLS signal has an inherent disadvantage in noise performance. Its SNR (Signal to
Noise Ratio) varies from one bit period to the next. This is due to the timing relation
between the AM-half-clock-rate-triangle-wave and the FSK data. The data (and FSK
frequency) transition occurs one-tenth of a bit period before the triangle wave’s maximum
or minimum. The energy (and SNR) in a data period that started before an AM minimum
is about two dB less than in a data period that started before a maximum; however, the noise
power is equal in both bit periods. Therefore, SGLS demodulators start with a noise
performance disadvantage even with optimum filtering.
HISTORY
First generation SGLS demodulators used minimal hardware and offered minimal noise
performance. They used three bandpass filters as the principal noise filters with envelope
detectors at IF for data and clock recovery (Figure 1, 1 kbps data). In theory, the bandpass
filters could be made close to optimum for best noise performance; however, optimal
filtering at IF requires complex hardware and a labor-intensive alignment. In practice, the
bandpass filters were usually just good enough to eliminate grossly out-of-band noise
while not excessively corrupting the data. Filter bandwidths were typically two to four
times the bit rate. These demodulators started off with about a five dB disadvantage (from
optimum) due to wide bandwidth filters and the resultant excess noise in the bit
detection/decision process. Because of the miserable noise performance, almost all first
generation SGLS demodulators have been relegated to the scrap heap.
Second generation SGLS demodulators (Figure 2, 1 kbps data) first down convert the
incoming signal to baseband and then filter it. They use non-linear devices (typically full
or half wave rectifiers) for down conversion. The output of each rectifier is a baseband
signal which is then low pass filtered. It is these low pass filters that serve as the principal
noise filters. While this approach allows better filtering with less filter hardware than an
optimum bandpass filtered demodulator, it does have a significant shortcoming. The
rectifiers (as do all non-linear devices) exhibit threshold. Threshold occurs when, for a
given decrease in SNR at the input of a device, the SNR at the output decreases by a
significantly larger amount. Threshold in rectifiers occurs at an SNR of approximately five
dB [1], and in a noisy environment this demodulator would have poor performance.
Typically, the bandwidth of the bandpass filters is four to six times the bit rate (a larger
bandwidth than the first generation demodulator) to avoid corrupting the data prior to
down conversion and low pass filtering. With the bandpass filtering that wide, threshold in
the rectifiers begins to occur at an Eb/No of between ten and fifteen dB. An obvious
solution is to use narrower bandpass filters, but there is a limit and more accurate (and
complex) bandpass filters would be required.
The 0-Hz-IF demodulator uses multipliers, not rectifiers, for non-coherent down
conversion and does not suffer from threshold. The 0-Hz-IF as well as the first and second
generation demodulators use similar "biggest pickers" (for data bit decision) and clock
recovery circuits. The biggest pickers and clock recovery circuits are not discussed in this
paper.
0-HZ-IF SGLS DEMODULATOR THEORY OF OPERATION
The input signal is first run through three bandpass filters. These filters serve only to limit
the out-of-band noise in the incoming signal. This ensures that the multipliers are operating
in their linear region. In the FDM001, these filters have a bandwidth of 12 kHz which is
large enough to pass the main lobe and most of the first side lobes of a two kbps signal.
The bandpass filter’s bandwidth is not critical and needs only to be wide enough to pass
the signal of interest without distortion, yet narrow enough to limit out-of-band noise.
Each of the three bandpass filtered tone signals is non-coherently down converted to
baseband. This is done by splitting each tone in two and multiplying by two quadrature
oscillators running at that tone's nominal frequency. The result of each multiplication is
low pass filtered to eliminate both the noise and the sum-frequency term. The filtered
terms are squared, summed and then square-rooted. This final square-rooted result is the
reconstructed baseband signal.
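One tone branch of the down conversion described above can be sketched numerically: multiply by quadrature oscillators at the nominal tone frequency, low pass filter, square, sum, and square-root. The tone, the small frequency offset (within the 100 ppm tolerance discussed below), and the crude moving-average stand-in for the card's low pass filter are illustrative.

```python
import numpy as np

# 0-Hz-IF branch sketch for the 65 kHz tone: non-coherent I/Q down
# conversion followed by envelope reconstruction.  Sample rate, tone
# gating, and the moving-average low pass filter are illustrative.

FS = 1_000_000.0
TONE = 65_000.0      # nominal tone frequency (Hz)
OFFSET = 5.0         # small oscillator error (~ the 100 ppm tolerance)
N = 20_000

t = np.arange(N) / FS
m = (t < 0.01).astype(float)                      # M(t): tone on for 10 ms
x = m * np.sin(2 * np.pi * (TONE + OFFSET) * t)   # received tone

i = x * np.cos(2 * np.pi * TONE * t)              # in-phase multiply
q = x * np.sin(2 * np.pi * TONE * t)              # quadrature multiply

def lowpass(v, taps=200):
    # crude moving-average stand-in for the card's low pass filter;
    # removes the sum-frequency (~130 kHz) terms
    return np.convolve(v, np.ones(taps) / taps, mode="same")

# square, sum, square-root; factor 2 restores unit amplitude
envelope = 2.0 * np.sqrt(lowpass(i) ** 2 + lowpass(q) ** 2)
```

The envelope tracks M(t) regardless of the small frequency offset, which is exactly why the quadrature oscillators need not be phase- or frequency-locked to the incoming tone.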
The 0-Hz-IF topology is only moderately demanding in its hardware implementation. The
quadrature oscillators need not be running at exactly the same frequency as the incoming
tone, but the frequency offset must be small compared to the data rate. The crudest crystal
oscillators (100 ppm) easily meet this offset requirement which also includes allowances
for the input signal's frequency uncertainty (100 ppm). The multipliers are running in their
linear region and do not suffer from threshold. The multipliers only need to have sufficient
signal overhead to stay in their linear region when high levels of noise are present. The
squaring and square-rooting devices after the low pass filters are non-linear and subject to
threshold; however, at this point the signal has been maximally filtered yielding the lowest
noise level possible, so threshold is not a problem.
The low pass filters are the principal noise filters. While an ordinary low pass filter would
suffice, the FDM001 card uses filters spectrally “matched” to the data. These matched
filters' time domain response is an integrate and dump waveform which offers a
performance gain of about one dB compared to equivalent non-matched low pass filters. A
matched filter is much easier to implement at baseband than at IF for the reasons
previously stated.
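The integrate-and-dump decision on the recovered baseband can be sketched as follows; the samples-per-bit count, noise level, and 0.5 threshold for on-off data are illustrative choices.

```python
import numpy as np

# Integrate-and-dump sketch: average each bit-length block of the
# recovered on-off baseband and threshold at the midpoint.  Sample
# count and noise level are illustrative.

SPS = 100                       # samples per bit
rng = np.random.default_rng(3)

def integrate_and_dump(v):
    """Average each bit period (the 'integrate'), sample at the bit
    boundary (the 'dump'), and decide against a 0.5 threshold."""
    return (v.reshape(-1, SPS).mean(axis=1) > 0.5).astype(int)

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
m = np.repeat(bits, SPS).astype(float)            # on-off baseband M(t)
noisy = m + 0.3 * rng.standard_normal(m.size)     # additive noise
decisions = integrate_and_dump(noisy)
```

Averaging over the full bit period reduces the noise standard deviation by the square root of the samples per bit, which is the one dB gain over non-matched filtering cited above.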
CONCLUSIONS
The 0-Hz-IF demodulator has several advantages over previous generations of FSK
demodulators. Primary noise filtering is done at baseband which yields filtering closer to
optimal and with less hardware than filtering done at IF. This gives a significant
improvement over first generation demodulators. The lack of threshold (and resultant
better performance in heavy noise) is the primary advantage of a 0-Hz-IF demodulator
over a second generation demodulator. The cost of the improved performance is an
increase in hardware complexity. Hardware complexity has less of an impact now than in
the past due to the availability of higher functionality IC’s. The second generation
demodulator’s threshold problem requires a good bit of care and complexity in the
bandpass filter design for just adequate noise performance. A second-generation, dual data
rate demodulator with reasonable noise performance requires two sets of moderately
complex bandpass filters to avoid threshold in the rectifiers. The FDM001 (a 0-Hz-IF,
dual data rate demodulator) requires only one set of crude bandpass filters, somewhat
mitigating the increase in hardware compared to a second generation demodulator, yet
has superior noise performance.
REFERENCE
[1] The proof in this reference is for a squaring device. A squaring device can be thought of
as a rectifier working with only the fundamental term as its inputs. The threshold level of a
rectifier is within one dB of the value given for a squaring device.
APPENDIX
Each of the input FSK tones can be represented as Mn(t)sin(ωnt). Mn(t) is an on-off
switching function (having a value of either 1 or 0) and signifies whether the given tone is
present or not. ωn is the radian frequency of the specific tone. The subscript n has a value
of 1, 2 or 3, and respectively refers to the 65, 76 or 95 kHz tone (a factor of 2π is implied
for the rest of the appendix). Only one Mn can be 1 at any time; there is only one tone
present at any given time.
Refer to Figure 3. The signal path ending with Ve is used. The input signal is 65 kHz + Δf
(M1 = 1). The n subscript 1 is implied and not shown for both M and ω. ω1 is equal to 65
kHz. The signal at the output of the multiplier, Va, is:
The low pass filter eliminates the sum-frequency term (cos(2ωt + Δf)). M(t) is modified by
the low pass filter but for now will still be referred to as M(t). Δf is small and the low pass
filter has no effect on cos(−Δf). The output of the low pass filter, M(t)cos(−Δf), is squared
and at Vb yields:
The square root of Vd is M(t), which has a value of either 0 or 1 depending on whether the
tone is present or not. This is a recreation of the original data.
Modification of M(t) by the low pass filter was mentioned earlier in the appendix. The
M(t) which occurs at Ve is equivalent to NRZ data that has been run through low-
pass/matched filters. The signal at Ve is an integrate and dump waveform and is optimal
for noise performance in a sampled data system.
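The equations referenced at Va and Vb did not survive reproduction here. The following is a reconstruction consistent with the surrounding text, with Δω = 2πΔf denoting the radian frequency offset, the quadrature branch included, and the factor-of-one-half scale terms (which the text drops) carried explicitly; node names Va through Ve follow Figure 3.

```latex
\begin{aligned}
V_a &= M(t)\,\sin\!\big((\omega+\Delta\omega)t\big)\,\sin(\omega t)
     = \tfrac{1}{2}M(t)\big[\cos(\Delta\omega t)-\cos\!\big((2\omega+\Delta\omega)t\big)\big],\\[2pt]
V_b &= \big[\tfrac{1}{2}M(t)\cos(\Delta\omega t)\big]^{2},\qquad
V_c  = \big[\tfrac{1}{2}M(t)\sin(\Delta\omega t)\big]^{2}\ \text{(quadrature branch)},\\[2pt]
V_d &= V_b+V_c=\tfrac{1}{4}M^{2}(t),\qquad
V_e  = \sqrt{V_d}=\tfrac{1}{2}M(t).
\end{aligned}
```

The sum of the squared branches cancels the Δω dependence, which is why the recovered envelope is independent of the small oscillator offset.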
DIGITAL FSK/AM/PM SUB-CARRIER MODULATOR
on a 6U-VME-CARD
Theodore J. Hordeski
ABSTRACT
Aerospace Report No. TOR-0059(6110-01)-3, section 1.3.3 outlines the design and
performance requirements of SGLS (Space Ground Link Subsystem) uplink
services equipment. This modulation system finds application in the U.S. Air Force
satellite uplink commanding system. The SGLS signal generator is specified as an
FSK (Frequency Shift Keyed)/AM (Amplitude Modulation)/PM (Phase
Modulation) sub-carrier modulator. GDP Space Systems has implemented, on a
single 6U-VME card, a SGLS signal generator. The modulator accepts data from
several possible sources and uses the data to key one of three FSK tone frequencies.
This ternary FSK signal is amplitude modulated by a synchronized triangle wave
running at one half the data rate. The FSK/AM signal is then used to phase modulate
a tunable HF (High-Frequency) sub-carrier.
A digital design approach and the availability of integrated circuits with a high level
of functionality enabled the realization of a SGLS signal generator on a single VME
card. An analog implementation would have required up to three rack-mounted units
to generate the same signal. The digital design improves performance, economy and
reliability over analog approaches. This paper describes the advantages of a digital
FSK/AM/PM modulation method, as well as DDS (Direct Digital Synthesis) and
digital phase-lock techniques.
KEY WORDS
SGLS, FSK/AM/PM, Ternary FSK, Dibit Command Data, AM Phase Delay, Digital
Phase-Lock Loop, Direct Digital Synthesis, Numerically-Controlled Oscillator
INTRODUCTION
GDP Space Systems has developed the FSK001, a single 6U-VME card SGLS
signal generator. See Figure 1 for a graphical representation of the SGLS FSK/AM
waveform. Earlier analog approaches to SGLS signal generation required three
separate chassis: (1) a frame synchronizer with a phase-lock loop to extract the
frame-formatted dibit encoded data and generate a rate-adjusted clock, (2) an
FSK/AM modulator and (3) a phase modulator. The analog designs have limited
tunability, require labor intensive alignments and are prone to drift over time and
temperature. The digital techniques used in the FSK001 and the availability of high
level functionality IC’s have reduced the circuit board real estate required to
generate the complex SGLS waveform. Additional benefits are increased flexibility,
accuracy and reliability with virtually no required alignment.
HISTORY
There are two common analog methods to generate a ternary FSK signal and both
have shortcomings. One method uses three oscillators free-running at the required
FSK tone frequencies. The modulating data controls a switch to select the
appropriate tone for each bit period. Because the resultant FSK signal originates from
three independent sources, alignment is required to remove its amplitude variations.
Furthermore, at each switching instant, the FSK signal exhibits a phase
discontinuity; a nearly instantaneous jump to the arbitrary phase of a different
oscillator. This phase discontinuity causes the signal to occupy a larger bandwidth
than would be required if the phase remained continuous through the frequency
transition. A second approach uses a single VCO (Voltage-Controlled Oscillator)
whose output frequency is driven to each of the three FSK tone frequencies by
applying one of three different DC voltages. In that the VCO frequency cannot
change instantaneously, the output has no phase discontinuity, i.e., it is phase-
continuous. VCO’s with a sufficient range of frequency to generate all the SGLS
tones have transfer functions that typically have linearity errors on the order of 5%
and are also prone to drifting. Even with compensating circuitry and intensive
alignment, this approach suffers from relatively large frequency errors.
The SGLS signal format requires the FSK signal to be amplitude modulated by a
modified triangle wave which is synchronized with a specific phase relationship to
the FSK frequency transitions. Historically, to accomplish this required a real estate-
intensive analog phase-lock loop. The triangle wave, which has a frequency of one-
half the modulating data rate, is phase-locked (synchronized) with a specific phase
delay to a clock derived from the modulating data clock. This phase delay, once set
in the loop design, has limited range and is not readily adjustable.
The FSK001 integrates on a single card:
(1) A frame sync for extracting and decoding dibit command data and frame
parity checking.
(2) A phase-continuous ternary FSK modulator with full selectability of the
frequency tone set.
(3) Amplitude modulation with full selectability of the modulation index and
phase delay.
(4) Phase modulation by the FSK/AM signal with a fully tunable sub-carrier
frequency and modulation index.
(5) An on-board PRN data generator with a fully tunable data rate.
In addition to the real estate economy, the FSK001, using digital signal processing,
offers significant strides in performance over the analog approach. DDS is used in
the generation of the FSK tone frequencies, the PM sub-carrier and the PRN clock.
One OCXO (Oven-Controlled Crystal Oscillator) with an accuracy of better
than 1 ppm provides timing for all of these signals. The use of NCOs
(Numerically-Controlled Oscillators) provides a high degree of accuracy and
flexibility. Applied to the FSK001, they effectively eliminate errors in: FSK
frequency and amplitude variation between FSK tones; PM sub-carrier frequency
and modulation index; amplitude modulation index and phase delay; and PRN clock
frequency.
This digital approach yields an extremely versatile modulator. FSK modulating data
is selectable from several possible sources including: (1) dibit command data, (2)
direct ternary tone control data, (3) external serial data and (4) an on-board PRN
data generator. Complete versatility is provided in the selection of the FSK tone set,
with each tone being tunable from 1 Hz to 5 MHz. The FSK signal can be
modulated at data rates from 1 bps to 500 kbps. However, if amplitude modulation
is required, then the data rate is constrained to 1 kbps, 2 kbps, 5 kbps or 10 kbps.
The amplitude modulation index is adjustable from 33 to 100 percent modulation
and phase delay is adjustable from 0 to 100 percent of the bit period. The phase
modulation index is adjustable from 0 to 3.14 radians on a sub-carrier which is
tunable from 1 Hz to 5 MHz. The FSK/AM/PM analog output of the FSK001 can be
level adjusted from +20 dBm to -30 dBm (into 75 ohms). The GDP Model 784 is a
rack-mountable, 3.5” high chassis housing the FSK001 modulator and a
microprocessor controller. The controller handles data entry from a front panel
keypad to setup the modulator, and it provides status feedback on a 2x40 alpha-
numeric front panel display. Remote interface capability is available for unit setup
and status feedback.
Refer to Figure 2 for a functional block diagram of the FSK001. Dibit command data
is the primary source of data for SGLS signal generation. When dibit command data
is selected to modulate the FSK signal, it is accepted in a serial frame format of 48
bits per frame including 8 bits of overhead. Input data rates are limited to 2.4 kbps,
4.8 kbps, 12 kbps or 24 kbps. The frame sync locks to the input data by identifying
a 7 bit sync word repeated in each frame. The frame sync, once locked, strips off
the sync word and checks frame parity against an additional parity bit in each frame.
The remaining 40 bits of serial data, with gaps left by the stripped overhead bits, are
de-multiplexed into two parallel (dibit) data streams of 20 bits each and clocked out
of the frame sync by a gated clock which is missing cycles corresponding to the
overhead bits. A new clock running at a rate of 20/48ths of the input clock rate is
needed to eliminate the data gaps. This is accomplished with a smoothing buffer
consisting of: a FIFO memory, a VCO and a PLL loop filter. The gapped data is
clocked into the FIFO by the gated clock and clocked out of the FIFO by a clock
derived from the VCO. The FIFO Half-Full Flag serves as a phase detector that
generates an error voltage which becomes the control voltage for the VCO by way
of the loop filter.
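As a concrete illustration of the dibit frame handling just described, here is a minimal Python sketch. The sync-word value and the positions of the sync, payload, and parity bits within the 48-bit frame are not given in the text and are assumed here for illustration (a 7-bit sync word at the front and one trailing even-parity bit):

```python
# A minimal sketch of the dibit frame handling: strip the sync word,
# check parity, and split the 40 payload bits into two 20-bit streams.
# Sync value, bit positions, and parity convention are assumptions.

SYNC = [1, 0, 1, 1, 0, 1, 0]   # assumed 7-bit sync word

def process_frame(frame):
    """Strip sync + parity from a 48-bit frame, check parity, and
    de-multiplex the 40 payload bits into two 20-bit dibit streams."""
    assert len(frame) == 48
    if frame[:7] != SYNC:
        raise ValueError("frame sync not locked")
    payload, parity = frame[7:47], frame[47]
    if sum(payload) % 2 != parity:          # assumed even parity
        raise ValueError("frame parity error")
    return payload[0::2], payload[1::2]     # two parallel dibit streams
```

In hardware the de-multiplexed bits are clocked out by a gapped clock and re-timed by the FIFO/VCO smoothing buffer; the sketch only shows the logical bit handling.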
The three remaining sources of data require less processing than dibit data. Ternary
(1, 0 and S) data directly drives each of the three FSK tone frequencies with the
restriction that only one tone be activated during any clock cycle. The occurrence of
two or more simultaneously active lines is interpreted as an error condition and the
modulator continues to output the last valid tone keyed. External serial data and the
internally generated PRN data cannot be used to drive ternary FSK tones. Instead,
any two of the three tones are selected to be keyed by the ones and zeros of the data
stream to generate binary FSK.
The FSK signal is amplitude modulated by a modified triangle wave. The triangle
wave is digitally generated using an 11-bit up/down counter. Modulation index is
controlled by setting two variables; terminal count up and terminal count down. The
VCO in the smoothing buffer is running at 10.24 MHz which may conveniently be
divided down to generate one of the four available bit rates for AM operation. The
VCO clock is first divided by 1, 2, 5 or 10 to generate a clock running 1024 times
the bit rate in order to drive the triangle wave generator (counter). The phase delay
of the AM envelope to the FSK frequency transition is controlled with an additional
variable which forces the counter to some value (within its count range) coincident
with the rising edge of the one-half bit rate clock. The phase delay of the AM’ing
signal can be set with a resolution of 1 in 1024. The FSK signal and the triangle
wave are digitally multiplied to produce the FSK/AM signal. AM can be turned off
by forcing the counter to hold at its terminal count up.
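A simplified model of the up/down counter that generates the AM envelope can make the terminal-count mechanism concrete. The counter ramps between the two terminal counts TC DN and TC UP, which set the modulation index; holding the counter at TC UP turns AM off. The values below are illustrative, not the hardware's:

```python
# Simplified model of the 11-bit up/down triangle-wave counter.
# tc_up/tc_dn are the terminal counts that set the modulation index.

def triangle_wave(tc_up, tc_dn, n_steps):
    """Yield n_steps samples of the up/down-counter output."""
    value, direction = tc_dn, +1
    for _ in range(n_steps):
        yield value
        if direction > 0 and value >= tc_up:
            direction = -1               # reached terminal count up
        elif direction < 0 and value <= tc_dn:
            direction = +1               # reached terminal count down
        value += direction

# The FSK/AM signal is then the digital product of the FSK samples and
# this envelope, e.g. fsk_am = [s * e / 2047 for s, e in zip(fsk, env)].
```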
CONCLUSION
[Figure: FSK001 block diagram, showing the OCXO system clock (SYSCLK), RCLK divide-by-2, the digital modulator feeding a variable-gain MUX/DAC analog SGLS output, and the 11-bit AM envelope (triangle wave) generator with terminal counts TC UP / TC DN, LOAD/SYNC control, and phase delay Td.]
ABSTRACT
In this paper, a method for a digital correlation detector is proposed that exploits the
coincidence-rate feature of the frame-synchronization-pattern and adopts a multiple-bit
detection window. Based on this method, a new digital correlation detector with a
neural network is designed. It correctly recognizes frame-synchronization-patterns
containing error bits and slippage bits, as the experimental results confirm.
KEY WORDS
Telemetering system, Synchronizer, Digital correlator, Neural network
INTRODUCTION
In a traditional frame synchronizer, detection of the frame-synchronization-pattern is
performed by a digital correlator, and time-domain automatic regulation is performed by
the frame synchronizer's protection-strategy circuit. To improve detection performance,
digital logic methods such as frame-synchronization-pattern correlation and a slipping
window (allowing a certain number of bit errors) are used in the digital correlator, but
they cannot recognize the frame-synchronization-pattern reliably in a complicated noisy
environment; a new method is therefore needed to improve frame-synchronization-pattern
extraction performance.
A digital correlation detector model, which exploits the degree-of-coincidence feature
of the frame-synchronization-pattern and adopts a multiple-bit detection window, is
proposed in this paper. It determines the drift rate of bit slippage accurately. Based on
this model, a new digital correlator with a neural network is designed. The experimental
results show that this detector has a high detection rate and correctly recognizes
frame-synchronization-patterns with error bits and slippage bits.
METHOD TO EXTRACT FRAME-SYNCHRONIZATION-PATTERN
1. The model of digital correlation detector
The detection algorithm of digital correlator considering frame-synchronization-pattern
coincidence can be implemented as follows.
Assume that the frame-synchronization-pattern stored in the correlation detector is
S = {s1, s2, ..., sn} and that, at the synchronization detection bit, the content of the
receiving shift register is R = {r1, r2, ..., rn}. Let τ be the offset of the detected
frame-synchronization-pattern relative to the synchronization detection bit, with
|τ| ≤ ξ < n, where ξ is the permitted offset threshold. We define a shift operator F:
when τ is a positive integer, F^τ{si} denotes the sequence {si} right-shifted by τ bits;
when τ is a negative integer, F^τ{si} denotes {si} left-shifted by |τ| bits. At the
synchronization detection bit, the coincidence statistic of S and R is given by:

            ⎧ Σ_{i=1}^{n−τ} [F^τ(si) ⊕ ri],      0 ≤ τ ≤ ξ
    Ω_SR =  ⎨                                                    (1)
            ⎩ Σ_{i=1+|τ|}^{n} [F^τ(si) ⊕ ri],   −ξ ≤ τ ≤ 0
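The statistic in formula (1) can be implemented directly. A minimal Python sketch follows (⊕ is taken as XOR, so the sum counts bit disagreements over the overlapping window, and the best offset is the τ that minimizes it):

```python
# Direct implementation of formula (1): XOR the shifted pattern
# against the register contents over the overlapping window.

def omega(S, R, tau):
    """Correlation statistic for pattern S vs. register contents R at
    offset tau, with |tau| <= xi < n assumed."""
    n = len(S)
    assert len(R) == n and abs(tau) < n
    if tau >= 0:   # F^tau right-shifts S by tau bits
        pairs = zip(S[:n - tau], R[tau:])
    else:          # negative tau left-shifts S by |tau| bits
        pairs = zip(S[-tau:], R[:n + tau])
    return sum(s ^ r for s, r in pairs)
```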
2. Selection of samples
Firstly, we construct suitable telemetry data and then add random noise to obtain
jammed telemetry data, in which the frame-synchronization-pattern is corrupted to some
degree. Jammed frame-pattern groups are selected as learning samples, with outputs
“11”, “01” and “10” representing no offset (τ = 0), a left offset of 1 bit (τ = −1), and a
right offset of 1 bit (τ = +1), respectively. Because sample selection is partial (the
16-bit optimum frame-synchronization-pattern EB90 is selected for the case
Pe = 10^−1, ε = 2), some samples with τ = 0, ε > 2 and with τ = ±1, ε > 1 should also
be selected as input vectors with their outputs set to “00”, excluding those samples
already chosen from the jammed frame-synchronization-pattern group as learning
samples. The neural network can then discriminate non-learning samples close to real
conditions according to the statistical behaviour of the channel noise it has learned.
Detection errors are of course still possible; the detection results are analyzed in the
next section.
Figure 1 illustrates part of a data file jammed by random noise (the highlighted lines
are jammed frame-code samples).
eb9009e8c4dee6a7d466f6cb237b8a9327942a7e582a4ec46d27dce0a519999 τ =0, ε =0
eb900a8070dbbcee6a7d67902a7b825e893c370c5c7c5b5346db753c5c100c5 τ =0, ε =0
75c806235dac14c7dbc45affcb865456c84de4789b2b7432b7d78357f1e4255 τ =1, ε =0
77c806b5a00d5258e703c798abeada75d13269bdfcc823440230900178708d7 τ =1, ε =1
... ... ... ...
6b9819e91cf2517c5d9089b47684da6457bd6e8447a098719646f3a0a93ffb τ =0, ε =2
eb903216b85d730ea9c47be687e45842898c864369de85eea68943687bf8eb9 τ =0, ε =0
eb943380a91c70428cea857b83f2702a3846c745eba73057d94352165667c5c τ =0, ε =1
e9903647528a84d8e94b8567392d9e02a0287cc8352de73baed856438782763 τ =0, ε =2
... ... ... ...
d7206b494b6ea937c8e6db40210abe648822098bde7638a9dce94653dca6f71 τ =-1, ε =0
df2071990be893a7dcbe8945639083563960bddec9463cd90a903649bdc8365 τ =-1, ε =1
97200905913e944fab86746e744fab68e151b82dae85527586457d118ba8775 τ =-1, ε =-1
... ... ... ...
The jammed signal is modeled as

    D*(kT) = φ( D(kT), μ·A(kT), ν·Γ(k/N)·B((k/N)T) )             (2)

where T is the code width, kT is the k-th code time, and N is the frame length. D*(kT)
is the actual signal value after jamming at t = kT, expressed by the function φ of the
arguments above. D(kT) denotes the original signal value at t = kT; A(kT) is the noise
causing code errors, with a standard normal distribution, and μ is its jamming
coefficient; B((k/N)T) is the noise causing code slippage and
frame-synchronization-pattern offset, also with a standard normal distribution, and ν is
its jamming coefficient. [·] denotes rounding, and Γ(·) is given by formula (3):

    Γ(k/N) = 1,  [k/N] = k/N
             0,  [k/N] ≠ k/N                                      (3)
When μ·A(kT) exceeds the noise threshold VA, the bit is inverted (0→1, 1→0) to
simulate a code error. When ν·Γ(k/N)·B((k/N)T) exceeds the noise threshold VB, one
bit is inserted into or deleted from the data within the [k/N]-th frame to simulate frame
offset. If the total number of frames simulated is M and L denotes the number of noise
points, the signal-to-noise ratio (SNR) is defined by formula (4):

    SNR = 10·lg( (M×N − L) / L )  (dB)                            (4)
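The simulation implied by formulas (2)-(4) can be sketched in Python. The thresholds, frame length, and the choice to insert (rather than delete) a slipped bit at the frame boundary are illustrative assumptions:

```python
import math
import random

# Sketch of the channel simulation: noise A(kT) flips a bit when
# mu*A exceeds threshold VA; at frame boundaries (Gamma = 1), noise B
# inserts one slipped bit when nu*B exceeds VB (deletion would be
# handled analogously). Thresholds and frame length are illustrative.

def jam(bits, frame_len, mu=1.0, nu=1.0, va=2.0, vb=2.0, rng=random):
    out, noise_points = [], 0
    for k, b in enumerate(bits):
        if mu * rng.gauss(0.0, 1.0) > va:      # code-error noise
            b ^= 1
            noise_points += 1
        out.append(b)
        if (k + 1) % frame_len == 0:           # frame boundary: Gamma=1
            if nu * rng.gauss(0.0, 1.0) > vb:  # slippage noise
                out.append(rng.randint(0, 1))  # insert one slipped bit
                noise_points += 1
    return out, noise_points

def snr_db(total_bits, noise_points):
    """Formula (4): SNR = 10*lg((M*N - L)/L) dB."""
    return 10.0 * math.log10((total_bits - noise_points) / noise_points)
```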
To test the performance of neural network correlator, the training results obtained from
former step are used in frame-synchronization-pattern detection of simulating telemetry
data with different SNR, and corresponding results are displayed in table 2.
Table 2. Detection rate (%) versus SNR

    SNR (dB)          20.5  18.8  17.4  16.2  15.1  14.1  13.2  12.4  11.6  10.9   9.7
    2000 iterations   98.5  97.2  95.8  94.2  91.5  88.0  85.9  80.6  75.8  70.4  60.6
    1500 iterations   95.6  93.2  90.5  87.8  84.0  79.7  75.8  70.4  64.4  58.7  49.1
3. Conclusion
The digital correlator implemented in this paper was simulated for a 16-bit
frame-synchronization-pattern and a [-1, 1] detection window. We conclude that a
typical, sufficiently large training sample set is very important for
frame-synchronization-pattern correlation detection. During jammed-data detection, the
detection result degrades as the SNR decreases; but even at an SNR of 11.6 dB the
correct detection rate still reaches 75% or higher after 2000 training iterations, which
clearly shows the advantage of the neural-network digital correlator.
REMOTE CONTROL MULTIPLE MOBILE TARGET
SYSTEM WITH CDMA
ABSTRACT
At present, multiple mobile targets must be remotely controlled in many remote control
and telemetry systems, so multiple-access technology must be applied.
This paper proposes a communication scheme for the remote control of multiple mobile
targets using the Code-Division Multiple Access (CDMA) technique. Its feasibility,
advantages and shortcomings are analyzed. Moreover, the key techniques of a
Direct-Sequence Spread Spectrum (DS/SS) system, i.e. correlation detection and delay
locking, are studied and simulated on an experimental model. The results of theoretical
analysis show that the CDMA system has distinct advantages over conventional
multiple-access systems such as FDMA and TDMA.
KEY WORDS
INTRODUCTION
With the development of military and civil technology, point-to-point telemetry and
remote control systems can no longer meet the requirements of more and more fields;
hence multiple-target remote control came into being. In this paper, a system in which
several mobile targets are controlled by one center is introduced. Controlling several
targets raises the question of multiple access. Because of the unique merits of
spread-spectrum CDMA compared with traditional multiple-access techniques, the
CDMA technique is employed here. The model of the given system is first described;
then the key technique in a spread-spectrum CDMA system, the scheme of PN
acquisition and synchronization tracking, is analyzed. The performance of the CDMA
system is also considered. Finally, the feasibility of applying this technique to the given
system is assessed.
DESCRIPTION OF SYSTEM MODEL
At the receiving end, the local PN code must be synchronized with the transmitted PN
code in order to perform despreading and demodulation. Implementing PN
synchronization raises the two questions of PN acquisition and tracking, which are the
most important questions in spread-spectrum communication.
A serial search strategy is employed in this system, and detection uses a dual-dwell
time scheme; its block diagram is given in figure 2. The scheme has two integration
periods: one for a rapid search of the code state, with the comparatively short
integration time τD1, and the other for further confirmation of PN synchronization,
with time constant τD2. Analysis shows that the former provides some false-alarm
protection, while the remaining protection is assigned to the latter, which assures a
higher detection probability and a lower false-alarm probability; the overall acquisition
time is smaller than with a single long dwell.
Figure 2 Block Diagram of Acquisition Circuit
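The two-dwell serial search described above can be sketched as a small decision procedure. The correlator, thresholds, and dwell labels below are placeholders, not the paper's actual circuit:

```python
# Sketch of a two-dwell serial search: a short integration quickly
# rejects wrong code phases; candidates that pass are re-examined
# with a longer integration before acquisition is declared.

def dual_dwell_search(correlate, n_phases, th1, th2):
    """correlate(phase, dwell) -> integrated correlator output."""
    for phase in range(n_phases):
        if correlate(phase, "short") < th1:
            continue                    # first dwell: fast rejection
        if correlate(phase, "long") >= th2:
            return phase                # second dwell: confirm lock
    return None                         # no phase passed this sweep
```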
A delay-lock tracking loop with single-Δ spacing is applied here, as shown in figure 3.
Neglecting the influence of filtering on the PN waveform, the input signal can be
expressed as:

    S(t) = √Ps · C(t + T) · cos[ω0·t + φ(t)]                      (1)

Through computation, we obtain:

    X(t) = Ps·[ R(T − T1 − Δ/2) − R(T − T1 + Δ/2) ] + ns + nn     (2)

where
    R(·)      is the PN correlation function;
    T − T1    is the steady delay between the input and local PN codes;
    Ps        is the average power of the input signal;
    C(t + T)  is the transmitted address code;
    ns        is the white Gaussian noise in the upper and lower branches;
    nn        is the multiplicative noise of the local PN code and the input
              interference in the upper and lower branches.
From this figure, when |T − T1|/Δ < 1/2, the local PN code can follow the incoming
sequence through the linear region of the discriminator and keep tracking; when
R(Δ/2) − R(−Δ/2) = 0, the receiver is completely synchronized with the sender; and
when |T − T1|/Δ > 1/2, the loop is out of lock.
The above analysis shows that the acquisition circuit performs a rapid search for the
sender's PN code using the local PN code, adjusting the phase error between the local
and sending PN codes to |T − T1|/Δ < 1/2. The delay-lock loop then starts to work, and
the lock state is maintained by controlling the VCO with the error signal. Experiments
have shown that the scheme performs despreading quite well.
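A small numeric sketch of the early-late discriminator in formula (2), using the ideal triangular PN autocorrelation, dropping the noise terms ns and nn, and normalizing Ps to 1; the loop's linear region and lock point are visible in the sign of the error:

```python
# Early-late discriminator sketch for the delay-lock loop of
# formula (2), noise terms omitted and power normalized.

def triangle_autocorr(tau, chip=1.0):
    """Ideal PN-code autocorrelation: R(tau) = max(0, 1 - |tau|/Tc)."""
    return max(0.0, 1.0 - abs(tau) / chip)

def dll_error(delay_err, delta=1.0):
    """X = R(e - Delta/2) - R(e + Delta/2), with e = T - T1."""
    return (triangle_autocorr(delay_err - delta / 2)
            - triangle_autocorr(delay_err + delta / 2))
```

At zero delay error the two correlations cancel, matching the lock condition R(Δ/2) − R(−Δ/2) = 0 in the text; a positive or negative error steers the VCO back toward lock.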
Assume there are M stations working at the same time; the i-th station's signal is

    Xi(t) = Ci(t − ti) · Ai · cos[ω0·t + φi(t)]                   (3)

where Ci(t − ti) is the i-th PN sequence. To detect the i-th station's signal, correlation
detection of the local sequence with the received signal yields

    E(t) = Ai·cos[ω0·t + φi(t)]
           + Σ_{k=0, k≠i}^{M−1} Ck(t − tk)·Ak·cos[ω0·t + φk(t)]
           + n(t)·Ci(t − ti)                                      (4)

Here Σ_{k=0, k≠i}^{M−1} Ck(t − tk)·Ak·cos[ω0·t + φk(t)] is the mutual interference,
which can be regarded as white noise, and n(t)·Ci(t − ti) is a white noise process.
Therefore, after a band filter whose bandwidth equals the information bandwidth Bm,
the mutual interference power is approximated by

    N = (Bm/Bc) · Σ_{k=0, k≠i}^{M−1} Pk + N0·Bm                   (5)
Analysis shows that the higher the spread-spectrum code rate, the larger Bc/Bm is and
the higher the signal-to-noise ratio. For a given communication quality, the larger
Bc/Bm is, the more addresses can communicate simultaneously; therefore, raising the
gain of the spread-spectrum process increases the number of addresses.
[Table 1, relating the required power ratio P′/P at a given Eb/N0 to the number of simultaneous addresses M = 3, 5, 7 and 10, could not be recovered from the source.]
Analysis of table 1 shows that multiple-access interference in CDMA is serious:
considerable power is consumed by it, and when the number of addresses exceeds a
certain bound the power requirement increases sharply, worsening the communication
conditions. For the system proposed in this paper, however, the number of addresses is
small, so the multiple-access interference problem in CDMA is not serious.
SUMMARY
The application of CDMA is often unattractive from the capacity point of view alone,
but the CDMA technique lowers the spectral density, has powerful anti-interference
properties, provides secrecy of information, and can resist the multipath effect in
mobile communication. CDMA also makes it convenient to construct diverse
communication networks, and addresses can be changed as the occasion demands. The
CDMA technique is therefore very suitable for systems in which the number of mobile
targets is small and high reliability (e.g. powerful anti-interference) is demanded.
REFERENCES
ABSTRACT
Prior to launch the decision was made to monitor the EUVE science instrument with staff
controllers at CEA in the same way that other missions were: 24 hours a day, 7 days a
week. We required two people, generally a staff controller and a student controller-aide,
be available in the EUVE Science Operations Center (ESOC) at all times to monitor
telemetry as it arrived.
To reduce EUVE’s operating costs and to increase the mission’s competitiveness, CEA
decided to create an autonomous monitoring system for the payload. A discussion of the
factors that went into this decision can be found in Malina (1994) and Morgan & Malina
(1995).
The transition from fully-staffed to autonomous operations was planned in phases. We had
a long transition phase from fully staffed to single shift operations, then operated in a
single shift mode. We are in the final transition phase to autonomous, or “zero-shift,”
operations. The transition to zero shifts is being carried out in three phases: first, eliminate
all routine human monitoring of the instrument; second, eliminate scheduled weekend and
holiday support; third, eliminate or automate all remaining routine console support tasks.
Applying our experience in the fully staffed model, we summarized everything that the
operators routinely did during their shifts. This produced a list of duties that had to be
addressed in the light of first a single, and then a zero shift model. To begin, all routine
tasks done during the night shift were moved to the day shift. Then, once the rule-based,
automated monitoring software (Eworks) was developed (see section 4.1) and running in
the ESOC, the operators moved out of the ESOC for the night shifts. As problems arose,
the staff returned to the ESOC to investigate. This test period was an important phase in
Eworks development. Bugs were found and fixed, the operators gained confidence in the
software, and, as software issues arose, new software was requested. Once Eworks and
other software were fully tested, and the operations-related nighttime workload was
reduced to responding to pages (via electronic pager) generated by Eworks, a review was
held for groups from UCB and GSFC, and the official transition to single shift operations
was approved.
After completing the transition to single shift operations, we began working toward an
unstaffed, or zero-shift, model. The job descriptions of the controllers had changed, and
they became known as Anomaly Coordinators for the ESOC (ACE). The ACEs again
reassessed their operations-related duties to determine the operational changes necessary
to accomplish zero-shift operations. The transition from human monitoring of the
instrument during the day shift to monitoring with the automated software was an easy
one and required little development or re-evaluation. We reevaluated what would be required to expand the
unstaffed time from night shifts to weekends. Software was written or expanded to make
this transition. In the early stages of this implementation the ACE was required to log-in on
weekends to verify that everything was working as expected. As software testing
progressed and new software was implemented and confidence increased, the requirement
to log-in on weekends was dropped.
3. Risk Evaluation
A risk evaluation group (REG) formed to discuss the risks of changing over to an
autonomous monitoring system. Risk was divided into two broad categories: (1) not
receiving telemetry, and (2) telemetry indicating a problem with the instrument. The REG
identified the risk areas and considered them in the light of the fully-staffed model as well
as at several different levels of reduced staffing. These results were then used to prioritize
tasks at each step and to aid in focusing the work.
3.1 Receiving Telemetry
Before the REG formed, we realized a risk is always involved in being out of touch with
the instrument. In the fully-staffed model we were routinely out of contact for 70 minutes
of every orbit, and that was an accepted risk. Occasionally problems with ground systems
force us to be out of contact for extended periods. In light of this, a contingency procedure
was written to define what to do when the ESOC is, or expects to be, out of touch with the
payload for an unacceptable length of time. The “20 hour rule” attempted to balance the
risk of an unmonitored payload with the risks of commanding without feedback. The rule
states that if we are out of contact with the payload for more than 20 hours the high
voltage power to the detectors will be turned off. This rule defined the basic risk already
accepted and became the basis for many decisions for the implementation of automation.
Initially, we used two timeout periods to signal that we were not receiving telemetry, one
at 6 hours and another at 16. The shorter period gave ample time to work problems, and
the 16 hour period warned that time was running out and we should be determining when a
realtime contact was available to command the detector high voltage be turned off. Once
we were operating with this system, we found that if the 6 hour timeout expired after the
day shift, little support was available to solve the problem before morning. If it occurred
during working hours, the problem was usually apparent long before the timer tripped. We
decided to eliminate the 6 hour timer, and rely on Eworks solely for emergency notification
that we had been out of touch for too long and needed to plan to turn the instrument off.
All of the risks and problems that could be determined by analyzing the telemetry were
organized into three tiers: concerns critical to immediate instrument health, concerns
non-critical to immediate instrument health, and concerns that affected science data only.
The results from this system formed the criteria for what the expert system needed to do.
Rules were written for Eworks that addressed the critical issues first.
The risks and problems have been revisited as we approached each new phase of the
project to ensure that we have considered their implications with respect to the next phase.
The automation project was divided into three main parts: an expert telemetry monitoring
system (Eworks) (Lewis et al. 1995; Wong et al. 1996), a paging system to automatically
contact the ACE (Lewis et al. 1995), and a system to monitor the ground systems (Abedini
& Malina 1994).
In addition to the systems themselves, the operations staff revamped the procedures that
governed how they operate. Also, the way in which they communicated within the ESOC,
with the rest of CEA, and with GSFC had to change (see Kronberg et al. 1995).
From the time of the decision to move to autonomous operations for night coverage, the
software engineers were given six months to design and implement a system. They
selected RTworks, an AI software package from Talarian Corporation, for the basis of our
expert system, Eworks. They also sought guidance from groups at NASA’s Ames
Research Center and the Jet Propulsion Lab (JPL) with experience in designing and
managing expert systems.
The software engineers designed a system that would provide the ability to do trend
analysis, as well as to monitor the items that the REG had decided were critical to payload
health and safety. They also planned a graphical user interface (GUI) for the controllers to
use in diagnosing problems. They took their design before an internal review board at
CEA. The board made a convincing argument that trend analysis was not an integral part
of assuring health and safety, and it was removed from the plan. It was also decided that
the GUI to be used in problem diagnosis was not critical and would be very
resource-intensive to develop. They decided to develop only a fractional part of the
interface.
Discussions were held with groups from Ames and JPL to discuss how the operators’
knowledge could be transferred to the programmers and then used to generate the
knowledge base rules. None of the methods discussed, including digraphs, decision trees,
and face-to-face conversation, were appealing. In the end, flowcharts were chosen for the
simplicity and general familiarity. The flowcharts were used both to design the knowledge
base and to serve as a reference for the operators on how the knowledge base behaves.
Once the flowcharts were agreed upon by the operators and programmers, they were taken
to REG for verification and approval.
During the period of flowchart design and review, the software engineers worked on low
level code for Eworks, converting our data into a representation that RTworks could use.
They encountered several difficult problems. RTworks was designed to work with a
continuous data stream. Our realtime telemetry is not continuous; there are data dropouts
and corrupted data are received. RTworks also had a problem with the design of our
telemetry stream. All of our engineering monitors share a small number of telemetry slots
and appear in those slots at different rates, varying between monitors, from every frame
(1.024 seconds) to every 128 frames. We needed to know whether the instrument
knowledge was current, but RTworks passed telemetry values along to the inference
engine for rule processing only when the values changed; if data were missing, the
values simply did not change. In RTworks, rules are queued only when values change,
and no rule fires if any monitor it references is “unknown”. In our implementation,
every time a frame arrives its time is checked to see if there was a gap. If data are
missing, we generate a message that sets all the monitors that were expected in the gap
to “unknown”; then processing resumes. Once
ways were found to address all the problems and the rules were implemented, the testing
process began.
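A minimal sketch of this gap-handling scheme, using a hypothetical data model (the actual RTworks message formats are not shown in the paper): each monitor records when it was last seen, and after a gap any monitor due inside the gap is reset to "unknown".

```python
# Sketch of gap handling: detect missing frames from timestamps and
# mark stale monitors "unknown" before rule processing resumes.
# The monitor dictionary layout is an assumption for illustration.

FRAME_PERIOD = 1.024  # seconds between telemetry frames

def handle_frame(t, last_t, monitors):
    """monitors: {name: {"period": frames between updates,
                         "value": last value, "last_seen": time}}."""
    missed = int(round((t - last_t) / FRAME_PERIOD)) - 1
    if missed > 0:  # one or more frames were lost
        for m in monitors.values():
            if t - m["last_seen"] > m["period"] * FRAME_PERIOD:
                m["value"] = "unknown"   # was expected inside the gap
    return monitors
```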
Notifying the ACE of a problem detected by Eworks was critical. In order to assure this
notification, we needed a paging system that would page the ACE persistently until
someone responded.
The paging system consists of programs that manage problem files, and user configurable
parameters that control who is paged and how often, in a tiered structure. First the on-call
ACE is paged, (currently every 10 minutes for 3 hours), then, if a timeout period expires
with no response, backup personnel (currently 5 people) are called. Two timeout periods
provide the capability of paging different people at each level. At this writing, a problem
escalating beyond the on-call ACE has only occurred once, when the primary beeper’s
battery was dead. Paging for a given problem stops only when someone acknowledges the
page by logging onto the system and moving the problem file(s) to a different directory.
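The tiered structure reduces to a small decision function: who receives the next page, given how long the problem has gone unacknowledged. The 3-hour escalation timeout mirrors the value quoted above; the names are placeholders, and acknowledging a page (moving the problem file) stops the surrounding paging loop.

```python
# Tiered paging sketch: on-call ACE only until the escalation timeout
# expires, then everyone. Names and backup list are placeholders.

def paging_targets(elapsed_s, escalate_after_s=3 * 3600,
                   on_call="on_call_ace",
                   backups=("backup1", "backup2", "backup3",
                            "backup4", "backup5")):
    """Return who should receive the next page for an
    unacknowledged problem, elapsed_s seconds after it appeared."""
    if elapsed_s < escalate_after_s:
        return [on_call]              # first tier: on-call ACE only
    return [on_call, *backups]        # second tier: call everyone
```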
We decided to improve the reliability of our network, rather than invest a large effort in
automating the monitoring of the various components in the telemetry path. The most
frequent problem resulted from the network locking up when one of the file-servers failed.
The addition of a redundant array of independent disks (RAID) and some reorganization of
the network itself largely alleviated this problem. A workstation isolated as much as
possible from the rest of the network was also added. It monitors whether Eworks has
processed data and has its own modem. If any of the ground systems required to get the
telemetry to CEA fail, or some critical part of our network fails, Eworks will fail to
process data. If Eworks fails to process data, the stand-alone system will notify the ACE
that data have not been processed.
4.4 TESTING
A working version of the expert system was installed on the ESOC network and was run in
parallel with the existing systems. During the initial stages of testing, the software
engineers would come to the ESOC to see how Eworks was running. They checked the
validity of reported problems, and determined how to eliminate “false” problems.
Sometimes this involved meetings between the programmers and controllers to discuss and
change the flowcharts. As the programmers’ confidence in Eworks increased, they also
began testing the paging software in the operations environment. Initially a software
engineer was paged for problems, first during working hours only, and eventually 24 hours
a day. When the number of pages was reduced to a small number the software was
reconfigured to page the operations staff. This changeover occurred while the night shifts
were staffed.
In addition to testing Eworks in normal operations, all of the data from previous anomalies
were “run through” the expert system to verify that it would catch all of the problems
previously experienced. The expert system was also run on data from a variety of
engineering tests to verify that it would recognize the abnormal instrument configuration.
The final test phase was to generate telemetry with problems designed to trigger individual
rules. Once this testing was completed, a presentation was put together for a review panel
from CEA and from GSFC. The transition was approved and we proceeded to single shift
operations as planned.
5. PROBLEMS ENCOUNTERED
The problem of paging frequency and redundancy was recognized as soon as Eworks went
into operation and has yet to be adequately addressed. Eworks generates a problem file
every time one of its rules is triggered. All of the real problems that we have experienced
are continuous in nature and persist until rectified. This means that, in its current version,
Eworks will continue to generate problem files, and therefore pages, each time we receive
telemetry. When a major problem occurs, like the main instrument processor (CDP)
resetting, 20 to 30 problems are generated every time Eworks receives telemetry. Over a
period of hours this can result in the creation of hundreds of problem files, which the ACE
must check to verify that no unrelated problems have occurred. One solution to this
problem would be to increase the complexity of Eworks rules, such that if the rule fired,
indicating a lack of power to the CDP (and therefore the entire instrument was off), no
other rules would be triggered. Another solution might be to keep track of what rules had
been used to generate problems and prevent generating more from the same rule. These
solutions would require major modifications to Eworks, and it is advantageous to keep
Eworks stable. Currently we use a method that we call page screening to reduce the
number of pages. Once a problem is diagnosed, and the ACE determines that it will be
ongoing, future pages that exactly duplicate problems that we have seen can be screened.
When Eworks generates a problem and page screening is on, each new problem is
compared with those being screened; if it matches, no new problem file or subsequent
page is generated.
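Page screening reduces to a membership test on the fields that identify a problem. The tuple form below is a hypothetical stand-in for Eworks' actual problem files:

```python
# Page-screening sketch: an exact duplicate of a screened, already
# diagnosed problem generates no new file and no page.

def should_page(problem, screened, screening_on):
    """True if this problem warrants a new problem file and a page."""
    return not (screening_on and problem in screened)
```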
Limit transitions for which no Eworks rules were written are handled in a similar way. For
example, the detector high voltage is ramped down to about half its operating voltage
during orbital day, protecting the detector from high photon counts. The high voltage
violates yellow limits during orbital day, and that condition is normal, so no Eworks rule
triggers when this voltage is in the yellow range. The voltage could, however, settle at a
value that is neither red nor either usual level, and that situation could go unnoticed
under the zero-shift staffing model. A
new module was written and added to our normal limit-checking package. The module
looks at limit transitions files and compares each transition with a filter file which contains
acceptable ranges for out-of-limits values. The filter file is user-controllable and can be
updated as the situation changes. Software similar to that which checks for Eworks alerts,
checks daily for unexpected limit transitions and generates one page for all of the
unexpected transitions found.
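A sketch of the filter-file comparison described above, with a hypothetical in-memory form of the user-controllable filter file: each monitor maps to an acceptable out-of-limits range, and any transition not covered is collected for the single daily page.

```python
# Limit-transition filter sketch: compare each out-of-limits
# transition against user-maintained acceptable ranges; anything not
# covered goes into the daily page. Data shapes are assumptions.

def unexpected_transitions(transitions, filters):
    """transitions: [(monitor, value)]; filters: {monitor: (lo, hi)}."""
    unexpected = []
    for monitor, value in transitions:
        rng = filters.get(monitor)
        if rng is None or not (rng[0] <= value <= rng[1]):
            unexpected.append((monitor, value))   # not an accepted range
    return unexpected
```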
5.3 ASSURING TIMELY RESPONSE TO ANOMALIES
The problem of timely response has been discussed since the inception of autonomous
operations. It is solved by policy and the configurable paging software. An ACE has been
on-call since the moment we stopped staffing the ESOC continuously. One ACE is
designated as the person on-call and the duty rotates. The primary beeper is physically
passed from ACE to ACE, so the phone number to contact the ACE on duty remains
constant. The main phone number for the ESOC is forwarded to a commercial message
center that offers callers the option of paging whoever is on-call. In addition, each ACE
carries a beeper assigned to them individually. A written policy defines the response time
for the on-call person, as well as the time limit for when the paging system changes from
calling only the on-call person to calling everyone. The policy also contains directions
about how to acknowledge that someone is responding. This prevents multiple ACEs
from heading in to the ESOC if everyone is paged and one person has already responded.
The problem files that generate pages are not moved via remote login unless the ACE is
certain that the problem presents no threat to the instrument. If a serious problem occurs,
the problem file is left active. If something were to prevent the ACE’s arrival at the ESOC,
others are paged when the timeout period expires.
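The escalation policy described above can be sketched as a small decision function. The timeout value and ACE names are illustrative, not the written policy's actual figures.

```python
# Escalation: page only the on-call ACE until either someone acknowledges
# or the policy time limit expires, at which point everyone is paged.
def who_to_page(minutes_since_page, acknowledged, on_call, everyone,
                timeout_min=30):
    """Return the list of people the paging system should contact now."""
    if acknowledged:
        return []                       # someone is responding; stop paging
    if minutes_since_page < timeout_min:
        return [on_call]                # still within the response window
    return everyone                     # timeout expired: page all ACEs

team = ["ace_a", "ace_b", "ace_c"]
assert who_to_page(10, False, "ace_a", team) == ["ace_a"]
assert who_to_page(45, False, "ace_a", team) == team
assert who_to_page(45, True, "ace_a", team) == []
```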
When the ESOC was staffed continuously, we engaged in a ritual called shift-changeover
that required that two people talk to each other about the status. Current information was
also kept, in various forms, in the ESOC. When we stopped continuous staffing,
communicating status among the team became a problem. We had used paper logbooks during full staffing,
and when we started leaving the ESOC, we switched to on-line logging and our
“logbooks” became accessible outside of the ESOC. Other information was placed on-line
when possible; unfortunately, some information only comes to us in paper form and
remains inaccessible except in the ESOC. Part of the ACE policy defines how to hand
problems over from one person to another. Problems that require a continuous presence in
the ESOC are only passed from ACE to ACE in person. Less severe problems are handed
over simply by carefully recording in the logbook what happened, what has been done, and
what remains to be done. Each work day, an ACE is responsible for the instrument and
reads the logbook to determine whether there are problems.
6. CONCLUSION
CEA has accomplished most of the transition from fully-staffed to autonomous operations
for the EUVE science instrument. The transition relied on a phased approach to solve
difficult problems at all stages of planning and development. The crucial decision to
concentrate solely on instrument health and safety enabled us to attain the goal in a
reasonable period of time. The existence of a software package that permitted reuse of a
large amount of existing, working code was also paramount.
The operations group has many projects they would like to see implemented in the future.
That the funding or time will be available to accomplish them is unlikely, but we can
envision: An automated commanding system that would be set in motion by the submission
of a science plan; improvements to the expert system so that it could verify that the
instrument was configured correctly at all times; and a sophisticated network watchdog
system.
ACKNOWLEDGMENTS
REFERENCES
Kronberg, F., Ringrose, P., Losik, L., Biroscak, D., and Malina, R. F. “Re-engineering the
EUVE Payload Operations Information Flow Process to Support Autonomous Monitoring
of Payload Telemetry,” Re-engineering Telemetry, XXXI, 1995, 286-294.
Lewis, M. et al. “Lessons Learned from the Introduction of Autonomous Monitoring to the
EUVE Science Operations Center,” 1995 Goddard Conference on Space Applications of
Artificial Intelligence and Emerging Information Technologies, NASA CP-3296, GSFC,
Greenbelt, MD, 1995, 229-235.
Wong, L., Kronberg, F., Hopkins, A., Machi, F., and Eastham, P. “Development and
Deployment of a Rule-Based Expert System for Autonomous Satellite Monitoring,” SAG
#713, Proceedings of Fifth Annual Conference on Astronomical Data Analysis Software
and Systems, Tucson, AZ, October, 1995
RTworks, Talarian Corporation, 444 Castro Street, Suite 140, Mountain View, CA 94041,
(415)965-8050
INEXPENSIVE RATE-1/6 CONVOLUTIONAL DECODER FOR
INTEGRATION AND TEST PURPOSES
ABSTRACT
The Near Earth Asteroid Rendezvous (NEAR) satellite will travel to the asteroid 433 Eros,
arriving there early in 1999, and orbit the asteroid for 1 year taking measurements that will
map the surface features and determine its elemental composition. NEAR is the first
satellite to use the rate-1/6 convolutional encoding on its telemetry downlink. Due to the
scarcity and complexity of full decoders, APL designed and built a less capable but
inexpensive version of the decoder for use in the integration, test, and prelaunch checkout
of the rate-1/6 encoder. This paper describes the rationale for the design, how it works,
and the features that are included.
KEY WORDS
Rate 1/6, constraint length 15, convolutional encoding, convolutional decoding, satellite
communication, satellite telemetry, Near Earth Asteroid Rendezvous, NEAR
INTRODUCTION
The NEAR satellite will travel to the asteroid 433 Eros, arriving there in February 1999. It
will orbit the asteroid for 1 year, taking measurements that will map the surface features
and determine its elemental composition. Eros is 40 × 15 × 13 km and is a type S (silicate
rock) asteroid, which is believed to contain the elemental and mineralogical composition
representative of the inner planets when the solar system was created 4.5 billion years ago.
Because geological evolution has destroyed that environment on Earth and other planets,
studying Eros is a major step in understanding the origin of the solar system. The NEAR
mission will make the first quantitative and comprehensive measurements of an asteroid’s
composition and structure. The measurements have been identified by the National
Academy of Sciences as the most important scientific objectives in the exploration of
primitive bodies. Primary scientific goals of the NEAR mission are to measure the
following properties of Eros:
C Bulk: size, shape, volume, mass, gravity field, and spin state
C Surface: elemental and mineral composition, geology, morphology, and texture
C Internal: mass distribution and magnetic field.
NEAR was launched on February 17, 1996, and will follow the trajectory shown in
Figure 1. In June 1997, NEAR will pass within 1200 km of the asteroid Mathilde and may
gather data on that type C (composite) asteroid. Shortly after that encounter, NEAR will
perform a burn to change its trajectory so that it will fly by the Earth in January 1998. The
Earth flyby will change NEAR’s orbit plane and put it on a trajectory to intercept Eros in
February 1999. NEAR will orbit the asteroid for 1 year taking data from altitudes of 1000,
200, 50, and 35 km. The gravitational sphere of influence from Eros ranges from 400-640
km, and at 50 km the orbital period is 19.9 h. The onboard fuel for station keeping and
maneuvering will allow 1 year’s operation at the asteroid. For more detailed and timely
information on the NEAR project, visit the NEAR home page at
http://hurlbut.jhuapl.edu:80/NEAR/.
NEAR is the first of NASA’s Discovery series of satellites that are to be built according to
the philosophy of “smaller, cheaper, and better.” The Johns Hopkins University Applied
Physics Laboratory (APL) designed NEAR to use 80% solutions instead of grandiose
solutions that could take too long and cost too much. The satellite, shown in Figure 2, was
designed so that it has only a few moving parts. APL carefully designed the orbit trajectory
to ensure that the Earth and Sun would always be visible to the same side of the satellite
throughout the entire mission, eliminating the need for rotating solar panels or antennas.
NASA allowed APL to be the manager for the entire project with the constraints that
launch must occur within a 2-week window beginning February 16, 1996, in order to reach
Eros, and it must cost less than $150 million. Missing the launch window would require
going to another asteroid or a 7-year delay in going to Eros. Missing the cost limit would
have terminated the project. APL estimated that it would cost $112 million (in FY92
dollars) to build, deliver, and prepare the satellite for launch. APL accomplished the
project at a cost of $108.4 million and returned the difference to NASA during a ceremony
to honor the NEAR team on April 15, 1996.
THE NEAR ENCODERS
Mission operations communicate with the spacecraft through the Jet Propulsion
Laboratory’s (JPL) Deep Space Network (DSN) using stations located in Goldstone,
California; Canberra, Australia; and Madrid, Spain. In order to achieve good
communication performance, NEAR included both rate-1/2, k = 7 and rate-1/6, k = 15
convolutional encoders. The rate-1/2 encoder is used routinely in space communications,
but the rate-1/6 is new to this community. NEAR is the first satellite to be launched with
this capability, which gives a 2.11-dB gain over the more conventional code.1 However,
there are no commercial decoders for this particular code, and the only decoders known,
located at JPL DSN test facility and the DSN stations, are called the Big Viterbi Decoders
(BVDs). These decoders are very complex, using trellis decoding to achieve the
maximum-likelihood error-correction performance.2
The encoder hardware for the constraint length 15, rate-1/6 convolutional code utilizes a
shift register to provide delayed samples of the input, modulo-2 summers to generate the
polynomials, and a multiplexer to sequence the six output symbols per data bit. The
generator polynomials are given by the following equations:
where Gi is one of the six output symbols, x is the input data bit, and D is a unit delay
operator. The G1, G3, and G5 polynomials are inverted (symbol inversion) to provide bit
changes when the input is constantly zero. The generator hardware block diagram is
shown in Figure 3 below.
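The encoder structure just described can be sketched in software: a 15-bit window of the input drives six modulo-2 summers, and the G1, G3, and G5 outputs are inverted so that an all-zero input still produces symbol changes. The tap masks below are illustrative placeholders, not the mission's actual generator polynomials; each mask does include the current input bit, consistent with the polynomials described in the text.

```python
K = 15
# Each mask lists which shift-register stages (0 = current input bit) feed
# that modulo-2 summer. Placeholder values for illustration only.
TAPS = [
    [0, 1, 3, 5, 14],
    [0, 2, 4, 6, 14],
    [0, 1, 2, 7, 14],
    [0, 3, 6, 9, 14],
    [0, 2, 5, 11, 14],
    [0, 4, 8, 13, 14],
]
INVERTED = {0, 2, 4}   # G1, G3, G5 (0-indexed): symbol inversion

def encode(bits):
    """Emit six output symbols per input bit from a k=15 shift register."""
    sr = [0] * K                      # sr[0] holds the current input bit
    out = []
    for b in bits:
        sr = [b] + sr[:-1]            # shift the register
        for i, taps in enumerate(TAPS):
            s = 0
            for t in taps:
                s ^= sr[t]            # modulo-2 sum of the tapped stages
            out.append(s ^ 1 if i in INVERTED else s)
    return out

assert encode([0]) == [1, 0, 1, 0, 1, 0]   # all-zero input still toggles
assert encode([1]) == [0, 1, 0, 1, 0, 1]   # complementary pattern for a 1
```

Note that because every polynomial taps the current input bit, the six-symbol patterns for an input 1 and an input 0 (from the same register state) are exact complements, which is the property the decoder exploits.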
The decoder design was focused on two system conditions. First, the signal-to-noise ratio
would be very high in the integration and testing environments. Second, the telemetry
contains a 32-bit frame sync word for each 8832-bit frame. Since the probability of a bit
error was small, the probability of receiving an error-free encoded frame sync pattern was
very high. The symbol stream will vary from frame to frame until the first 15 bits of the
sync pattern have flushed the tail of the preceding frame out of the encoder. The next 17
bits of the frame sync pattern will generate 102 symbols (at rate-1/6) that are identical for
each frame. By detecting this pattern, a replica encoder can be synchronized to the
transmitting encoder.
Figure 3. Rate-1/6, k = 15 Convolutional Encoder Block Diagram
Once the replica encoder is in the same state as the transmitting encoder, the decoding
process proceeds in the following cycle. The replica encoder assumes that the next data bit
is a 0 and generates the corresponding six-bit symbol sequence. These six symbols are
compared to the next six received symbols. Because the generator polynomials all contain
the current value of the input, the symbol patterns for an incoming 1 and 0 are
complementary. If the comparison has four or more matches in the six samples, the
assumed 0 is accepted. If there are two or fewer matches, the assumed 0 is rejected and
the bit is stored as a 1. In the case of three matches, an arbitrary decision to accept or
reject the 0 is made. If a wrong selection is made, decoding errors will continue to be
produced until the next frame sync is detected. This cycle is repeated for each succeeding
six-symbol sequence. This simple decoding scheme provides the capability to detect and
correct up to two errors per six-symbol sequence and limits data loss to no more than one
frame in the case of three or more symbol errors once synchronization has been achieved.
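The per-bit decision in this cycle can be sketched as follows; the fixed choice of 0 in the three-match case stands in for the paper's "arbitrary decision."

```python
def decide_bit(received6, replica6_for_zero):
    """Threshold decision for one six-symbol group against the replica
    encoder's output for an assumed 0."""
    matches = sum(r == p for r, p in zip(received6, replica6_for_zero))
    if matches >= 4:
        return 0            # assumed 0 accepted (tolerates up to 2 errors)
    if matches <= 2:
        return 1            # patterns are complementary: the bit was a 1
    return 0                # three matches: arbitrary decision

zero_pat = [1, 0, 1, 0, 1, 0]
assert decide_bit([1, 0, 1, 0, 1, 0], zero_pat) == 0   # error-free 0
assert decide_bit([1, 0, 1, 0, 1, 1], zero_pat) == 0   # one symbol error, corrected
assert decide_bit([0, 1, 0, 1, 0, 1], zero_pat) == 1   # exact complement: a 1
```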
Figure 4 shows a functional block diagram of the decoder that implements the algorithm
described in the preceding paragraph. The input data stream is checked for the encoded
frame sync pattern or its complement using a shift register and dual 102 symbol
comparators. A sync pulse is generated whenever the pattern matches. The input clock is
divided by 6 and synchronized to the detected frame sync. Each sequence of six input
symbols is then compared to the output of a replica encoder which assumes an input bit
of 0. The six-symbol comparator determines the number of matching bits between the two
sequences, and outputs a 0 if two or fewer mismatches occur or a 1 if four or more
mismatches occur. In the case of three mismatches, a 0 will be output. Note that if 0
is the correct bit, the decoder will continue without error, but if 1 was the correct bit, the
decoder will continue to output erroneous data until the next symbol sync pattern occurs.
Whenever 1, 2, 4, or 5 mismatches occur, the frame symbol error counter will be incremented.
Figure 4. Simplified Rate-1/6, k = 15 Convolutional Decoder/Encoder Block Diagram
The decoder hardware was fabricated on a 6U high, two-slot wide VME wire-wrap circuit
board (see Figure 5). The board was designed to accommodate single-ended or differential
inputs and outputs via BNC connectors on the front panel or via the P2 VME bus
connector. A selection is available to invert the decoder input clock, decoder input symbol,
decoder output data, and encoder input clock streams, whereas the decoder output clock
has a selectable phase (one of six). The decoder input clock has a selectable delay of 0-7
µsec. The encoder data output can be either NRZ-L or biphase-L encoded (switch
selectable). The decoder automatically detects an inverted sync pattern and reinverts the
input stream prior to decoding. The four-digit error counters have selectable reset options:
on frame sync, manual, VME bus RESET*, or VME bus access. A functional bypass
capability is provided to accommodate situations when the k = 15, rate-1/6 convolutional
code is not in operation. The bypass signal controls both the encoder and decoder operation.
Figure 5. Decoder/Encoder Board
All front panel operator controls and displays are duplicated in VME bus
memory for software control. The VME bus interface is an A16:D08(O) slave, occupying
a selectable 64-byte block in the short address space.
The NEAR team established specifications for two identical ground systems: one for use
by integration and test, and the other for mission operations. One manufacturer, Integral
Systems Inc., was selected to provide a common system, the ISI Epoch 2000, for both
phases of the project. After the satellite was launched, the set used by the integration and
test team was returned to APL for use as the backup mission operations system.
The purchased system is a networked, multiworkstation system that has a VME chassis
with unique interface functions. Our rate-1/6 decoder was designed to be installed into the
VME chassis and can be operated from switches on the front panel or from commands
sent out over the VME bus from the computer in the chassis. As explained in the following
section, designing the decoder for the VME chassis allowed one design to be applied in
the verification of three subsystems of the ground system.
The ground system designers identified three customers for a rate-1/6 decoder board. The
radio frequency (RF) subsystem designers required a decoder board to verify that their
communication links were working, the command and data handling (C&DH) subsystem
designers required one to verify the operation of the rate-1/6 encoder hardware, and the
ground system required two boards for use during integration and test. These three
subsystems all use VME bus chassis, but run three different operating systems based on
two different microprocessor CPU boards. A hardware solution was selected over a
software solution so that one design could be used for all three subsystems.
Performance of the satellite RF subsystem using the rate-1/6 encoder at marginal signal
levels was verified during the design phase by the RF designers testing their systems at
JPL using the BVD. At the same time, they also verified the logic of our decoder by
comparing it to the BVD. Once the operation at marginal signal levels had been verified,
the RF subsystem could be operated in a high signal-to-noise environment during test and
integration.
The C&DH designers also needed to verify the flight hardware rate-1/6 encoder logic in a
benign environment. A rate-1/6 decoder board installed in the C&DH bench test
equipment suite also allowed the testing of the encoding selection commands.
The ground system designers needed to verify the correct operation of the rate-1/6 encoder
on the satellite during test and integration, environmental testing, and prelaunch testing
without a BVD. The objective during these phases is to ensure that the satellite hardware
performs correctly as it is exposed to various environmental stresses and that nothing fails
prior to launch.
CONCLUSION
The development of a simplified decoding algorithm on a VME bus card allowed the
NEAR team to bench test subsystems and integrate the spacecraft in a timely and
cost-efficient manner. The simple decoding algorithm was robust enough (an 80% solution)
for the high signal-to-noise environments where it was used.
REFERENCES
1. Yuen, J. H. and Vo, Q. D., “In Search of a 2-dB Coding Gain,” TDA Progress Report
42-83, pp. 26-33, Jet Propulsion Laboratory, Pasadena, CA, July-September 1985.
2. Yuen, J., Divsalar, D., Kinman, P., and Simon, M., “Telemetry System,” Deep Space
Communications Systems Engineering, Plenum Press, New York, October 1983,
pp. 327-332.
DOCUMENT RETRIEVAL TRIGGERED BY SPACECRAFT
ANOMALY: USING THE KOLODNER CASE-BASED
REASONING (CBR) PARADIGM TO DESIGN A
FAULT-INDUCED RESPONSE SYSTEM
ABSTRACT
1. Research Engineer, Omitron, 6411 Ivy Lane, Suite 600, Greenbelt, MD 20770.
2. AI Group, NASA/Cal Tech Jet Propulsion Lab, 4800 Oak Grove, Pasadena, CA 91109.
3. UC Berkeley undergraduate student.
4. Also Director, Laboratoire d’Astronomie Spatiale, Traverse du Syphon BP8, 13376 Marseille Cedex 12, France.
INTRODUCTION
In February of 1995, the Extreme Ultraviolet Explorer Science Operations Center (ESOC)
moved to single-shift operations. Artificial Intelligence (AI) software monitors the
telemetry in order to reduce staffing needs and improve monitoring of the health and safety
of the satellite. Two of the human-monitored shifts were replaced by AI software that
pages personnel when an anomaly is detected. The dynamics of building an anomaly
response team under this new paradigm have changed since staff must be called in, and
responses are inherently slower. In any anomalous situation, response time is always
critical. Delays in responding to an anomaly may jeopardize science quality and
observatory health.
The Fault Induced Document Retrieval Officer (FIDO) prototype software reduces the
need for a large and costly response team. FIDO research initially focuses on the design
and development of software that will provide a system state summary (a configurable
engineering summary that displays the state of the ground systems and observatory) and
documentation for diagnosis of a potentially failing component that might have caused the
error message or anomaly. These features are complemented by a high-level
documentation browsing interface indexed to the particular error message. The document
collection consists of operations procedures, engineering problem reports, component
documentation, and engineering drawings. The prototype design includes the capability of
combining information on the state of the ground systems, with detailed, hierarchically
organized, hypertext-enabled documentation.
FIDO will capture the knowledge of operators in two ways. Initially, when FIDO first
retrieves documents for a particular error message or anomaly, the operator will be able to
designate which documents he or she considers most relevant. FIDO will archive this
scoring of the documents and present them accordingly when the error or anomaly occurs
again. Because the documents will be scored by operators when errors occur, the most
relevant documents would become the first to be displayed during subsequent events. If
the operator creates a new document in response to a particular error or anomaly, FIDO
will save the new document and present it the next time that the problem appears. A
successful implementation of this prototype would decrease the response time and level of
expertise needed to respond to errors, thereby increasing science productivity and reducing
overall operating costs.
In the process of developing the “zero shift” capability at CEA, we discussed the need for
computer-mediated or -assisted documentation selection and display for operations during
the times without on-site operators.
Experience teaches us that ground systems failures are much more common than failures of
the spacecraft platform or payload components. Responses to ground systems failures
occupy a majority of operators’ time and attention. Thus, ground systems become the most
pertinent domain for applications of FIDO.
The EUVE research regarding the delivery of complex sets of documentation to operators
engaged in real-time activities involving ground and spacecraft systems relates to
Feigenbaum’s concept of “knowledge publishing” (Feigenbaum 1993).1 Knowledge
publishing, Feigenbaum reports, is the conversion of normal, passive books into a system
that delivers knowledge to the user in “active books,” specifically in the context of a user’s
need or request. FIDO will “make active” the documentation used by operators during
responses to errors and anomalies. FIDO is a knowledge system because it captures the
knowledge of the operators.
Another aspect of our work is based on the research of Kolodner, which focuses on the use
of previous experience as a learning resource for responding to subsequent anomalous
events (Kolodner 1993). “Case-based reasoning” (CBR) is defined as a model that
incorporates problem solving, problem understanding, and problem learning. CBR
maintains a referential “case base” of earlier events and solutions.
1. Edward A. Feigenbaum, in a video-taped lecture, “Tiger in a Cage: Applications of
Knowledge-Based Systems” (Feigenbaum 1993), discusses various applications of knowledge-based
systems. After a review of the history of knowledge systems (artificial intelligence using
heuristic searching, problem-solving methods, and knowledge representation), Feigenbaum
discusses four major areas of knowledge systems. Of the four areas (planning and scheduling,
compliance or “rule helpers,” configuration applications, and knowledge publishing), FIDO is most
relevant to knowledge publishing but also has some relationship to configuration applications.
Kolodner bases her work on five premises. References to old cases are advantageous, and
references to similar situations are necessary. Understanding or interpreting a situation is a
necessary part of the reasoning cycle. Old cases need to be adapted to new situations.
Learning occurs as a natural consequence of reasoning. Learning results in: learning of
new procedures, refining of procedures, and learning when each is appropriately used.
FIDO follows Kolodner’s model to the extent that it incorporates previous search results
into its current search criteria; however, the current design of FIDO does not yet promote
the “understanding” of problems as they occur. FIDO supports two of Kolodner’s
premises (case reference and case adaptation), but does not support the premises regarding
“learning.”
According to Kolodner, these premises suggest that the quality of an instance of case-
based reasoning depends on five criteria: experiences it has had, ability to understand new
situations in terms of old experiences, adeptness at adaptation, adeptness at evaluating and
repair, and ability to appropriately integrate new experiences into existing memory.
Of Kolodner’s five criteria that influence the “quality” of case-based reasoning, we believe
that FIDO: will generate a body of experience as it is employed over time; may be able to
understand new situations in terms of previous ones; at present, can adapt only manually to
new situations; has some ability to evaluate and change its response; and can partially
integrate new experiences into memory.
Weiner et al. (1995) report on research using “case-based reasoning as a technique for
storing fault management experience and making it available to operators confronting
similar anomalous situations.”
Weiner et al. suggest three trends motivate the use of CBR to aid anomaly resolution:
increasing levels of automation, reduced budgets, and higher levels of personnel turnover.
Our experience confirms this theory (Kronberg 1995).
The success of the proof-of-concept FIXIT application for satellite ground control suggests
that a case-based fault management tool can be very effective in aiding operators
performing fault management. FIXIT’s success indicates that a FIXIT-like architecture
describes a useful way to present fault management information to spacecraft operators.
Furthermore, the FIXIT research indicates that providing operators with realtime access to
information about previous system faults, coupled with tailoring the organization of this
information to match the operator’s fault management strategies, can lead to more effective
fault management.
FIDO: DESIGN
We discussed the issue of what would be useful to an operator who is called in to respond
to a possible anomaly or critical situation. We reviewed a number of knowledge systems.
We compared the requirements with programs elsewhere. We determined the following
preliminary requirements for FIDO:
Based on the requirements determined from the discussions with users, we chose the
following features and capabilities:
SYSTEM STATE AND SUMMARY WINDOW. The engineering state and summary window
is a graphical representation of system state information. The window consists of a Java
applet to be displayed in a Web browser, and receives input from the telemetry stream.
This window displays immediate information about a fault and gives an operator the ability
to view additional state information that is not a part of the fault display but may be
connected upstream or downstream. This capability was demonstrated in FIDO v1.0.
DICTIONARY / THESAURUS FILE. The dictionary/thesaurus file lists keywords that are
mapped to ground system error messages from the host system. This file is formatted as an
ASCII hash-table with each system error being linked to a primary keyword as well as a
number of secondary keywords.
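As a sketch, such a file might link each error message to its keywords as below. The delimiter, entries, and parsing code are assumptions for illustration, not FIDO's actual file format.

```python
# Hypothetical ASCII dictionary/thesaurus: one entry per line,
# error message | primary keyword | comma-separated secondary keywords
SAMPLE = """\
tape mount failure|tape_drive_3|tape,drive,mount
telemetry gap|frame_sync|telemetry,sync
"""

def load_thesaurus(text):
    """Parse the table into {error message: (primary, [secondary, ...])}."""
    table = {}
    for line in text.strip().splitlines():
        error, primary, secondary = line.split("|")
        table[error] = (primary, secondary.split(","))
    return table

tbl = load_thesaurus(SAMPLE)
assert tbl["tape mount failure"][0] == "tape_drive_3"
assert "sync" in tbl["telemetry gap"][1]
```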
RULE BASE / INFERENCE ENGINE. The rule base/inference engine will not be
implemented in FIDO version 1.5. The implementation decision awaits the results of the
operational evaluation of FIDO v1.5, and is dependent upon the ability of the existing
FIDO architecture to supply operators with relevant documentation. If FIDO requires a
more complex knowledge structure, this will be implemented using the SHINE inference
engine developed at Jet Propulsion Laboratory.
SEARCH ENGINE / INDEXER. FIDO utilizes the Glimpse search engine, developed at the
University of Arizona, to index and retrieve documentation. Glimpse is a very powerful
indexing and query system that allows whole file systems to be searched extremely
quickly. Glimpse is very similar to the ‘grep’ program familiar to most UNIX users, but it
has several additional features that facilitate its use in FIDO. Glimpse is easily
customizable, allowing for many different types of layered and combined queries, and also
has an excellent Web interface.
GRAPHICAL USER INTERFACE. FIDO will make use of a Web browser for its graphical
user interface. This provides a cross-platform standard for developing FIDO’s GUI (i.e.,
HTML) and allows FIDO to be accessed remotely from any number of platforms. FIDO’s
GUI consists of three main parts: a document browser, a number of document editors, and
a tree display. The engineering state and summary window will also be displayed using the
Web browser interface.
EDITORS. The rule editor consists of a window which allows for generation of entries in
the Dictionary/Thesaurus file and for existing entries to be modified and saved. The rule
editor will take the form of an HTML page using forms. The procedure editor consists of a
window which allows new procedures to be written and existing procedures to be
modified and saved. The procedure editor will take the form of an HTML page using
forms and a standard HTML text widget. Procedures will be stored as HTML, although
the introduction of more powerful HTML editors (e.g., Netscape Gold) may allow for
more comprehensive editing capabilities in future releases. The existing problem reporting
system will be utilized for the initial release of FIDO.
TREE DISPLAY. The tree display is being developed in the form of a Java applet. The tree
display will show a breakdown of the ground system by component and by function. At the
leaf nodes of the tree will be Procedures, which will also be linked to individual problem
reports. We have yet to determine how to store the structure of the tree, either as a string
consisting of parent-child nodes or as a representation of the file directory structure.
FIDO PROCESS. In the first step of the FIDO diagnosis process, an element of the ground
system generates an error. Whenever a ground system error occurs, an error message
appends to the /var/log/esoc file. A shell script running as a cron job, fido_watch, checks
the /var/log/esoc file for new entries. When a new error message is detected, fido_watch
retrieves the error message from the /var/log/esoc file and finds its associated keywords in
the Dictionary/Thesaurus file. Fido_watch then initiates multiple separate Glimpse
searches using these keywords.
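The "only new entries since the last run" part of fido_watch can be sketched with a pure core function plus a thin file wrapper, as below. The log path, offset file, and line format are illustrative assumptions about the cron job's internals.

```python
import os

def new_entries(text, last_offset):
    """Pure core: given the full log text and the offset already processed,
    return (new non-empty lines, new offset)."""
    new = text[last_offset:]
    return [ln for ln in new.splitlines() if ln], len(text)

def check_log(log_path="/var/log/esoc", offset_path="/tmp/fido.offset"):
    """Cron-driven wrapper: report only lines appended since the last run."""
    last = 0
    if os.path.exists(offset_path):
        with open(offset_path) as f:
            last = int(f.read() or 0)
    with open(log_path) as f:
        text = f.read()
    lines, pos = new_entries(text, last)
    with open(offset_path, "w") as f:
        f.write(str(pos))          # remember how far we have read
    return lines
```

Each new line returned would then be looked up in the dictionary/thesaurus file to obtain its keywords.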
The first Glimpse search is based on the primary keyword linked to the error message. The
second Glimpse search again uses the primary keyword, but allows for a single misspelling
when looking for relevant files. This is a useful technique for getting around the problem of
over- or under-specified keywords or procedure entries. Without this search, it is conceivable
that a search for “tape_drive_3” would miss an entry generically related to tape drives that
refers to “tape_drive_N”. All additional searches are performed using the secondary
keywords from the Dictionary/Thesaurus file.
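The layered searches can be sketched as below, using Glimpse's agrep-style approximate matching, where a numeric flag such as -1 allows one spelling error. The exact command lines are hedged from Glimpse's documentation, and the generator is an assumption about how fido_watch sequences its searches.

```python
import subprocess

def glimpse_searches(primary, secondary):
    """Yield the Glimpse command lines run for one error message:
    exact primary, primary with one error allowed, then each secondary."""
    yield ["glimpse", primary]            # exact primary-keyword search
    yield ["glimpse", "-1", primary]      # tolerate a single misspelling
    for kw in secondary:
        yield ["glimpse", kw]             # secondary-keyword searches

def run_search(cmd):
    """Run one search and return its matching-lines output."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout

cmds = list(glimpse_searches("tape_drive_3", ["tape", "drive"]))
assert cmds[1] == ["glimpse", "-1", "tape_drive_3"]
```

The misspelling-tolerant search is what lets a query for "tape_drive_3" still reach a generic entry written as "tape_drive_N".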
The output of the Glimpse searches is piped to a CGI script, which produces an HTML
page with links to the referenced documents. In this way, the operator has access to all of
the documentation associated with a particular keyword, with files listed according to their
perceived relevance to the anomaly. Future versions of FIDO will make extensive use of
the SHINE inference engine being developed at the Jet Propulsion Laboratory by James
(1994).
At this point, the operator can access all of the available operational documentation in the
ESOC, including procedures, problem reports, and additional documentation such as
schematics and strip charts. FIDO also provides graphical tools for browsing,
bookmarking, and editing this store of documentation. Bookmarks are used to supply
FIDO with feedback about the relevancy of particular documents, thus even further
reducing response time if the anomaly were ever to recur.
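The bookmark-driven relevance feedback can be sketched as a simple score store: each bookmark raises a document's score for that error, so on recurrence the most relevant documents list first. The in-memory dictionary is an illustrative stand-in for FIDO's archive.

```python
from collections import defaultdict

# scores[error][doc] counts how often operators bookmarked doc for error
scores = defaultdict(lambda: defaultdict(int))

def bookmark(error, doc):
    """Operator feedback: mark doc as relevant to this error."""
    scores[error][doc] += 1

def ranked_docs(error, docs):
    """Order search hits by accumulated operator bookmarks (most first)."""
    return sorted(docs, key=lambda d: scores[error][d], reverse=True)

bookmark("tape mount failure", "proc/tape_recovery.html")
hits = ["reports/pr-101.html", "proc/tape_recovery.html"]
assert ranked_docs("tape mount failure", hits)[0] == "proc/tape_recovery.html"
```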
LESSONS LEARNED
Our work has resulted in several lessons that have guided significant application and
design decisions. Between FIDO 1.0 and FIDO 2.0 we decided that the spacecraft was too well
behaved to warrant this type of knowledge system; accordingly, we changed our focus to
ground systems, which prove a much more relevant domain. From the work of Weiner et
al., we realized the critical role of user confidence and the necessity of ease of use, so we
implemented relevant features: for example, the user is never made to cut and paste
between windows and can add to and change procedures in real time. This
work is supported by NASA contract NAS5-29298 and NASA/Ames grant NCC2-902.
REFERENCES
James, Mark L. SHINE 2.0 Reference Manual, Jet Propulsion Laboratory, NASA, 1994.
Kolodner, J. L. Case Based Reasoning, Morgan Kaufmann Publishers, San Mateo, CA,
1993.
Kronberg, F., Ringrose, P., Losik, L., Biroscak, D., and Malina, R. F. “Re-engineering the
EUVE Payload Operations Information Flow Process to Support Autonomous Monitoring
of Payload Telemetry,” Reengineering Telemetry, XXXI, 1995, 286-294.
ABSTRACT
A ground based intelligent agent and operations network is being created to handle all
aspects of spacecraft command and control. This system will have the dual purpose of
enabling cost efficient operation of a number of small satellites and serving as a flexible
testbed for the validation of space system autonomy strategies. The system is currently
being targeted to include over a dozen globally distributed amateur radio ground stations
and access to nearly ten spacecraft. The use of distributed computing systems and virtual
interaction schemes are significantly contributing to the creation of this system. The
Internet is used to link the network’s control centers and ground stations. In addition, a
World Wide Web (WWW) based user and operator interface is being developed to permit
high level goal specification of spacecraft experiments and actions. This paper will
describe the operating network being developed, the use of the Internet as an integral part
of the system’s architecture, the design of the WWW interface, and the future development
of the system.
KEY WORDS
INTRODUCTION
* Doctoral Candidate, Stanford University. Caelum Research Corporation, NASA Ames Research Center, CA.
† Engineers Candidate, Stanford University.
Providing advanced user interfaces to the users and operators of these systems is one
particular technique for enhancing a system through the use of automation. Such interfaces
can be used to permit convenient specification and delivery of services to external
customers. Similarly, well designed interfaces for system status and control can assist
operators in judging the efficiency of service provision, the health and status of the system,
the location and cause of faults, the availability and demand for resources, etc.
Advances in Internet technology and tools provide a low-cost and ubiquitous means of
delivering user interface resources in distributed systems. Internet connectivity can
provide the communications backbone for these interfaces. The graphical nature and
general familiarity of the WWW adds significantly to the ease of use of such interfaces.
Finally, the ability to provide automated multimedia attributes such as animations, sound,
and three dimensional visualization constitutes a powerful set of tools for creating easy-to-
use, high level, interactive system interfaces.
The Satellite Quick Research Testbed (SQUIRT) Microsatellite Program - The SSDL
SQUIRT program is a yearly project through which students design and fabricate a real
spacecraft capable of servicing low mass, low power, state-of-the-art research payloads.1
By limiting the design scope of these satellites, the project is simple and short enough so
that students can benefit educationally by seeing a full project life cycle and by being able
to technically understand the entire system. Typical design guidelines for these projects
include using a highly modular bus weighing 25 pounds, a hexagonal form that is 9 inches
high by 16 inches in diameter, amateur radio communications frequencies, and commercial
off-the-shelf components. Missions are limited to about one year of on-orbit operation.
Because little money is available for mission operations, a highly automated mission
control architecture is being developed.
The Automated Space System Experimental Testbed (ASSET) System - The ASSET
system is a global space operations network under development within SSDL. The first
goal of this system is to enable low-cost and highly accessible mission operations for
SQUIRT microsatellites as well as other university and amateur spacecraft. The second
goal of this system is to serve as a comprehensive, low inertia, flexible, real world
validation testbed for new automated operations technologies. Figure 1 shows a high level
view of the ASSET mission architecture. The basic components include the user interface,
a control center, ground stations, communications links, and the target spacecraft. During
the current development phase, a highly centralized operations strategy is being pursued
with nearly all mission management decision making executed in the control center. These
tasks include experimental specification, resource allocation throughout the ground and
space segment, fault management, contact planning, data formatting and distribution, and
executive control.2
The design of the ASSET user interface is one of the current projects being pursued. The
functions of this interface include accepting experimental parameters from users, providing
experiment constraint checking, loading parameters into the system database, notifying
users on the status of experiments and system status, and making data available for display
and download. In performing these functions, three primary design objectives are being
targeted.
First, the interface should permit decoupled user interaction with the mission control center
with respect to time and space. From a historical perspective, spacecraft mission
operations have often required collocated and/or synchronized interaction for users and
operators due to a required level of personal interaction, the use of specialized equipment,
security considerations, or other applicable reasons. Automation of many other mission
management tasks, such as resource scheduling, is relaxing the requirement for such
coupling. The ASSET system meets this design requirement through the use of the WWW
and automated request processing. This feature should increase the convenience and
accessibility for system users.
Second, the interface is being developed to interact with the user at a level appropriate to
their experience. To implement specific user requests, the system must have execution
level knowledge of the desired task. The ASSET system permits experiments to be
specified at this level. The generation of this execution level knowledge, however, may not
be of interest to or within the capability of the user. For some users, a very high level
description of the task may be more suitable. As an example, to broadcast a message from
the SAPPHIRE spacecraft, the vehicle requires the date and time of transmission. Most
users, however, think of the experiment in terms of who, or what area, they would like
to hear the message. The ASSET interface compensates for this by permitting
users to click on a world map at the target location for the broadcast. Using orbital
dynamic models, the automated system then converts the map location to a suitable date
and time that will be used in the execution level command sent to the spacecraft. In this
way, users with little knowledge or inclination can easily use the system; in addition,
experienced users will still retain the ability to control low level activity in the system.
Third, the interface seeks to guide the user to a request that meets the stated objectives
while being as general as possible. As an example, consider the alternative methods of
specifying the previously described SAPPHIRE broadcast function. Suppose, for instance,
that the user wants somebody in Washington, D.C. to hear a message anytime during the next
week. At the low level, the user calculates one particular time and date when this can be
achieved and submits an execution level command. At the high level, the user clicks on the
map and adds a deadline; this submission generates a number of applicable times and dates
that meet the overall goal of the user. Clearly, the latter method is preferable since it
lends flexibility to the system. Fundamentally, goal level direction helps to prevent
over-constrained requests that would otherwise hinder system-wide optimization.
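The map-click-plus-deadline translation described above can be sketched as follows. This is a hedged Python illustration (the actual ASSET scripts were Perl): the list of predicted passes would come from an orbit propagator, and the pass data, footprint radius, and function names here are all hypothetical.

```python
# Sketch: given a clicked target location and a deadline, return every
# candidate pass time rather than a single over-constrained one.
import math

def ground_distance_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

def candidate_times(passes, target_lat, target_lon, deadline_hr, footprint_km=2000.0):
    """All pass times (hours from now) before the deadline whose sub-satellite
    point falls within the communications footprint of the target."""
    return [t for (t, lat, lon) in passes
            if t <= deadline_hr
            and ground_distance_km(lat, lon, target_lat, target_lon) <= footprint_km]

# (time in hours, sub-satellite lat, lon) -- illustrative values only
passes = [(3.0, 40.0, -75.0), (14.5, 38.5, -78.0), (90.0, 39.0, -77.0), (20.0, -10.0, 30.0)]
times = candidate_times(passes, 38.9, -77.0, deadline_hr=24.0)  # Washington, D.C.
```

Returning the full candidate set, rather than one time, is what lets the scheduler keep its freedom when other requests compete for the same resources.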
To meet these design objectives, the ASSET system is being developed with a World
Wide Web (WWW) based interface capable of high level interaction. Use of the Internet
provides a widely available, convenient, low cost communications link between users and
mission control. Goal level design of the interface allows 1) simple and general
experimental requests and 2) ease of use for the operator.
The ASSET user interface is hosted on an existing web server which serves the entire
Department of Aeronautics and Astronautics at Stanford. The server, a Sun SPARC 5
running httpd software, is capable of executing programs through the Common Gateway
Interface (CGI) standard. The language used to implement the user interface is Perl. Perl
features powerful string manipulation capability, and is therefore ideally suited for
processing text input from the user as well as processing output from satellites linked to
the ASSET system. The user interface is built following the traditional client-server
architecture, shown in Figure 2. Server security issues have not been explored at this stage.
User Input - The input segment of the ASSET interface consists of a set of hypertext
markup language (HTML) web pages and their associated Perl scripts. The user specifies
an experiment request at several different levels, in a progression from general input (e.g.
name, address, choice of experiment) to specific input detailing the user’s request. As
shown in Figure 3, this results in a tree structure of web pages, through which the user
progresses from root (top-level) to leaf (specific request). At each stage in the input
procedure, the user fills out an HTML form, the contents of which are processed by that
page’s associated Perl script. Simple error checking is performed on all inputs and the user
is served back the appropriate web page, depending on the outcome. If the input passes the
initial error screening, the data is appended to a temporary text file in a standardized
format suitable for later use by the ASSET system. Should the user have made an error, a
dynamically generated error message is served back with information on how to correct
the error and links to help files.
One of the challenges with the tree structure for HTML pages and Perl scripts is the need
to track many simultaneous users without mixing their inputs. The hypertext transfer
protocol (HTTP) provides no consistent means of identifying users, so a tracking
mechanism must be devised. Identification is often accomplished through the use of a
registration giving each user a separate name and password. In the case of the ASSET
interface, it was decided to use a simpler solution: every user beginning a request is
assigned a unique identifier (UID).
Figure 2 - Client-server architecture. Figure 3 - Tree input structure.
The UID is actually the server’s process identification
number for that particular HTTP connection, and is therefore unique for each simultaneous
user. The UID is passed along everywhere the user goes, hidden in the HTML code.
Whenever an HTML page is served back to the user, a Perl script dynamically inserts the
assigned UID into every form (as a “hidden” text input) and every uniform resource
locator (URL) (as a query string). The next time the user follows a link or submits a form,
he/she is uniquely identified and can thus be tracked reliably through a request without
regard to other simultaneous users.
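The UID-threading trick described above can be sketched as follows. This is an illustrative Python sketch (the real scripts were Perl), with the HTML patterns deliberately simplified: before a page is served, the UID is injected into every form as a hidden input and appended to every link as a query string.

```python
# Sketch: thread the UID through every form and URL on a served page so the
# next submission or click identifies the user. Tag matching is simplified
# for illustration; a production version would use a real HTML parser.
import re

def inject_uid(page_html, uid):
    # add a hidden input inside every form
    page_html = re.sub(
        r"(<form\b[^>]*>)",
        rf'\1<input type="hidden" name="uid" value="{uid}">',
        page_html)
    # append the UID as a query string to every href
    page_html = re.sub(
        r'href="([^"?]+)"',
        rf'href="\1?uid={uid}"',
        page_html)
    return page_html

page = '<form action="/cgi-bin/req.pl"><input name="x"></form><a href="/help.html">help</a>'
served = inject_uid(page, 4242)
```

Using the server's process ID as the UID, as the paper does, avoids any registration step; the trade-off is that the identifier lives only as long as the request session.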
A typical user’s interaction with ASSET’s input interface might go as follows. At first the
user is greeted with the top level web page, shown in a screen shot in Figure 4. In this
example, the user has entered her name and address, selected a satellite telemetry dump
for her experiment, and specified “any” in the satellite choice menu. When the form is
submitted, its associated Perl script writes this information to a new temporary file and
serves back a page which contains a form to be used for specifying a telemetry request. In
this case, the form served back, shown in Figure 5, concerns the SAPPHIRE satellite and
prompts the user for dates and times and just what is to be sampled. Once the user has
submitted this data, it is appended to the appropriate temporary file and a thank you
message is served back. The interface is designed to forward this request to the ASSET
database for scheduling and execution.
User Output - The output segment of the ASSET interface serves experiment results back
to the user, and allows browsing of data already archived in the ASSET database. Upon
arrival, raw satellite data files are pre-processed and stored in the database. The user is
automatically notified and directed to the results of her particular experiment. Different
telemetry subsets can be dynamically generated from the masterframe and served to the
user using Perl scripts as shown in Figure 6. These scripts not only present the data in a
clear and organized fashion but are capable of performing operations on the data such as
converting raw values into meaningful units and performing statistical calculations.
Figure 4 - Selection of an experiment
Figure 5 - Specification of experimental parameters
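The output-side processing described above, pulling a channel out of stored masterframes, converting raw counts to engineering units, and computing statistics, can be sketched as follows. This is an illustrative Python sketch (the real scripts were Perl); the channel names and the linear calibrations are hypothetical.

```python
# Sketch: convert one channel's raw telemetry counts from each masterframe
# into engineering units, then compute a simple statistic for display.
from statistics import mean

# hypothetical calibration: engineering value = gain * raw + offset
CALIBRATION = {"battery_v": (0.05, 0.0), "tray_temp_c": (0.5, -40.0)}

def channel_series(masterframes, channel):
    """One channel's values, in engineering units, across all masterframes."""
    gain, offset = CALIBRATION[channel]
    return [gain * frame[channel] + offset for frame in masterframes]

frames = [{"battery_v": 160, "tray_temp_c": 120},
          {"battery_v": 164, "tray_temp_c": 124}]
volts = channel_series(frames, "battery_v")
avg = mean(volts)
```

The same pattern extends to any derived subset: the script selects channels, applies calibrations, and formats the result, so new display pages need no new data storage.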
The output interface is also designed to allow high-level goal specified browsing of the
experiment database. For example, if a user would like to see the typical behavior of a
particular satellite as it goes into eclipse, telemetry sets that exhibit this feature can be
extracted from the database. Long-term trend analysis can be accomplished as well,
through automated parsing of a large number of past data files.
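Goal-specified browsing of the kind described above can be sketched as follows. This is a hedged Python illustration: the field name, the eclipse signature (a sharp drop in solar array current), and the threshold are all hypothetical choices for the sake of the example.

```python
# Sketch: scan archived telemetry records for eclipse entry, flagged here as
# a sharp drop in solar array current from a sunlit level.

def eclipse_entries(records, drop_ratio=0.2, sunlit_amps=0.5):
    """Indices where array current falls below drop_ratio of the previous
    (sunlit) sample, suggesting the satellite has just entered eclipse."""
    hits = []
    for i in range(1, len(records)):
        prev, cur = records[i - 1]["array_amps"], records[i]["array_amps"]
        if prev > sunlit_amps and cur < drop_ratio * prev:
            hits.append(i)
    return hits

records = [{"array_amps": a} for a in (1.1, 1.0, 0.05, 0.0, 0.9)]
idx = eclipse_entries(records)
```

Run over a large archive, a detector like this turns a high level request ("show me eclipse entries") into the concrete set of telemetry segments to serve back.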
Given the use of the WWW based interface, the telemetry display system for the ASSET
project is experimenting with the use of the Virtual Reality Modeling Language, a new
three dimensional modeling specification compliant with many of the advanced Web
browsers. Through the use of this language and previously described techniques such as
CGI scripting and animation, three dimensional representations of the spacecraft, its
environment, and its components can be generated and used for health analysis. As an
example, Figure 4 shows a tray level view of the SAPPHIRE spacecraft; in this particular
display, the color of each box represents the temperature of the component. Other displays
are being developed to show the vehicle’s attitude, orbital position, environmental
conditions, component status, system schematics, etc.
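The temperature-colored component display described above can be sketched as follows. This is an illustrative Python sketch that emits a small VRML 2.0 fragment; the component names, box sizes, and the blue-to-red color mapping are hypothetical, and a real display would take its temperatures from live telemetry.

```python
# Sketch: generate one VRML box per component, with its diffuse color
# interpolated from blue (cold) to red (hot) according to temperature.

def temp_to_rgb(temp_c, lo=-20.0, hi=60.0):
    """Map a temperature onto a blue-to-red scale, clamped to [lo, hi]."""
    t = min(max((temp_c - lo) / (hi - lo), 0.0), 1.0)
    return (t, 0.0, 1.0 - t)

def vrml_box(name, temp_c):
    r, g, b = temp_to_rgb(temp_c)
    return (f"DEF {name} Shape {{\n"
            f"  appearance Appearance {{ material Material {{"
            f" diffuseColor {r:.2f} {g:.2f} {b:.2f} }} }}\n"
            f"  geometry Box {{ size 1 1 0.2 }}\n"
            f"}}\n")

scene = "#VRML V2.0 utf8\n" + "".join(
    vrml_box(comp, t) for comp, t in [("BATTERY_TRAY", 25.0), ("CPU_TRAY", 58.0)])
```

Because the scene is regenerated per request by a script, the same mechanism serves attitude, orbital position, or component status views by swapping in different geometry and color mappings.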
FUTURE WORK
Development of the ASSET system is an ongoing research activity within the SSDL. Work
is being performed in all functional areas in order to incorporate more advanced and cost-
efficient capabilities. These capabilities will enable the system to include more users,
missions, spacecraft, and ground stations within its architecture. Improvements to the user
interface include upgrading its goal level experimental specification capability, providing
an easy access to the database of spacecraft data, and enhancing the content and display of
spacecraft health information.
The work performed to date has attracted significant attention and has laid the groundwork
for a variety of initiatives with other agencies and universities. First, as part of the NASA
New Millennium Program (NMP), much of the user interface work performed on the
ASSET system is providing technology feedback to NMP’s autonomy development team.
Second, as part of the U. S. Air Force Satellite Control Network’s (AFSCN) new
commercialization initiatives, SSDL will attempt to apply its interface strategies to the
AFSCN mission architecture in order to enhance the provision of services to the
commercial sector. Finally, SSDL hopes to build upon previous cooperative activities with
the University of Umea in Sweden in order to create operator interface software that will
enable telepresence operation of remotely stationed amateur radio ground stations.
CONCLUSION
Advanced command and telemetry interfaces have proven useful in the operation of
devices and the provision of services due to their ability to provide convenient interaction
at a level appropriate to the user. This is especially true in the field of spacecraft
operations due to the distributed nature and complexity of the systems involved. Design of
the ASSET user interface is showing how high level, automated, multimedia, Internet
based interfaces can be used to enhance the operation of these systems for both customers
and system operators. This work is directly contributing to the space industry and is paving
the way to achieving NASA’s goal of providing ways for the general public to conduct
science from their desktop.
ACKNOWLEDGMENTS
The authors wish to acknowledge the outstanding work and commitment of other members
of the current ASSET design team: Mike Swartwout, Andrew Miller, and Tim Caro-Bruce.
In addition, special thanks is given to NASA’s Jet Propulsion Laboratory and Ames
Research Center for their support and assistance with this research project. Finally,
appreciation is extended to SSDL’s SAPPHIRE design team as well as Weber State’s
WeberSat operations team for their willingness to participate in, contribute to, and provide
feedback concerning various aspects of the ASSET system.
REFERENCES
[1] Kitts, Christopher A., and Robert J. Twiggs, “The Satellite Quick Research Testbed
(SQUIRT) Program.” In Proceedings of the 8th Annual AIAA/USU Conference on
Small Satellites in Logan, Utah, September 16-22, 1994, by the Utah State
University. Logan, UT: Utah State University, 1994, 80-83.
[2] Kitts, Christopher A., “A Global Spacecraft Control Network for Spacecraft Autonomy
Research.” Pending publication in the Proceedings of Spaceops '96: The Fourth
International Symposium on Space Mission Operations and Ground Data Systems in
Munich, Germany, September 16-20, 1996, by the Electronic Publishing System.
Munich, Germany: German Aerospace Research Establishment, 1996.
[3] Net.Genesis, and Devra Hall. Build a Web Site. Rocklin, CA: Prima Publishing, 1995.
[4] Schwartz, Randal L. Learning Perl. Sebastopol, CA: O’Reilly & Associates, Inc.,
1993.
[5] Pesce, Mark. VRML: Browsing & Building Cyberspace. Indianapolis, IN: New Riders
Publishing, 1995.
AUTHOR INDEX