Unmanned Aerial Vehicles (UAV) and Drones
Edited by:
Zoran Gacovski
Arcler Press
www.arclerpress.com
Unmanned Aerial Vehicles (UAV) and Drones
Zoran Gacovski
Arcler Press
224 Shoreacres Road
Burlington, ON L7L 2H2
Canada
www.arclerpress.com
Email: [email protected]
This book contains information obtained from highly regarded resources. Reprinted
material sources are indicated. Copyright for individual articles remains with the
authors as indicated and is published under a Creative Commons License. A wide variety
of references are listed. Reasonable efforts have been made to publish reliable data;
the views articulated in the chapters are those of the individual contributors, and not
necessarily those of the editors or publisher. The editors and publisher are not
responsible for the accuracy of the information in the published chapters or for the
consequences of its use. The publisher assumes no responsibility for any damage or
grievance to persons or property arising from the use of any materials, instructions,
methods or ideas in the book. The editors and the publisher have attempted to trace the
copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission has not been obtained. If any copyright holder has not
been acknowledged, please write to us so that we may rectify the omission.
Notice: Registered trademarks of products or corporate names are used only for
explanation and identification, without intent to infringe.
Arcler Press publishes a wide variety of books and eBooks. For more information about
Arcler Press and its products, visit our website at www.arclerpress.com
DECLARATION
Some content or chapters in this book are open-access, copyright-free published
research works, published under a Creative Commons License and indicated with a
citation. We are thankful to the publishers and authors of this content, as without
them this book would not have been possible.
ABOUT THE EDITOR
Dr. Zoran Gacovski earned his PhD at the Faculty of Electrical Engineering,
Skopje. His research interests include intelligent systems, software engineering,
fuzzy systems, graphical models (Petri nets, neural and Bayesian networks), and
IT security. He has published over 50 journal and conference papers and has served
as a reviewer for renowned journals. He is currently a professor of computer
engineering at European University, Skopje, Macedonia.
TABLE OF CONTENTS
List of Contributors........................................................................................xv
List of Abbreviations................................................................................... xxiii
Preface.................................................................................................... ....xxv
Chapter 7 Contour Based Path Planning with B-Spline Trajectory Generation for
Unmanned Aerial Vehicles (UAVs) over Hostile Terrain........................ 141
Abstract ................................................................................................. 141
Introduction ........................................................................................... 142
Modelling............................................................................................... 144
Problem Formulation ............................................................................. 149
Generation Of Nodes ............................................................................ 151
Implementation ..................................................................................... 151
Conclusions And Future Works .............................................................. 158
Acknowledgements ............................................................................... 158
References.............................................................................................. 159
Chapter 10 Dubins Waypoint Navigation of
Small-Class Unmanned Aerial Vehicles.................................................. 209
Abstract.................................................................................................. 209
Introduction............................................................................................ 210
Method................................................................................................... 212
Results.................................................................................................... 220
Conclusion............................................................................................. 223
Acknowledgements................................................................................ 224
Nomenclature......................................................................................... 224
Conflicts Of Interest................................................................................ 225
References.............................................................................................. 226
Chapter 13 Visual Flight Control of a Quadrotor
Using Bioinspired Motion Detector........................................................ 295
Abstract.................................................................................................. 296
Introduction............................................................................................ 296
Bioinspired Image Processing................................................................. 298
Multicamera 3D Pose Estimation............................................................ 300
Controller............................................................................................... 302
Experiments And Results......................................................................... 304
Conclusions And Future Works............................................................... 310
Acknowledgments.................................................................................. 310
References.............................................................................................. 311
Index...................................................................................................... 399
LIST OF CONTRIBUTORS
Paulo Carvalhal
University of Minho, Portugal
Cristina Santos
University of Minho, Portugal
Manuel Ferreira
University of Minho, Portugal
Luís Silva
University of Minho, Portugal
José Afonso
University of Minho, Portugal
Andrew Zulu
Department of Mechanical and Marine Engineering, Polytechnic of Namibia, Windhoek,
Namibia
Samuel John
Department of Mechanical and Marine Engineering, Polytechnic of Namibia, Windhoek,
Namibia
Larry M. Silverberg
Mechanical and Aerospace Engineering, North Carolina State University, Raleigh, USA
Chad Bieber
Mechanical and Aerospace Engineering, North Carolina State University, Raleigh, USA
Gordon Ononiwu
Department of Electrical/Electronic Engineering, Federal University of Technology,
Owerri, Nigeria
Ondoma Onojo
Department of Electrical/Electronic Engineering, Federal University of Technology,
Owerri, Nigeria
Oliver Ozioko
Department of Electrical/Electronic Engineering, Federal University of Technology,
Owerri, Nigeria
Onyebuchi Nosiri
Department of Electrical/Electronic Engineering, Federal University of Technology,
Owerri, Nigeria
Antonio Barrientos
Universidad Politécnica de Madrid – Robotics and Cybernetics Group, Spain
Pedro Gutiérrez
Universidad Politécnica de Madrid – Robotics and Cybernetics Group, Spain
Julián Colorado
Universidad Politécnica de Madrid – Robotics and Cybernetics Group, Spain
Jaime del-Cerro
Universidad Politécnica de Madrid – Robotics and Cybernetics Group, Spain
Alexander Martínez
Universidad Politécnica de Madrid – Robotics and Cybernetics Group, Spain
Ee-May Kan
Nanyang Technological University, Singapore City, Singapore
Meng-Hiot Lim
Nanyang Technological University, Singapore City, Singapore
Swee-Ping Yeo
National University of Singapore, Singapore City, Singapore
Jiun-Sien Ho
Temasek Polytechnic, Singapore City, Singapore
Zhenhai Shao
University of Electronic Science and Technology of China, Chengdu, China.
Lifeng Wang
Field Bus Technology & Automation Lab, North China University of Technology,
Beijing, China
Yichong He
Field Bus Technology & Automation Lab, North China University of Technology,
Beijing, China
Zhixiang Zhang
Field Bus Technology & Automation Lab, North China University of Technology,
Beijing, China
Congkui He
Field Bus Technology & Automation Lab, North China University of Technology,
Beijing, China
Travis Dierks
Missouri University of Science and Technology, United States of America
S. Jagannathan
Missouri University of Science and Technology, United States of America
Dahan Xu
Mechanical and Aerospace Engineering, North Carolina State University, Raleigh, USA
F. Aznar
Department of Computer Science and Artificial Intelligence, University of Alicante,
03090 Alicante, Spain
M. Sempere
Department of Computer Science and Artificial Intelligence, University of Alicante,
03090 Alicante, Spain
M. Pujol
Department of Computer Science and Artificial Intelligence, University of Alicante,
03090 Alicante, Spain
R. Rizo
Department of Computer Science and Artificial Intelligence, University of Alicante,
03090 Alicante, Spain
M. J. Pujol
Department of Applied Mathematics, University of Alicante, 03090 Alicante, Spain
Xiaodong Zhang
School of Automation, Shenyang Aerospace University, Shenyang 110136, China
Xiaoli Li
School of Automation and Electrical Engineering, University of Science and Technology
Beijing, Beijing 100083, China
Kang Wang
School of Automation and Electrical Engineering, University of Science and Technology
Beijing, Beijing 100083, China
Yanjun Lu
School of Automation, Shenyang Aerospace University, Shenyang 110136, China
Lei Zhang
Institute of Automatic Control Engineering (LSR), Technische Universität München,
80290 München, Germany
Tianguang Zhang
Institute of Automatic Control Engineering (LSR), Technische Universität München,
80290 München, Germany
Haiyan Wu
Institute of Automatic Control Engineering (LSR), Technische Universität München,
80290 München, Germany
Alexander Borst
Department of Systems and Computational Neurobiology, Max Planck Institute of
Neurobiology, Am Klopferspitz 18, D-82152 Martinsried, Germany
Kolja Kühnlenz
Institute of Automatic Control Engineering (LSR), Technische Universität München,
80290 München, Germany
Institute for Advanced Study (IAS), Technische Universität München, 80290 München,
Germany
Daeil Jo
Department of Industrial Engineering, College of Engineering, Ajou University, Suwon,
South Korea
Sergey I. Ivashov
Remote Sensing Laboratory, Bauman Moscow State Technical University, Moscow,
Russia
Alexander B. Tataraidze
Remote Sensing Laboratory, Bauman Moscow State Technical University, Moscow,
Russia
Vladimir V. Razevig
Remote Sensing Laboratory, Bauman Moscow State Technical University, Moscow,
Russia
Eugenia S. Smirnova
Remote Sensing Laboratory, Bauman Moscow State Technical University, Moscow,
Russia
Olga S. Walsh
Department of Plant Sciences, Southwest Research and Extension Center, University of
Idaho, Moscow, ID, USA.
Sanaz Shafian
Department of Plant Sciences, Southwest Research and Extension Center, University of
Idaho, Moscow, ID, USA.
Juliet M. Marshall
Department of Entomology, Plant Pathology, and Nematology, Idaho Falls Research
and Extension Center, University of Idaho, Idaho Falls, ID, USA.
Chad Jackson
Department of Entomology, Plant Pathology, and Nematology, Idaho Falls Research
and Extension Center, University of Idaho, Idaho Falls, ID, USA.
Jordan R. McClintick-Chess
Department of Plant Sciences, Southwest Research and Extension Center, University of
Idaho, Moscow, ID, USA.
Steven M. Blanscet
BASF-Chemical Co., Caldwell, ID, USA.
Kristin Swoboda
Take Flight UAS, LLC, Boise, ID, USA.
Craig Thompson
Take Flight UAS, LLC, Boise, ID, USA.
Kelli M. Belmont
Seminis, Payette, ID, USA.
Willow L. Walsh
Vallivue High School, Caldwell, ID, USA.
Laura C. Loyola
Spatial Sciences Institute, University of Southern California, Los Angeles, CA, USA.
Jason T. Knowles
Spatial Sciences Institute, University of Southern California, Los Angeles, CA, USA.
GeoAcuity, Los Angeles, CA, USA.
Andrew J. Marx
Spatial Sciences Institute, University of Southern California, Los Angeles, CA, USA.
Ryan McAlinden
Spatial Sciences Institute, University of Southern California, Los Angeles, CA, USA.
Institute for Creative Technologies, University of Southern California, Los Angeles,
CA, USA.
Steven D. Fleming
Spatial Sciences Institute, University of Southern California, Los Angeles, CA, USA.
Sedam Lee
Department of Industrial Engineering, College of Engineering, Ajou University, Suwon,
South Korea
Yongjin Kwon
Department of Industrial Engineering, College of Engineering, Ajou University, Suwon,
South Korea
LIST OF ABBREVIATIONS
AP Access Point
AVCL Aerial Vehicle Control Language
AOA Angle of Attack
AED Automated External Defibrillator
AUAS Autonomous Unmanned Aerial Systems
BET Blade Element Theory
CRP Calibrated Reflectance Panel
CIC Catalina Island Conservancy
COA Certificate of Waiver or Authorization
CW Clockwise
CHs Cluster Heads
COTS Commercial off the Shelf
CASI Compact Airborne Spectrographic Imager
DLS Downwelling Light Sensor
DWN Dubins Waypoint Navigation
ERS Earth Remote Sensing
ESC Electronic Speed Controllers
EMD Elementary Motion Detector
ERR Experimental Railway Ring
FAA Federal Aviation Administration
FSM Finite State Machine
FL Formation Leader
GIST Geographic Information Sciences & Technology
GIS Geographic Information System
GPS Global Positioning System
GUI Graphical User Interface
GNDVI Green Normalized Difference Vegetation Index
GTS Guaranteed Time Slot
IMU Inertial Measurement Unit
ICT Institute for Creative Technologies
ICE Internet Communications Engine
IFT Iterative Feedback Tuning
ILC Iterative Learning Control
LQR Linear Quadratic Regulator
MM Master Module
MAVs Microaerial Vehicles
MP Mission Planner
MFAC Model Free Adaptive Control
MT Momentum Theory
NN Neural Networks
NDVI Normalized Difference Vegetation Index
OFN Obstacle Field Navigation
OWT One World Terrain
PLR Packet Loss Ratio
PLS Partial Least Squares
PCA Principal Component Analysis
PID Proportional Integral Derivative
PWM Pulse Width Modulation
RCS Radar Cross Section
RN Relay Nodes
RPL Remote Pilot License
RMSE Root Mean Squared Error
SMC Sliding Mode Control
SSI Spatial Sciences Institute
TIFF Tag Image File Format
UAS Unmanned Aerial Systems
VIs Vegetation Indices
VTOL Vertical Take-Off And Landing
VM Virtual Machine
VR/AR Virtual Reality/Augmented Reality
VRFT Virtual Reference Feedback Tuning
WN Waypoint Navigation
PREFACE
Drones are unmanned aerial vehicles (UAVs) that can fly over distances ranging
from a few tens of meters up to thousands of kilometers, or small (micro and
nano) quadrotors that can move indoors. Today there is a rising need for these
types of flying vehicles, for both civil and military applications. There is
also significant interest in developing new types of drones that can fly
autonomously around different locations and environments and accomplish
different technical missions (agriculture, public events such as sports and
concerts, space research, etc.).
Within the past decade, a wide spectrum of applications has emerged, resulting
in many variations in the size and design of UAVs (drones). The year 2015 saw
an exponential boost in drone applications in many human-oriented fields and
environments, especially in agriculture and forestry (75% of all applications).
Drone categories differ by configuration, platform and application. Three major
classes are defined in the literature, and Class I has four subclasses (a, b, c, d):
Class I-a – Nano drones, weight < 200 g.
Class I-b – Micro drones, weight 200 g – 2 kg.
Class I-c – Mini drones, weight 2 kg – 20 kg.
Class I-d – Small drones, weight 20 kg – 150 kg.
Class II – Tactical drones, weight 150 kg – 600 kg.
Class III – Strike/HALE drones, weight over 600 kg.
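As a quick sanity check of the taxonomy above, the weight thresholds can be expressed as a small lookup. This is a sketch only: the boundary values are treated here as half-open intervals, which the preface does not state explicitly.

```python
def drone_class(weight_kg: float) -> str:
    """Map take-off weight (kg) to the class labels listed above.

    Boundaries are treated as half-open intervals [lower, upper),
    an assumption since the source ranges share their endpoints.
    """
    if weight_kg < 0.2:
        return "I-a (nano)"
    if weight_kg < 2:
        return "I-b (micro)"
    if weight_kg < 20:
        return "I-c (mini)"
    if weight_kg < 150:
        return "I-d (small)"
    if weight_kg < 600:
        return "II (tactical)"
    return "III (strike/HALE)"

print(drone_class(0.05))  # I-a (nano)
print(drone_class(5))     # I-c (mini)
print(drone_class(1000))  # III (strike/HALE)
```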
This edition covers different topics on UAVs and drones, such as the design and
development of UAVs, trajectory control techniques, small drones and quadrotors,
and application scenarios of UAVs and drones.
Section 1 focuses on the design and development of UAVs, describing the design
and development of a fly-by-wireless UAV platform, a review of control
algorithms for autonomous quadrotors, a central command architecture for
high-order autonomous unmanned aerial systems, and a quadcopter design for
payload delivery.
Section 2 focuses on trajectory control techniques, describing the development
of a rescue material transport UAV, railway transport infrastructure monitoring
by UAVs and satellites, an assessment of UAV-based vegetation indices for
nitrogen concentration estimation in spring wheat, research and teaching
applications of remote sensing integrated with GIS (examples from the field),
and the development of a drone cargo bay with real-time temperature control.
Section 3 focuses on small drones and quadrotors, describing neural network
control and wireless sensor network-based localization of quadrotor UAV
formations, Dubins waypoint navigation of small-class unmanned aerial vehicles,
modelling oil-spill detection with swarm drones, a survey of modelling and
identification of quadrotor robots, and visual flight control of a quadrotor
using a bioinspired motion detector.
Section 4 focuses on application scenarios of UAVs and drones.
SECTION 1
DESIGN AND DEVELOPMENT OF UAVs
CHAPTER 1
Design and Development of a Fly-by-Wireless UAV Platform
INTRODUCTION
The development of unmanned aerial vehicles (UAVs) has become an active
area of research in recent years, and very interesting devices have been
developed. UAVs are important instruments for numerous applications, such
as forest surveillance and fire detection, coastal and economic exclusive
zone surveillance, detection of watershed pollution and military missions.
Citation: Paulo Carvalhal, Cristina Santos, Manuel Ferreira, Luís Silva and José
Afonso (January 1st 2009). “Design and Development of a Fly-by-Wireless UAV
Platform”, Aerial Vehicles, Thanh Mung Lam, IntechOpen, DOI: 10.5772/6464.
Copyright: © 2009 The Author(s). Licensee IntechOpen. This chapter is distributed
under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0
License.
The work described in this chapter is part of a larger project, named AIVA,
which involves the design and development of an aerial platform, as well
as the instrumentation, communications, flight control and artificial vision
systems, in order to provide autonomous takeoff, flight mission and landing
maneuvers. The focus of the chapter is on one of the main innovative
aspects of the project: the onboard wireless distributed data acquisition and
control system. Traditionally, UAVs present an architecture consisting of
one centralized and complex unit, with one or more CPUs, to which the
instrumentation devices are connected by wires. At the same time, they
have bulky mechanical connections. In the approach presented here, dubbed
“fly-by-wireless”, the traditional monolithic processing unit is replaced by
several less complex units (wireless nodes), spread out over the aircraft.
In that way, the nodes are placed near the sensors and controlled surfaces,
creating a network of nodes with the capacity of data acquisition, processing
and actuation.
This proposed fly-by-wireless platform provides several advantages
over conventional systems, such as higher flexibility and modularity, as well
as easier installation procedures, due to the elimination of connecting cables.
However, it also introduces several challenges. The wireless network that
supports the onboard distributed data acquisition and control system needs
to satisfy demanding requirements in terms of quality of service (QoS), such
as sustainable throughput, bounded delay and reliable packet delivery. At
the same time, it is necessary to guarantee that the power consumption of the
battery powered wireless nodes is small, in order to increase the autonomy of
the system. Currently there are many different wireless network technologies
available in the market. Section 2 presents an overview of the most relevant
technologies and discusses their suitability to meet the above requirements.
Based on this analysis, we chose the Bluetooth wireless network technology
as the basis for the design and development of a prototype of the fly-by-
wireless system. The system was implemented using commercial off-the-
shelf components, in order to provide a good trade-off between development
costs, reliability and performance. Some other objectives were also pursued
in the development of the system, namely the design of a framework
where communication between nodes is effective and independent of the
technology adopted, the development of a design approach to model the
embedded system and the development of an application oriented operating
system with a modular structure.
The following sections are organized as follows: section 2 presents an
overview of available wireless network technologies, taking into account
of the device. At the MAC layer, Bluetooth devices use a polling-based
protocol that provides support for both real-time and asynchronous traffic.
Bluetooth provides better overall characteristics than the other networks
discussed here for the desired application. It drains much less power than
802.11 and HIPERLAN/2, uses a MAC protocol that provides support
for real-time traffic, and provides a higher gross data rate than ZigBee.
Bluetooth spread spectrum covers a bandwidth of 79 MHz while ZigBee
operates in a band of less than 5 MHz, which makes the former more robust
against interference. Moreover, Bluetooth provides an adaptive frequency
hopping mechanism that avoids frequency bands affected by interference.
Given these characteristics and the availability of the technology at the time
of development, Bluetooth was chosen as the supporting wireless network
technology for the development of the prototype of the system described in
the following sections.
Figure 1. Global electronics architecture of the AIVA UAV platform.
This architecture allows processing units to be introduced into or removed from
the platform easily. For instance, new sensors or new vision units can be
included. In the first case, a new module must be connected to the Bluetooth
piconet; in the second case, the new module is connected to the local
multi-access bus.
Physical Architecture
The physical part of the platform is built around a low-power Texas
Instruments MSP430 microcontroller, a von Neumann 16-bit RISC architecture
with mixed program, data and I/O in a 64 KB address space. Besides its low
power profile (about 280 µA when operating at 1 MHz at 2.2 V DC), the MSP430
offers some interesting features, such as single-cycle register operations,
direct memory-to-memory transfers and a CPU-independent hardware
multiplication unit. From the flexibility perspective, the relevant features
are a flexible I/O structure capable of independently handling different I/O
bits in terms of data direction, interrupt programming and edge-triggering
selection; two USARTs supporting SPI or UART protocols; an onboard 12-bit SAR
ADC with a 200 kHz sampling rate; and PWM-capable timers.
The Bluetooth modules chosen for the implementation of the wireless nodes
are OEM serial adapter devices manufactured by connectBlue. The master
node uses an OEMSPA33i module and the slave nodes use OEMSPA13i
modules (connectBlue, 2003). These modules include integrated antennas;
nevertheless, we plan to replace them with modules with external antennas
in future versions of the platform, to be able to shield the modules in order
to increase the reliability of the system against electromagnetic interference.
While the module used on the master (OEMSPA33i) allows up to seven
simultaneous connections, the module used on the slaves (OEMSPA13i) has
a limitation of only three simultaneous connections. However, this limitation
does not represent a constraint to the system because the slaves only need to
establish one connection (to the master).
The connectBlue modules implement a virtual machine (VM) that provides a serial
interface abstraction to the microcontroller, so Bluetooth stack details can be
ignored and focus can be directed to the application. The manufacturer’s virtual
machine implements a wireless multidrop access scheme where the master receives
all frames sent by the slaves and all slaves can listen to the frames sent by
the master, in a point-to-multipoint topology.
The AIVA onboard wireless system is composed of one Bluetooth piconet containing
seven nodes: one master (MM - Master Module) and six slaves (SAM - Sensing &
Actuation Modules). The nodes are spread over the aircraft structure, as shown
in Figure 3.
The master node (MM) is placed at the fuselage body, and acts as the network
and flight controller, onboard data logger, and communications controller for the
link with the ground station. On each wing, there is a SAM node for an electrical
propulsion motor and for control surfaces (ailerons and flaps). These wing nodes
are responsible for motor speed control and operating temperature monitoring, as
well as control surfaces actuation and position feedback.
In the fuselage body, there are two other SAM nodes, one for a GPS module
and the other for an inertial measurement unit (IMU), which provide information
for navigational purposes. At the tail, there is another SAM node
for elevator and rudder control, and position feedback. Finally, at the nose
there is a SAM node connected to a nose probe consisting of a proprietary
design based on six independent pressure sensors that give valuable digital
information for flight control. This node also contains an ultrasonic probe
that provides information for support of the automatic take-off and landing
system. Figure 4 displays the physical layout of the nose node. The Bluetooth
module is in the lower corner, the microcontroller is on the left hand side and
the sensor hardware on the right hand side of the board.
Logical Architecture
The logical architecture of the developed system is a two-layered state
machine implementation, composed of a transport layer and an application
layer. The transport layer provides a packet delivery service, under the
control of the master node, capable of transparent delivery of data packets
across the network.
The transport layer is application independent, and interfaces with the
top level application layer by means of a data space for buffering and a set
of signaling control bits that allow total synchronization between the two
layers. The hierarchy and signaling between the two layers is represented
in Figure 5.
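The buffer-plus-signaling handshake between the two layers might be sketched as follows. The flag names and data layout here are hypothetical: the chapter only states that a shared data space and a set of signaling control bits synchronize the layers.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SharedSpace:
    """Shared data space plus signaling bits between the two layers.

    `rx_ready` and `buffer` are illustrative names, not taken from the
    chapter, which does not name its flags.
    """
    buffer: bytes = b""
    rx_ready: bool = False  # transport -> application: new packet buffered

def transport_deliver(space: SharedSpace, packet: bytes) -> None:
    # Transport layer writes an incoming packet and raises the flag.
    space.buffer = packet
    space.rx_ready = True

def application_poll(space: SharedSpace) -> Optional[bytes]:
    # Application layer consumes the packet only when signaled,
    # clearing the flag to acknowledge it.
    if not space.rx_ready:
        return None
    space.rx_ready = False
    return space.buffer

s = SharedSpace()
transport_deliver(s, b"\x01\x02")
print(application_poll(s))  # b'\x01\x02'
print(application_poll(s))  # None (flag already cleared)
```

On a real MSP430 node these flags would be bits in shared memory polled by the two state machines, but the handshake logic is the same.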
receives messages from the application layer, and sends segments to the radio
module to be transmitted over the wireless medium.
Layer 2 is application dependent, and has no knowledge of the lower layer’s
internal mechanisms, only of the services and interfaces it provides. This
means that the lower layer’s logical architecture can be reused in other
applications. For the fly-by-wireless application, the main goal of layer 2 is
to replicate a system table among all network nodes, at the maximum possible
frequency. This system table maintains all critical system values, describing
the several sensors and actuators, status parameters, and loop feedback
information. Each network node is mapped to a section of the table, where all
related variables from sensing, actuation and metering are located. This layer
is responsible for cyclically refreshing the respective table contents based
on local status, and also for cyclic actuation according to data sent from the
master node (flight controller orders). In this way, the whole system is
viewed as a two-dimensional array resident at the master, with different
partial copies distributed and synchronized among the slave nodes.
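The replicated-table scheme described above can be sketched roughly as follows. The node names, field names and update functions are illustrative, not taken from the chapter.

```python
# Sketch of the replicated system table: the master holds the full
# two-dimensional array; each slave refreshes only its own section
# (uplink), and the master pushes actuation setpoints back (downlink).
NODES = ["wing_L", "wing_R", "gps", "imu", "tail", "nose"]   # illustrative
FIELDS = ["sensor", "actuator", "status"]                     # illustrative

master_table = {n: {f: 0 for f in FIELDS} for n in NODES}

def slave_refresh(node: str, local_values: dict) -> None:
    """Cyclic uplink: a slave pushes its local section to the master."""
    master_table[node].update(local_values)

def master_command(node: str, actuator_value: int) -> dict:
    """Cyclic downlink: the master writes a setpoint and returns the
    slave's partial copy of the table for synchronization."""
    master_table[node]["actuator"] = actuator_value
    return dict(master_table[node])

slave_refresh("nose", {"sensor": 42, "status": 1})
copy = master_command("nose", 7)
print(copy)  # {'sensor': 42, 'actuator': 7, 'status': 1}
```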
former operates in the sky most of the time, while the latter are normally based
on the ground.
EXPERIMENTAL RESULTS
The performance of the developed wireless system was evaluated in the
laboratory. The experimental setup used to obtain the results presented in
this section is composed of 6 slaves sending data periodically to the master
(uplink direction) at the same predefined sampling rate. Each sampling
packet has a length of 15 octets, which is the maximum packet length
supported by the transport layer due to a limitation imposed by the virtual
machine used by the Bluetooth module.
Figure 6 presents the aggregated uplink throughput that reaches the
master node as a function of the sampling rate used by the 6 slaves. Since
Bluetooth uses a contention-free MAC protocol, the uplink transmissions are
not affected by collisions, so the network throughput increases linearly with
the offered load until the point it reaches saturation, which in this scenario
corresponds to the situation where the slaves transmit data at sampling rates
higher than 200 Hz. As this figure shows, the maximum throughput available
to the application is about 160 kbps, which is significantly lower than the gross
data rate provided by Bluetooth (1 Mbps). This difference can be explained
by the overhead introduced by the Bluetooth protocol and the virtual machine,
including the gap between the packets, the FEC (Forward Error Correction)
and ARQ (Automatic Repeat reQuest) mechanisms, the packet headers, as
well as the overhead introduced by control packets such as the POLL packet,
that is sent by the master to grant permission to slaves to transmit.
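As a rough consistency check of these figures, the application payload offered by the six slaves at the saturation point can be computed directly (payload bits only; the result sits just below the reported ~160 kbps application throughput and far below the 1 Mbps gross rate, consistent with the overhead sources listed above):

```python
# Offered payload at the reported saturation point:
# 6 slaves x 15-octet packets x 8 bits x 200 Hz sampling rate.
slaves = 6
packet_octets = 15
sampling_rate_hz = 200

offered_load_kbps = slaves * packet_octets * 8 * sampling_rate_hz / 1000
print(offered_load_kbps)  # 144.0
```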
Figure 7 presents the packet loss ratio (PLR) averaged over the 6 slaves as a
function of the sampling rate. As the figure shows, the PLR is limited to less
than 0.5 % in the region where the network is not congested, but increases
rapidly after the saturation point. The flight control application should be
able to tolerate such small losses; otherwise, a change in the supporting
wireless technology should be made in an attempt to obtain higher link
reliability.
Concerning the delay experienced by the packets as they travel from the
slaves to the master, the results showed that the delay is not adversely
affected by the rise in the offered load, as long as the network operates
below the saturation point. For sampling rates up to 200 Hz, the registered
average delay was 27 ms and the standard deviation was 16 ms.
Figure 8 presents the complementary cumulative distribution (P{X>x})
for the random variable X, which represents the delay for all the samples
collected using sampling rates in the range from 0 to 200 Hz. With this
chart, it is possible to see the probability that the delay exceeds a given delay
bound, which is an important metric for real-time applications such as the
one considered in this chapter. The chart shows, for instance, that less than
1 % of the sample packets suffer a delay higher than 90 ms, while less than
0.1 % of the packets suffer a delay higher than 120 ms.
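The P{X>x} curve in Figure 8 is an empirical complementary cumulative distribution; it can be computed from a set of delay samples as in this sketch (the sample values below are made up for illustration, not the chapter's data):

```python
def ccdf(samples, x):
    """Empirical complementary CDF: fraction of samples strictly
    greater than the delay bound x."""
    return sum(1 for s in samples if s > x) / len(samples)

# Illustrative delay samples in milliseconds (not the measured data).
delays_ms = [10, 15, 20, 25, 27, 30, 45, 60, 95, 130]
print(ccdf(delays_ms, 90))  # 0.2 -> 2 of the 10 samples exceed 90 ms
```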
Experimental tests were also made with a varying number of slaves in the
piconet (from 1 to 6), both in the uplink and downlink direction. The average
delay measured in the downlink direction (from the master to the slaves) was
slightly higher than the one registered in the uplink direction, but below 40 ms,
for the measurements made with up to 4 slaves. However, the average master-
to-slave delay with 5 slaves in the network rose to 600 ms, while with 6
slaves the performance was even worse, with the average delay reaching 1000
ms.
CONCLUSION
This chapter presented the design and development of a fly-by-wireless
UAV platform built on top of Bluetooth wireless technology. The developed
onboard wireless system is composed of one master node, connected to the
flight controller, and six slave nodes spread along the aircraft structure and
connected to several sensors and actuators.
In order to assess the suitability of the developed system, several
performance evaluation tests were carried out. The experimental results
showed that, for the slave-to-master direction, the system prototype is
able to support a sampling rate of up to 200 Hz for each of the 6 slaves
simultaneously without significant performance degradation in terms of
throughput, loss or delay. On the other hand, although the master-to-slave
delay with 1 to 4 slaves in the network is low, its value increases significantly
with 5 and 6 slaves, which is unacceptable given the real-time requirements
of the desired application.
16 Unmanned Aerial Vehicles (UAV) and Drones
REFERENCES
1. Afonso, J. A. & Neves, J. E. (2005), Fast Retransmission of Real-Time
Traffic in HIPERLAN/2 Systems, Proceedings of Advanced Industrial
Conference on Telecommunications (AICT2005), pp. 34-38, ISBN
0-7695-2388-9, Lisbon, Portugal, July 2005, IEEE Computer Society.
2. Bluetooth SIG (2003), Specification of the Bluetooth system. Available
at http://www.bluetooth.org/.
3. connectBlue (2003), Serial Port Adapter – 2nd Generation, User
Manual. Available at http://www.connectblue.com/.
4. ETSI TR 101 683 V1.1.1 (2000), Broadband Radio Access Networks
(BRAN)—HIPERLAN Type 2—Data Link Control (DLC) Layer—
Part 1: Basic Data Transport Functions.
5. IEEE Std 802.11 (2007), IEEE Standard for Information Technology—
Telecommunications and information exchange between systems—
Local and metropolitan area networks—Specific requirements—
Part11: Wireless LAN Medium Access Control (MAC) and Physical
Layer (PHY) Specifications.
6. IEEE Std 802.15.4 (2006), IEEE Standard for Information Technology—
Telecommunications and information exchange between systems—
Local and metropolitan area networks—Specific requirements—Part
15.4: Wireless Medium Access Control (MAC) and Physical Layer
(PHY) Specifications for Low-Rate Wireless Personal Area Networks
(WPANs).
7. ZigBee Standards Organization (2006), ZigBee Specification.
Available at http://www.zigbee.org/.
CHAPTER
2
A Review of Control Algorithms for
Autonomous Quadrotors
ABSTRACT
The quadrotor unmanned aerial vehicle is a great platform for control
systems research as its nonlinear nature and under-actuated configuration
make it ideal to synthesize and analyze control algorithms. After a brief
explanation of the system, several algorithms have been analyzed including
their advantages and disadvantages: PID, Linear Quadratic Regulator
Citation: Zulu, A. and John, S. (2014), “A Review of Control Algorithms for Au-
tonomous Quadrotors”. Open Journal of Applied Sciences, 4, 547-556. doi: 10.4236/
ojapps.2014.414053.
Copyright: © 2014 by authors and Scientific Research Publishing Inc. This work is li-
censed under the Creative Commons Attribution International License (CC BY). http://
creativecommons.org/licenses/by/4.0
INTRODUCTION
Quadrotor unmanned aerial vehicles (UAVs) are helicopters with four
rotors, typically designed in a cross configuration, with one pair of opposite
rotors rotating clockwise and the other pair rotating counter-clockwise
to balance the torque. The roll, pitch, yaw and up-thrust actions are controlled
by changing the thrusts of the rotors using pulse width modulation (PWM)
to give the desired output.
Civilian uses of unmanned/micro aerial vehicles (UAVs/MAVs) include
aerial photography for mapping and news coverage, inspection of power
lines, atmospheric analysis for weather forecasts, traffic monitoring in urban
areas, crop monitoring and spraying, border patrol, surveillance for illegal
imports and exports, fire detection and control, and search and rescue
operations for missing persons and natural disasters.
In this work, the focus is to realize the best configuration and control
scheme of a quadrotor MAV for application to game counting in the
protected game reserves of Africa. The current situation in most African
countries is that researchers and game rangers either count from road
vehicles by driving along the roads or use conventional helicopters. The former
is ineffective, as the count is limited to animals that come close to the road,
whereas the latter leads to inaccurate counts, as the animals run for safety
when a helicopter hovers above them. Several advantages accrue to the
quadrotor, which has attracted much attention from researchers, mainly
due to its simple design, high agility and maneuverability, relatively good
payload capacity, and vertical take-off and landing (VTOL) ability.
The quadrotor does not have complex mechanical control linkages
because it relies on fixed-pitch rotors and uses variation in motor
speed for vehicle control [1]. However, these advantages come at a price,
as controlling a quadrotor is not easy because of its coupled dynamics
and commonly under-actuated design configuration [2]. In addition, the
MATHEMATICAL MODEL
In this section, a brief description of the main mathematical model of the
quadrotor is given.
In flight mode, the quadrotor is able to generate its lift forces by controlling
the fixed pitch angle rotor speeds. Figure 1 presents the schematic diagram.
The center of mass is the origin O of the coordinate system (see Figure 1)
and the forward direction is arbitrarily fixed along the x-axis. To lift off from
ground, all four rotors are rotated at the same speed in the sense shown. A
total thrust equal to the weight of the system stabilizes the system in hover.
To roll about the x-axis, differential thrust of rotors 2 and 4 is applied. To
pitch about the y-axis, differential thrust of rotors 1 and 3 is applied. The
yaw motion is achieved by the differential thrusts of the opposite rotors (1/3
or 2/4) while also adjusting the constant thrusts of the remaining opposite
pairs to maintain altitude.
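These thrust-differential rules can be summarized in a simple motor-mixing sketch for the cross configuration of Figure 1. The sign conventions below are an assumption, since they depend on the rotor numbering in the figure:

```python
def mix(thrust, roll, pitch, yaw):
    """Map collective thrust and roll/pitch/yaw commands to the four
    rotor commands. Rotors 1/3 form the pitch pair (differential about
    the y-axis), rotors 2/4 the roll pair (differential about the
    x-axis); yaw uses the torque imbalance between the two pairs."""
    m1 = thrust + pitch + yaw
    m3 = thrust - pitch + yaw
    m2 = thrust + roll - yaw
    m4 = thrust - roll - yaw
    return m1, m2, m3, m4

# In hover all commands are equal; a pitch command acts only on rotors 1/3
hover = mix(1.0, 0.0, 0.0, 0.0)   # -> (1.0, 1.0, 1.0, 1.0)
```

Note that roll, pitch and yaw commands each cancel in the sum of the four outputs, so attitude commands do not disturb the total thrust that maintains altitude.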
Using the Newton-Euler formalism, the equations of motion of the
quadrotor are given in [4]:
where Φ, θ and ψ are the roll, pitch and yaw angles respectively; Ix, Iy and Iz are
the mass moments of inertia about the x, y and z axes respectively; Jr is the rotor inertia;
Ω is the angular velocity of the rotor; l is the length of the rotor arm from the origin of
the coordinate system; and the inputs to the four rotors are given by the expressions:
where b and d are thrust and drag coefficients respectively. For detailed
derivation of the equations, consult [4].
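Since the equations themselves appear as figures in the original, the rotational part of this model, following Bouabdallah's derivation in [4] and reproduced here from the standard form rather than the original typesetting, is commonly written as:

```latex
\ddot{\phi}   = \dot{\theta}\dot{\psi}\,\frac{I_y - I_z}{I_x}
              - \frac{J_r}{I_x}\,\dot{\theta}\,\Omega + \frac{l}{I_x}\,U_2 \\
\ddot{\theta} = \dot{\phi}\dot{\psi}\,\frac{I_z - I_x}{I_y}
              + \frac{J_r}{I_y}\,\dot{\phi}\,\Omega + \frac{l}{I_y}\,U_3 \\
\ddot{\psi}   = \dot{\phi}\dot{\theta}\,\frac{I_x - I_y}{I_z} + \frac{1}{I_z}\,U_4
```

with the rotor inputs and residual rotor speed

```latex
U_1 = b\,(\Omega_1^2 + \Omega_2^2 + \Omega_3^2 + \Omega_4^2), \quad
U_2 = b\,(\Omega_4^2 - \Omega_2^2), \quad
U_3 = b\,(\Omega_3^2 - \Omega_1^2), \\
U_4 = d\,(\Omega_2^2 + \Omega_4^2 - \Omega_1^2 - \Omega_3^2), \quad
\Omega = \Omega_2 + \Omega_4 - \Omega_1 - \Omega_3
```

This matches the control assignments described above: U2 uses the roll pair (rotors 2/4), U3 the pitch pair (rotors 1/3), and U4 the torque imbalance between the two pairs.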
The major challenges with the quadrotor include the nonlinearity associated
with the mathematical model and the imprecise nature of the model due to
unmodeled or inaccurately modeled dynamics. Therefore, applying a PID
controller to the quadrotor limits its performance.
A PID controller was used for the attitude control of a quadrotor in [6],
while a dynamic surface control (DSC) was used for the altitude control.
Applying Lyapunov stability criteria, Lee et al. were able to prove that all
signals of the quadrotor were uniformly ultimately bounded. This meant that
the quadrotor was stable in hovering conditions. The simulation and
experimental plots, however, reveal that the PID controller performed
better in the pitch angle tracking, whereas large steady-state errors could be
observed in the roll angle tracking.
In another work by Li and Li [7], a PID controller was applied to regulate
both position and orientation of a quadrotor. The PID parameter gains were
chosen intuitively. The performance of the PID controller indicated relatively
good attitude stabilization. The response time was good, with almost zero
steady state error and with a slight overshoot.
It is generally established in the literature that the PID controller has
been successfully applied to the quadrotor though with some limitations.
The tuning of the PID controller could pose some challenges as this must
be conducted around the equilibrium point, which is the hover point, to give
good performance.
Figure 2 shows the general block diagram of a PID controller for the
quadrotor.
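The loop in Figure 2 can be sketched as a discrete PID controller for one attitude axis. The gains, time step and names below are illustrative, not values from the papers reviewed:

```python
class PID:
    """Discrete PID controller for one attitude axis."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# One roll-angle control step at 100 Hz with untuned example gains
roll_pid = PID(kp=4.0, ki=0.5, kd=1.2, dt=0.01)
u_roll = roll_pid.update(setpoint=0.0, measurement=0.1)
```

As noted above, such a controller is tuned around the hover equilibrium; away from it the unmodeled nonlinear dynamics degrade its performance.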
Feedback Linearization
Feedback linearization control algorithms transform a nonlinear system
model into an equivalent linear system through a change of variables.
Some limitation of feedback linearization is the loss of precision due to
linearization and requiring an exact model for implementation. Output
feedback linearization was implemented as an adaptive control strategy in
[16] for stabilization and trajectory tracking on a quadrotor that had dynamic
changes of its center of gravity. The controller was able to stabilize the
quadrotor and reconfigure it in real time when the center of gravity changed.
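The change of variables behind feedback linearization can be illustrated on a scalar toy system xddot = a(x) + b(x)*u (not the quadrotor model): choosing u = (v - a(x))/b(x) cancels the nonlinearity, leaving the linear system xddot = v, for which a linear controller can then be designed. A minimal sketch with assumed toy dynamics:

```python
import math

def a(x):
    """Toy nonlinear drift term (illustrative)."""
    return math.sin(x)

def b(x):
    """Toy input gain, assumed nonzero everywhere."""
    return 2.0 + math.cos(x)

def linearizing_input(x, v):
    """With u = (v - a(x)) / b(x), the dynamics
    xddot = a(x) + b(x) * u reduce exactly to xddot = v."""
    return (v - a(x)) / b(x)

x, v = 0.7, -1.5
u = linearizing_input(x, v)
xddot = a(x) + b(x) * u   # equals the commanded v
```

The cancellation is exact only when a(x) and b(x) are known exactly, which is precisely the sensitivity to model error and sensor noise discussed below.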
Feedback linearization and input dynamic inversion were implemented
by Roza and Maggiore in [23] to design a path-following controller which
allowed the designer to specify the speed profile and yaw angle as a function
of displacement along the path. Two simulation cases with the quadrotor
travelling at different speeds along the path were considered. Both cases
showed convergence of velocity and yaw angle.
Feedback linearization was compared with adaptive sliding mode control
in [24]. The feedback controller, with simplified dynamics, was found to be
very sensitive to sensor noise and not robust. The sliding mode controller
performed well under noisy conditions and adaptation was able to estimate
uncertainties such as ground effect.
Hence, feedback linearization shows good tracking but poor disturbance
rejection. However, feedback linearization applied together with another
algorithm that is less sensitive to noise gives good performance.
with respect to achieving desired attitude and reducing weight drift. Output
feedback control was implemented on a quadrotor using neural networks [27]
for leader-follower quadrotor formation to learn the complete dynamics of
the UAV including un-modeled dynamics. A virtual neural network control
was implemented to control all six degrees of freedom from four control
inputs. Using Lyapunov stability theory, it was shown that the position,
orientation, velocity tracking errors, observer estimation errors and virtual
control were semi-globally uniformly ultimately bounded.
An adaptive neural network scheme was applied in [28] for quadrotor
stabilization in the presence of a sinusoidal disturbance. The proposed
solution of two parallel single hidden layers proved fruitful as reduced
tracking error and no weight drift were achieved.
Figure 6 shows the general block diagram of a fuzzy logic controller
implemented on the quadrotor.
whilst maintaining stability of roll and pitch angles. Neural networks were
used to compensate for unmodeled dynamics. The major contribution of the
paper was the fact that the controller did not require the dynamic model and
other parameters. This meant greater versatility and robustness.
This paper has reviewed several common control algorithms that have
REFERENCES
1. Huo, X., Huo, M. and Karimi, H.R. (2014) Attitude Stabilization
Control of a Quadrotor UAV by Using Backstepping Approach.
Mathematical Problems in Engineering, 2014, 1-9.
2. Mahony, R., Kumar, V. and Corke, P. (2012) Multirotor Aerial Vehicles:
Modeling, Estimation, and Control of Quadrotor. Robotics Automation
Magazine, 19, 20-32. http://dx.doi.org/10.1109/MRA.2012.2206474
3. Lee, B.-Y., Lee, H.-I. and Tahk, M.-J. (2013) Analysis of Adaptive
Control Using On-Line Neural Networks for a Quadrotor UAV.
13th International Conference on Control, Automation and Systems
(ICCAS), 20-23 October 2013, 1840-1844.
4. Bouabdallah, S. (2007) Design and Control of Quadrotors with
Application to Autonomous Flying. Ph.D. Dissertation, Lausanne
Polytechnic University, Lausanne.
5. John, S. (2013) Artificial Intelligent-Based Feedforward Optimized
PID Wheel Slip Controller. AFRICON, 12 September 2013, Pointe-
Aux-Piments, 1-6.
6. Lee, K.U., Kim, H.S., Park, J.-B. and Choi, Y.-H. (2012) Hovering
Control of a Quadrotor. 12th International Conference on Control,
Automation and Systems (ICCAS), 17-21 October 2012, 162-167.
7. Li, J. and Li, Y. (2011) Dynamic Analysis and PID Control for a
Quadrotor. International Conference on Mechatronics and Automation
(ICMA), 7-10 August 2011, 573-578.
8. Bouabdallah, S., Noth, A. and Siegwart, R. (2004) PID vs LQ Control
Techniques Applied to an Indoor Micro Quadrotor. Proceedings of
IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS 2004), Vol. 3, 28 September-2 October 2004, 2451-2456.
9. Cowling, I.D., Yakimenko, O.A., Whidborne, J.F. and Cooke, A.K.
(2007) A Prototype of an Autonomous Controller for a Quadrotor UAV.
European Control Conference, July 2007, Kos, 1-8.
10. Minh, L.D. and Ha, C. (2010) Modeling and Control of Quadrotor
MAV Using Vision-Based Measurement. International Forum on
Strategic Technology (IFOST), 13-15 October 2010, 70-75.
11. Xu, R. and Ozguner, U. (2006) Sliding Mode Control of a Quadrotor
Helicopter. Proceedings of the 45th IEEE Conference on Decision and
Control, San Diego, 13-15 December 2006, 4957-4962.
12. Runcharoon, K. and Srichatrapimuk, V. (2013) Sliding Mode Control
32. Cetinsoy, E., Dikyar, S., Hancer, C., Oner, K.T., Sirimoglu, E., Unel, M.
and Aksit, M.F. (2012) Design and Construction of a Novel Quad Tilt-
Wing UAV. Mechatronics, 22, 723-745. http://dx.doi.org/10.1016/j.
mechatronics.2012.03.003
33. Ryll, M., Bulthoff, H. and Giordano, P. (2012) Modeling and Control
of a Quadrotor UAV with Tilting Propellers. Proceedings of the 2012
IEEE International Conference on Robotics and Automation (ICRA),
Saint Paul, 14-18 May 2012, 4606-4613.
34. Senkul, F. and Altug, E. (2013) Modeling and Control of a Novel Tilt—
Roll Rotor Quadrotor UAV. Proceedings of the 2013 International
Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, 28-31
May 2013, 1071-1076.
CHAPTER
3
Central Command Architecture for
High-Order Autonomous Unmanned
Aerial Systems
ABSTRACT
This paper is the first in a two-part series that introduces an easy-to-implement
central command architecture for high-order autonomous unmanned aerial
systems. This paper discusses the development and the second paper
presents the flight test results. As shown in this paper, the central command
architecture consists of a central command block, an autonomous planning
block, and an autonomous flight controls block. The central command block
includes a staging process that converts an objective into tasks independent
of the vehicle (agent). The autonomous planning block contains a non-
iterative sequence of algorithms that govern routing, vehicle assignment,
and deconfliction. The autonomous flight controls block employs modern
controls principles, dividing the control input into a guidance part and a
regulation part. A novel feature of high-order central command, as this paper
shows, is the elimination of operator-directed vehicle tasking and the manner
in which deconfliction is treated. A detailed example illustrates different
features of the architecture.
Keywords: Unmanned Aerial Vehicles, Autonomy, Central Command,
High-Order Systems, Deconfliction
INTRODUCTION
Unmanned aerial vehicles (UAVs) are gaining interest as restrictions
in civilian airspaces are relaxed. Looking ahead, a systematic approach is
needed for autonomous unmanned aerial systems (AUAS) consisting of a
large number of vehicles (agents). This paper, which is the first in a series
of two papers, presents an easy-to-implement architecture for high-order
AUAS; the second paper presents flight test results. The different AUAS
approaches are referred to in this paper as command approaches with
central command at one extreme and individual command at the other. In
central command, the AUAS authority lies in one entity, and the vehicle
requires only limited information about its environment. Central command
is naturally suited to problems dictated by global objectives, such as search
and surveillance in precision agriculture, border security, law enforcement,
and wildlife and forestry services, to name a few. In individual command, the
authority lies in the individual vehicle. The vehicle requires local situational
awareness, communication with other vehicles, and understanding of system
objectives. Individual command is naturally suited to problems dictated
by local objectives that require coordination, like those found in air traffic
control problems and in coordinated pick and place problems.
When an AUAS operates multiple vehicles in close proximity, some
method of preventing collisions becomes necessary. Collision avoidance,
or the real-time act of recognizing and maneuvering to avoid an impending
conflict, also called “see and avoid”, is a fundamental principle of current
flight operations and will be a requirement of future operations that share
manned airspace. In contrast, deconfliction, or the act of planning paths in
METHOD
Staging
As shown in Figure 1, a user is provided information from an operational
area and other field parameters, and data Cj (j = 1, 2, …) generated by AUAV
payload Pj (j = 1, 2, …). Using this and other information, the user commands
an objective. In the absence of fully autonomous planning algorithms that
remove the vehicle from the calculus, user load would be, at the very least,
proportional to the number of vehicles, which is prohibitive in high-order
systems. Thus a critical feature of the HOCC architecture introduced in
this paper is the decoupling of the objective from the vehicles and, instead,
the expression of the commands in terms of a relatively small number of
parameters. Operator commanded dynamic re-tasking of individual vehicles
no longer applies.
The staging process converts an objective into a set of tasks. This
paper considers spatial staging, such as passing over a point (for imagery
or delivery), a line scan (following a line segment) which could be part
of covering a wider region, and loitering (orbiting a point) which could
arise when a vehicle is waiting for further instructions or to monitor a point
of interest. The spatial tasks have starting points, so the conversion of an
objective into a set of tasks determines a set of spatial points that vehicles
need to reach. During the staging process the candidate routes are unknown
and the assignment of vehicles to tasks is unknown.
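As a concrete illustration, a search-area objective can be staged into line-scan tasks whose segment end points become the spatial points vehicles must reach. This is a sketch with assumed names; the paper describes the staging process abstractly:

```python
def stage_search_area(x0, y0, width, height, n_lines):
    """Convert a rectangular search-area objective into line-scan tasks.
    Each task keeps both segment end points as admissible start points,
    since a vehicle may follow a line from either end."""
    tasks = []
    for i in range(n_lines):
        y = y0 + (height * i / (n_lines - 1) if n_lines > 1 else 0.0)
        tasks.append({"kind": "line_scan",
                      "start_points": ((x0, y), (x0 + width, y))})
    return tasks

tasks = stage_search_area(0.0, 0.0, 100.0, 40.0, 5)  # five line-segment tasks
```

Note that the task list carries no vehicle identity, in line with the decoupling of objectives from vehicles described above.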
Routing
Autonomous planning is the second block of the HOCC architecture and
routing is its first process. Support tasks, such as launching or recovering
and refueling, are conducted by the administrator (A) who operates in the
background. The routing process itself is performed in two parts, mapping
the environment and choosing a path [10]. The first part produces a graph of
a vehicle-task route segment. In this paper, the Visibility Graph method [11]
is employed. Briefly, this method produces a graph of possible connections
between vehicles and obstacle vertices, tasks and vertices, and between the
vertices themselves, representing a graph of possible routes through the
environment. Note that search areas are regarded as obstacles to prevent
routes from passing through them, which can otherwise lead to conflicts.
After determining the graph between a vehicle point and a task, the second
part is to find the shortest route between them. This part is performed by the
A* method [12]. This method does not first find the different paths from
which the shortest path could be found. Instead, it searches in the direction
of the task, and can leave portions of the graph unsearched. The shortest
vehicle-task routes are collected into a matrix of vehicle-task route lengths
used in vehicle assignment. New routing is calculated each time a vehicle
completes a task and when a new objective is initiated.
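The route search over the visibility graph can be sketched with a standard A* implementation using the straight-line distance as an admissible heuristic. Graph construction itself is omitted and the names are illustrative:

```python
import heapq, math

def astar(graph, coords, start, goal):
    """A* shortest path.  graph: {node: [(neighbor, edge_length), ...]};
    coords: {node: (x, y)} for the Euclidean heuristic, which never
    overestimates when edge lengths are geometric distances."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    frontier = [(h(start), 0.0, start, [start])]
    best_g = {}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if best_g.get(node, math.inf) <= g:
            continue
        best_g[node] = g
        for nbr, w in graph.get(node, ()):
            heapq.heappush(frontier, (g + w + h(nbr), g + w, nbr, path + [nbr]))
    return math.inf, []

# Tiny graph: the detour through D is longer than the route via B
coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (1, 1)}
graph = {"A": [("B", 1.0), ("D", math.hypot(1, 1))],
         "B": [("C", 1.0)],
         "D": [("C", math.hypot(1, 1))]}
length, route = astar(graph, coords, "A", "C")   # -> (2.0, ["A", "B", "C"])
```

As the text notes, the heuristic lets the search expand toward the task and leave portions of the graph unexplored, which is what makes it cheaper than enumerating all routes.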
Vehicle Assignment
A frequent goal in AUAS path planning, because of limited fuel and flight
duration requirements, is to determine vehicle routes by minimizing distance
of travel of individual vehicles. An important variation on this goal, which is
introduced in this paper, is the minimization of total distance of travel of all
of the vehicles (a min-sum assignment). The minimization of total distance
of travel leads to an importing routing principle: The routes determined by
minimizing total distance of travel do not cross. The mathematical proof of
this routing principle is presented in the Appendix. Although non-crossing is
guaranteed, the paths can still touch obstacles. Indeed, vehicle-task segments
can share vertices of obstacles, as shown in Figure 2 below.
As shown, two vehicle task routes share vertex C. The vehicle at A travels
to C and then to A*. The vehicle at B travels to C and then to B*. Notice that
the total distance of travel after the shared vertex, CA* + CB*, is the same
whether A travels to A* or to B*. Indeed, the solution to the minimization
problem is indeterminate when two routes share a vertex. The presence of
this indeterminacy, however, presents an opportunity. It allows the algorithm
to ensure that the vehicles do not conflict after passing the shared vertex C,
for example when a task at A* requires a vehicle to loiter for a short time, thereby
staying very near the path of the vehicle proceeding to B*. Assume that the
vehicles begin at the same time and that vehicle B reaches C before vehicle
A. The algorithm selects the vehicle that first reaches vertex C to travel to
the farthest point, in this case, the vehicle at B travels to B*. The inspection
of a potential conflict associated with a separation distance falling below
Deconfliction
As described above and proven in the Appendix, potential conflicts in this
HOCC architecture are treated mainly by avoiding them in the first place (in
contrast with collision avoidance). As stated earlier, this was accomplished
by utilizing the non-crossing property of a min-sum assignment. Non-
crossing is a stronger statement than deconfliction: a deconflicted route
means that two vehicles must not occupy the same space at the same time,
whereas the non-crossing property guarantees that the paths do not cross at
any point along their entire lengths. This decouples vehicle speed from the path planning
and is useful in solving the potential conflicts in the vicinity of obstacle
vertices. By loosening any initial assumptions about vehicle speed, shared
where KGk is the guidance operator, KRk is the regulator operator, and
where the control input is the sum of a guidance input uGk, a regulator input
uRk, and a disturbance input uDk.
This is called the modern control form [15]. Note that the operator
notation above does not specify the realization, that is, whether the system
is represented by differential equations, difference equations, algebraic
equations, or a hybrid. Under ideal conditions (KGk = Lk and the guidance
input is smooth), the guidance input causes the vehicle to follow the
guidance state vector exactly. It follows that the input-output relationship
for the vehicle, the error, and the characteristic equation are:
where I is the identity operator and ek is the state error vector. The
characteristic equation is independent of the guidance state vector. Therefore,
the regulator gain matrix GRk determines the vehicle’s stability characteristics
(settling time, peak-overshoot, and steady-state error) independent of the
guidance state vector rk.
These best practices pertaining to autonomous flight controls are
summarized as an AUAS Flight Controls Principle: Assume that a realizable
guidance path has been produced and that a vehicle has the control authority
capable of maintaining the guidance path within certain limits placed on
settling time, peak overshoot, and steady-state error.
Then autonomous guidance and autonomous regulation can be designed
by separate and distinct methods.
Following best practices in autonomous flight controls leads to vehicle
paths that more closely follow their guidance paths, which is of particular
interest in autonomous systems that are not instrumented with on-board
collision avoidance systems.
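The separation principle above can be illustrated on a first-order kinematic vehicle xdot = u tracking a guidance state r(t): the guidance input is the feedforward rdot, the regulator is feedback on the state error, and the error dynamics edot = -kr*e are independent of r, exactly as the principle states. This is a toy sketch, not the paper's vehicle model:

```python
def step(x, r, rdot, kr, dt):
    """One Euler step of guidance-plus-regulation control of xdot = u.
    u = rdot (guidance part) + kr * (r - x) (regulation part); the
    resulting error e = r - x obeys edot = -kr * e, independent of r."""
    u = rdot + kr * (r - x)
    return x + u * dt

# Track the ramp r(t) = t starting from an initial error of magnitude 1.0
x, kr, dt = 1.0, 5.0, 0.01
for k in range(200):
    t = k * dt
    x = step(x, r=t, rdot=1.0, kr=kr, dt=dt)
error = (200 * dt) - x   # |error| has decayed from 1.0 to well under 1e-3
```

Because the error dynamics do not involve r, the regulator gain can be tuned for settling time and overshoot independently of whatever guidance path the planner produces.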
RESULTS
Problem Statement
Four vehicles loiter in an air space that contains a no-fly zone (obstacle), as
shown in Figure 3. At time t = 0, an AUAS user commands the AUAS to
search area A. At time t = T, while the vehicles are searching area A, the user
gives a second command to search area B. The AUAS, once completing its
two objectives, is commanded to return to the original loiter points.
HOCC Solution
The scenario begins with the AUAS user commanding the first objective
to survey area A. The command to survey area B is made later. The staging
process converts the objective to search area A into 5 line segment tasks
after which the vehicles are tasked to return to their original loiter points
(see Figure 4). Each line segment is regarded as a task and a vehicle can
follow any line segment from the left or from the right.
Figure 4. Part of the graph between a vehicle and the end point of a task.
Figure 4 shows the first step in determining the shortest route between any
one vehicle point and the end points associated with a task. There are two end
points associated with each task since the vehicle is allowed here to follow
a line segment from the left or the right. Consider the vehicle point and the
pair of end points associated with the bottom line segment, as shown. The
graph connects solid dots that designate allowable points and the squares that
designate obstacle vertices (points designated by circles are not allowed).
This figure only shows some of the possible connections in the graph. The
first step in determining the best route, when employing the visibility graph
method, is to produce a graph that connects the vehicle point to the two
end points. The number of different routes along the segments, as shown,
is a combinatorial problem, and even enumerating all of the possibilities
is computationally expensive in high-order AUAS. As an alternative, the
visibility graph method is used, recognizing that it does not consider all of
the possibilities.
After determining the possible graphs between a vehicle point and a
task, the next step is to find the shortest route between them. The shortest
vehicle-task routes between one of the vehicles and each of the five tasks
are shown in Figure 5. There are comparable shortest vehicle-task routes
between each of the other vehicles and the five tasks, as well (not shown).
It happens that the best end points are each on the right side of the search
area for the vehicle shown. You will see in Figure 6 that there is a different
vehicle for which its shortest vehicle task route is to an end point located on
the left side of the search area.
Once the shortest vehicle-task routes are determined, vehicles need to be
assigned to tasks. This is done by minimizing the total distance of travel of
the vehicles (see Figure 6).
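The min-sum assignment can be sketched by brute force for a small team (a real high-order system would use the Hungarian algorithm instead; the cost matrix below is illustrative, not the paper's data):

```python
from itertools import permutations

def min_sum_assignment(cost):
    """Assign vehicle i to task perm[i], minimizing the TOTAL distance
    of travel over all vehicles (min-sum), as opposed to minimizing
    each vehicle's own route independently."""
    n = len(cost)
    best_perm, best_total = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_perm, best_total = perm, total
    return best_perm, best_total

# Illustrative 3x3 matrix of vehicle-task route lengths
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]
assignment, total = min_sum_assignment(cost)   # -> (1, 0, 2), total 5.0
```

The cost entries here are the shortest vehicle-task route lengths produced by the routing step, so the non-crossing property proved in the Appendix applies to the resulting routes.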
In Figure 7, the vehicles are shown on their way to their first task when
a second objective arrives. The staging process turns the second search area
into 5 new line segment tasks. The vehicles are shown at the instant that the
new objective arrives.
As shown, two tasks are currently being executed, and their end-points
are no longer available to be assigned. The points that are available are
shown as solid dots. When no longer available, the end-points are indicated
by circles. Also, notice that past routes are indicated by thin lines and that
the proposed routes are indicated by thick lines. As shown, there will be a
total of ten tasks and two objectives so the total number of time intervals
will be twelve.
Figure 8 is shown at the same instant as Figure 7 but the newly calculated
routes are now shown. As shown, only the assignment of the green vehicle
changes; it now heads to the new search area. Note that vehicles in the process
of executing a task are assigned to new tasks from the end point of the
task currently being executed. When the new objective arrived, the blue and
yellow vehicles were performing tasks, so their proposed paths beyond their
current tasks were also included in the calculation. The proposed route of
the yellow vehicle is to the left end point of the task just below its current
task and the proposed route of the blue vehicle is to the right end of the task
just above its current task. The red vehicle, after the new calculation, is
assigned to the same end point as in the previous time interval.
Figure 5. Five shortest vehicle-task paths for one of the vehicles during the
1st time interval.
Figure 7. A new objective during the 1st time interval initiates the 2nd time
interval.
In Figure 11, the green vehicle has just completed its task, triggering the
9th time interval. The green vehicle has been assigned the task of following
the middle task in search area B, completing the available tasks and leaving
the red vehicle unassigned. This unassigned vehicle is assigned to a loiter
point to await further tasks.
In Figure 12, the green vehicle has just completed a task, triggering the
10th time interval. During this time interval, the bottom left vertex of the
2nd search area is a shared vertex. The green vehicle arrives at the vertex
before the blue vehicle so the green vehicle returns to the loiter point that is
farther from the vertex.
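The tie-breaking rule applied here (the vehicle that reaches the shared vertex first is sent to the destination farther from it) can be sketched as follows; the names and the two-vehicle restriction are illustrative:

```python
import math

def resolve_shared_vertex(vertex, arrival_times, destinations):
    """Assign two vehicles passing a shared vertex to two destinations:
    the earlier arrival takes the destination FARTHER from the vertex,
    so the paths beyond the vertex cannot conflict."""
    def dist(p):
        return math.hypot(p[0] - vertex[0], p[1] - vertex[1])
    first, second = sorted(arrival_times, key=arrival_times.get)
    far, near = sorted(destinations, key=dist, reverse=True)
    return {first: far, second: near}

# Vehicle "B" reaches the vertex first, so it gets the farther point
plan = resolve_shared_vertex(vertex=(0.0, 0.0),
                             arrival_times={"A": 2.0, "B": 1.0},
                             destinations=[(1.0, 0.0), (3.0, 0.0)])
```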
In Figure 13, the yellow vehicle completes the last task, triggering the
12th time interval. The red vehicle is already at a loiter point, the green vehicle
is en route to a loiter point, leaving the yellow and blue vehicles that happen
to be tasked to pass the bottom left vertex in search area B. The blue vehicle
reaches the vertex first, so it is routed to the farthest remaining vertex.
REFERENCES
1. Bellingham, J., Tillerson, M., Richards, A. and How, J.P. (2003) Multi-
Task Allocation and Path Planning for Cooperative UAVs. In: Butenko,
S., Murphey, R. and Pardalos, P.M., Eds., Cooperative Control: Models,
Applications and Algorithms, Kluwer Academic Publishers, Boston,
23-39.
2. Cummings, M.L., Bruni, S., Mercier, S. and Mitchell, P.J. (2007)
Automation Architecture for Single Operator, Multiple UAV Command
and Control. The International Command and Control Journal, 1, 1-24.
3. Shima, T. and Rasmussen, S. (Eds.) (2009) UAV Cooperative
Decision and Control. Society for Industrial and Applied Mathematics,
Philadelphia. http://dx.doi.org/10.1137/1.9780898718584
4. Adams, J.A., Humphrey, C.M., Goodrich, M.A., Cooper, J.L., Morse,
B.S., Engh, C. and Rasmussen, N. (2009) Cognitive Task Analysis for
Developing Unmanned Aerial Vehicle Wilderness Search Support.
Journal of Cognitive Engineering and Decision Making, 3, 1-26. http://
dx.doi.org/10.1518/155534309X431926
5. Donmez, B.D., Nehme, C. and Cummings, M.L. (2010) Modeling
Workload Impact in Multiple Unmanned Vehicle Supervisory
Control. IEEE Transactions on Systems, Man, and Cybernetics, Part
A: Systems and Humans, 40, 1180-1190. http://dx.doi.org/10.1109/
TSMCA.2010.2046731
6. Gardiner, B., Ahmad, W., Cooper, T., Haveard, M., Holt, J. and Biaz, S.
(2011) Collision Avoidance Techniques for Unmanned Aerial Vehicles.
Auburn University, Auburn.
7. How, J., King, E. and Kuwata, Y. (2004) Flight Demonstrations
of Cooperative Control for UAV Teams. AIAA 3rd “Unmanned
Unlimited” Technical Conference, Workshop and Exhibit, Chicago,
20-23 September 2004.
8. Edwards, D. and Silverberg, L.M. (2010) Autonomous Soaring: The
Montague Cross-Country Challenge. Journal of Aircraft, 47, 1763-
1769. http://dx.doi.org/10.2514/1.C000287
9. Levedahl, B. and Silverberg, L.M. (2005) Autonomous Coordination
of Aircraft Formations Using Direct and Nearest-Neighbor Approaches.
Journal of Aircraft, 42, 469-477. http://dx.doi.org/10.2514/1.6868
10. Tsourdos, A., White, B. and Shanmugavel, M. (2011) Cooperative Path
Planning of Unmanned Aerial Vehicles. Wiley, New York.
CHAPTER
4
Quadcopter Design for Payload
Delivery
ABSTRACT
This paper presents the design and implementation of a quadcopter capable
of payload delivery. A quadcopter is a unique unmanned aerial vehicle
which has the capability of vertical take-off and landing. In this design, the
quadcopter was controlled wirelessly from a ground control station using
INTRODUCTION
Quadcopters are unique unmanned aerial vehicles which have a vertical
take-off and landing ability [1]. In the past, they were primarily used by the
military, deployed in hostile territory to reduce pilot losses.
The recent reduction in cost and feature size of semi-conductor logic has
led to the application of quadcopters in many other areas like weather
monitoring, forest fire detection, traffic control, cargo transport, emergency
search and rescue, communication relaying, etc. Quadcopters operate in
fixed rotor propulsion mode, an advantage over the traditional helicopter.
They have four rotors which are arranged in such a way that rotors on
transverse ends rotate in the same direction, while the other two operate
in the opposite direction [1]. Changing the attitude parameters (roll, pitch
and yaw) and the altitude determines the behavior and position of the
quadcopter; the throttle applied to each rotor changes its attitude. UAVs can
be broadly classified into two categories, fixed wing and rotary wing, each
with its own strengths and weaknesses. For instance, fixed-wing UAVs
usually offer high speed and heavy payload capacity, but they must maintain
continuous forward motion to remain in the air [2].
The applications for quadcopters have grown significantly in recent years,
and related work is summarized here. Reference [3] presented an altitude
controller design for real-time UAV applications; the author described a
controller design method for the hovering control of UAVs and automatic
vertical take-off systems. In 2012, the attitude and altitude dynamics of an
outdoor quadrotor were controlled using two different structures of
proportional-integral-derivative (PID) controllers [4]; tests showed that both
structures were successful. Reference [5] presents a quadcopter for
surveillance monitoring. In [6], the design and control of VertiKUL, a
Vertical Take-Off and Landing (VTOL) transitioning tail-sitter Unmanned
Aerial Vehicle (UAV), was presented. That UAV is capable of hover and
forward flight, and its main application is parcel delivery.
METHODOLOGY
In this work, a quadcopter was first modeled mathematically; a PID
controller was then designed and applied to the model in a MATLAB
simulation. The PID controller parameters were then applied to the real
system, and finally the outputs of the simulation and of the real system were
compared. In this paper, the attitude and altitude dynamics of the quadcopter
were considered; therefore, the roll, pitch, yaw, and altitude dynamics were
modelled.
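A discrete PID loop of the kind applied to each modelled state can be sketched as follows. This is an illustrative Python sketch, not the paper's MATLAB implementation, and the gains shown are placeholders rather than the tuned values of Tables 1 and 2:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# One loop per controlled state, e.g. an altitude loop with placeholder gains:
altitude_pid = PID(kp=2.0, ki=0.1, kd=0.8)
correction = altitude_pid.update(error=1.5, dt=0.01)
```

In practice one such loop is instantiated for each of roll, pitch, yaw and altitude, which is why the later discussion counts closed-loop systems per axis.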
(2.2)
The angular velocity of the quadcopter can be written as:
(2.3)
(2.7)
(2.8)
(2.9)
(2.10)
Therefore,
(2.11)
Hence, the state vector and state equations considering the yaw, roll and
pitch dynamics of the platform are expressed below:
(2.12)
(2.13)
Also, by applying the force and moment balance laws, we have the
following quadcopter motion formulation:
(2.14)
(2.15)
(2.16)
where Ki is the drag coefficient (assumed zero, since drag is negligible at
low speed).
The angular motion of the quadcopter is given by
(2.17)
(2.18)
Controller Input
The quadcopter has four controller inputs: U1, U2, U3, and U4.
(2.19)
(2.20)
(2.21)
(2.22)
where,
Thi = thrust generated by the ith motor
l = force-to-moment scaling factor
Ii = moment of inertia with respect to the axes.
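Equations (2.19)-(2.22) are not reproduced in this extraction, so the sketch below follows the standard plus-configuration mapping from motor thrusts to U1..U4; the exact signs depend on the motor numbering and are an assumption here:

```python
def control_inputs(th, l, c):
    """Map the four motor thrusts th = [Th1, Th2, Th3, Th4] to the control
    inputs U1..U4 (standard plus configuration; signs are convention-dependent)."""
    th1, th2, th3, th4 = th
    u1 = th1 + th2 + th3 + th4        # total thrust -> altitude
    u2 = l * (th4 - th2)              # roll moment
    u3 = l * (th3 - th1)              # pitch moment
    u4 = c * (th1 - th2 + th3 - th4)  # yaw moment from rotor reaction torques
    return u1, u2, u3, u4
```

With equal thrusts the moments cancel and only the collective thrust U1 remains, which is what makes pure altitude control possible.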
The second derivatives of the Euler angles are:
(2.23)
(2.24)
(2.25)
(2.27)
This equation reveals the z-axis position and the z-axis velocity as two
states.
unique angle value for each axis that can represent an absolute angle of
rotation or a variation with respect to the previous time step. Therefore,
one can use a total of only three closed-loop systems instead of six. Also,
there are multiple ways of combining those signals, and the control system
efficiency will directly depend on which way is chosen. Since the sensors
can have errors (e.g. noise and drift), a method is chosen that minimizes the
main problems of each sensor and combines the best features of each.
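One common way to realize such a fusion is a complementary filter; the chapter does not name its exact method, so this is an illustrative sketch with a placeholder crossover coefficient:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a smooth-but-drifting integrated gyro estimate with a noisy-but-
    drift-free accelerometer angle; alpha sets the crossover (placeholder)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

The high-pass weight on the integrated gyro suppresses its drift, while the low-pass weight on the accelerometer suppresses its noise, capturing the best feature of each sensor.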
(2.28)
(2.29)
(2.30)
(2.31)
where mi = signal sent to the ith motor, T = throttle signal, and the remaining
terms are the control signals for the pitch, roll and yaw axes respectively.
It can be seen in the equations above that those signals are used in a
differential mode due to the plant symmetry. Another important detail is
that the sign of each term inside each equation depends on the axis reference
(e.g. whether positive pitch rotation is clockwise or counter-clockwise) as
well as on where each motor is positioned. One must therefore know the
motor configuration (plus + or cross ×) before writing the equations. Finally,
each motor is individually controlled in an open-loop configuration, which
can already lead to a flyable quadcopter.
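A minimal sketch of the mixing in Eqs. (2.28)-(2.31) for a plus (+) configuration follows; the motor numbering and signs are assumptions, since conventions differ between airframes:

```python
def mix_plus(throttle, u_roll, u_pitch, u_yaw):
    """Differential mixing for a plus (+) frame, motors numbered
    front(1)/right(2)/rear(3)/left(4); signs depend on the axis conventions."""
    m1 = throttle + u_pitch + u_yaw   # front
    m2 = throttle - u_roll - u_yaw    # right
    m3 = throttle - u_pitch + u_yaw   # rear
    m4 = throttle + u_roll - u_yaw    # left
    return m1, m2, m3, m4
```

Note the symmetry: the control terms cancel in the sum, so the total (altitude-producing) thrust is set by the throttle alone while each axis command only redistributes thrust between opposing motors.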
SIMULATION
The quadcopter design details were used to produce a model to be constructed
in the simulation window. Some effects (such as aerodynamic blade flapping)
were ignored or dramatically simplified, which is a limitation of the model.
The system model contains five major blocks (systems), each
containing its own subsystem or unit blocks that describe the overall system
dynamics of our quadcopter. The system blocks include: i) Path Command
block; ii) Position Controller block; iii) Attitude Controller block; iv)
Quadcopter Control mixing block; v) Quadcopter Dynamics block.
Figure 3. (a) Simulink block for the pitch (Theta) correction; (b) Simulink
block for the yaw (Psi) correction.
(3.2)
(3.3)
. (3.4)
Quadcopter Dynamics
The quadcopter dynamics block contains two major subsystem blocks: the
motor dynamics block and the state equations block. The primary function
of the motor dynamics block is to convert the percentage throttle command
to motor RPM.
The motor dynamics block contains the state-space model;
(3.5)
(3.6)
where,
The four RPM values generated are fed into the state equations block, which
contains a level-two S-function block where the flight-dynamics state equations
are loaded. This S-function is written in MATLAB. The S-function block
holds the initial conditions (Table 1) used for the simulation. Its parameters are
the quadcopter model functions and the input initial conditions. The outputs
describe the physical behavior of the quadcopter.
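The throttle-to-RPM conversion can be sketched as a first-order lag; this is an illustration of the idea, and the gain and time constant below are placeholders, not the paper's identified motor parameters:

```python
def motor_rpm_step(rpm, throttle_pct, dt, k_rpm=9000.0, tau=0.15):
    """One Euler step of a first-order motor model: the steady-state RPM is
    proportional to throttle and is approached with time constant tau."""
    rpm_ss = k_rpm * throttle_pct / 100.0
    return rpm + (rpm_ss - rpm) * dt / tau
```

Stepping this model repeatedly reproduces the exponential spin-up toward the commanded RPM that the state-space motor block captures.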
The main function of the S-function block is to use the various parameters
fed into it to calculate the quadcopter outputs (angular velocities, linear
velocities, position along the x, y and z axes, and attitude (yaw, pitch and
roll)), which are then used for the simulation plots and 3D animation. A
disturbance block was also added as an input to the S-function block to
simulate the response to external forces that might act on the quadcopter
body during flight, and how the quadcopter can compensate and recover
from these disturbances while still maintaining flight stability (the set point).
The data used for the quadcopter simulation was based on the design
model presented. The individual parameters of the components that make up
the quadcopter were carefully calculated. Some parameters, such as torque,
thrust and time constants, were estimated from measured data of a similar
system. This estimation was necessary due to the unavailability of the test
bench (and instruments) required to carefully monitor and measure these
parameters. Since our design model is a quadcopter for payload delivery,
our simulation focused on the following: 1) Load test (maximum allowable
load on-board); 2) Hover test (with load and without load); 3) Disturbance
test (with load and without load).
The PID Control gains were carefully tuned and the appropriate gain
value was chosen based on the simulation results obtained. The PID gains
used in the Attitude controller are presented in Table 1 while the PD gains of
the position controller are presented in Table 2.
Hover Test
The hover simulation of a quadcopter in flight describes its ability to float
about a fixed point in the air. The hover test was done at 10 feet and the
results are given below.
The graphs shown in Figure 4 represent a stable system. At first there
was an overshoot, but over time the controller drives each of these parameters
to the set point.
Disturbance Test
Disturbance was introduced to the quadcopter flight simulation which could
represent the action of external force like wind on the body of the quadcopter
while in flight. The graph in Figure 5 represents the system without any
disturbance. However, as can be seen from the graph in Figure 6, when a
disturbance was introduced, the quadcopter was able to compensate properly.
Therefore, the PID controller logic is functional and it can be implemented
for the actual system.
The effect of a disturbance on the flight path, with regard to linear and
angular velocity, was further tested; the result is presented in Figure 7. The system
acknowledged the disturbance but was able to compensate adequately, and
return to a stable flight.
Load Test
Since the quadcopter design model is for payload delivery, the simulation
shows how an increase in the gross mass of the quadcopter affects its motor
speed. The result is presented in Figure 8 and Figure 9. Again, the system
was able to maintain a stable flight.
external disturbances. The Quad X structural model was used to enable the
quadcopter to gain more speed over a short period of operation, as timely
delivery of the payload is an important factor. Future work will be to equip the
quadcopter with a solar-powered system, and to encrypt the communication
between the quadcopter and the ground station.
CONFLICTS OF INTEREST
The authors declare no conflicts of interest.
REFERENCES
1. Gupte, S., Mohandas, P.I.T. and Conrad, J.M. (2012) A Survey of
Quadrotor Unmanned Aerial Vehicles. Proceedings of IEEE
SoutheastCon 2012, Orlando, FL, 15-18 March 2012, 1-6. http://
dx.doi.org/10.1109/secon.2012.6196930
2. Zeng, Y., Zhang, R. and Lim, T.J. (2016) Wireless Communications
with Unmanned Aerial Vehicles: Opportunities and Challenges. IEEE
Communications Magazine, 54, 36-42.
3. Mustapa, M.Z. (2015) Altitude Controller Design for Quadcopter UAV.
Jurnal Teknologi.
4. Guclu, A. (2012) Attitude and Altitude Control of an Outdoor
Quadrotor. The Graduate School of Natural and Applied Science,
Atilim University, 76.
5. Ononiwu, G.C., Onojo, O.J., Chukwuchekwa, N. and Isu, G.O.
(2015) UAV Design for Security Monitoring. International Journal of
Emerging Technology & Research, 2, 16-24.
6. Hochstenbach, M. and Notteboom, C. (2015) Design and Control of
an Unmanned Aerial Vehicle for Autonomous Parcel Delivery with
Transition from Vertical Take-off to Forward Flight. International Journal
of Micro Air Vehicles, 7, 395-405. http://dx.doi.org/10.1260/1756-
8293.7.4.395
7. Min, B.-C., Hong, J.-H. and Matson, E.T. (2011) Adaptive Robust
Control (ARC) for an Altitude Control of a Quadrotor Type UAV
Carrying an Unknown Payloads. 2011 11th International Conference
on Control, Automation and Systems, Korea, 26-29 October 2011,
1147-1151.
SECTION 2
TRAJECTORY CONTROL
TECHNIQUES
CHAPTER
5
Advanced UAV Trajectory Generation:
Planning and Guidance
INTRODUCTION
As technology and legislation move forward (JAA & Eurocontrol, 2004)
remotely controlled, semi-autonomous or autonomous Unmanned Aerial
Systems (UAS) will play a significant role in providing services and
enhancing safety and security of the military and civilian community at
large (e.g. surveillance and monitoring) (Coifman et al., 2004). The potential
Citation: Antonio Barrientos, Pedro Gutierrez and Julian Colorado (January 1st 2009).
“Advanced UAV Trajectory Generation: Planning and Guidance”, Aerial Vehicles,
Thanh Mung Lam, IntechOpen, DOI: 10.5772/6467.
Copyright: © 2009 The Author(s). Licensee IntechOpen. This chapter is distributed
under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike-3.0
License
market for UAVs is, however, much bigger than just surveillance. UAVs
are ideal for risk assessment and neutralization in dangerous areas such as
war zones and regions stricken by disaster, including volcanic eruptions,
wildfires, floods, and even terrorist acts. As they become more autonomous,
UAVs will take on additional roles, such as air-to-air combat and even
planetary science exploration (Held et al., 2005).
As the operational capabilities of UAVs are developed there is a
perceived need for a significant increase in their level of autonomy,
performance, reliability and integration with a controlled airspace full of
manned vehicles (military and civilian). As a consequence, researchers
working with advanced UAVs have moved their focus from system
modeling and low-level control to mission planning, supervision and
collision avoidance, going from vehicle constraints to mission constraints
(Barrientos et al., 2006). This mission-based approach is most useful for
commercial applications where the vehicle must accomplish tasks with a
high level of performance and maneuverability. These tasks require flexible
and powerful trajectory-generation and guidance capabilities, features
lacking in many of the current commercial UAS. For this reason, the
purpose of this work is to extend the capabilities of commercially available
autopilots for UAVs. Civil systems typically use basic trajectory-generation
algorithms, capable only of linear waypoint navigation (Rysdyk, 2003),
with a minimum or non-existent control over the trajectory. These systems
are highly constrained when maneuverability is a mission requirement. On
the other hand, military researchers have developed algorithms for high-
performance 3D path planning and obstacle avoidance (Price, 2006), but
these are highly proprietary technologies that operate with different mission
constraints (target acquisition, threat avoidance and situational awareness)
so they cannot be used in civil scenarios.
This chapter presents a robust Trajectory Generation and Guidance
Module (TG2M), a software tool capable of generating complex six-degrees-
of-freedom trajectories in 3D space with velocity, acceleration, orientation
and time constraints. The TG2M is an extension module to the Aerial
Vehicle Control Language (AVCL), a software architecture and interpreted
language specification that addresses the issues of mission definition, testing,
validation and supervision for UAVs (Barrientos et al., 2006). The AVCL
platform incorporates a 3D visual simulator environment that uses a
Geographic Information System (GIS) as the world’s data model. The GIS
backend means all objects are geo-referenced and that several official and
commercial databases may be used for mission planning (roads, airports,
power lines, crop fields, etc.). The language specification contains a wide
set of instructions that allow the off/on-line supervision and the creation of
vehicle-independent missions.
The TG2M is designed to model 3D cartesian paths with parametric
constraints. The system uses two types of mathematical descriptors for
trajectories: analytical functions and polynomial interpolation. The two
main contributions of the module are its geometrical representation of the
trajectory and its parametric definition. Simple maneuvers like lines and
circumference arcs are created with analytical functions that constrain the
geometry of the desired path; then the parametric constraints are applied.
These constraints are typically kinematic: constant velocity and acceleration,
trapezoidal curves to accelerate or stop, etc. More complex maneuvers are
described with polynomial interpolation and fitted to the critical control
path points, meeting desired position, time and velocity constraints. These
polynomial functions are based on third and fourth order splines with fixed
boundary conditions (for example initial and final velocities), which join all
control points with a continuous and smooth path (Jaramillo-Botero et al.,
2004).
Section 2 of this chapter presents some of the current techniques drawn
from military aerial-survey applications, showing complex mission-planning
tools capable of addressing the mission-specific constraints required. This
section also introduces a brief description of the AVCL architecture,
describing its components, modules and main features, and addressing a
novel approach to human mission-planning definition and testing. Section 3
introduces the AVCL built-in TG2M framework, describing the different
techniques used for the trajectory planning and guidance of UAVs, as well
as the mathematical treatment of these methods (analytical functions and
polynomial interpolation). Simulation-based results (Section 4), using a
mini-helicopter simulator embedded in the AVCL environment, show the
capabilities of the TG2M while flying both aggressive and simple maneuvers.
Finally, Section 5 offers final observations on the TG2M framework and the
additions currently under development.
MOTION-PLANNING METHODOLOGIES
A planning algorithm should provide feasible and flyable optimal trajectories
that connect starting and target points, which should be compared and
valued using specific criteria. These criteria are generally connected to the
following major concerns, which arise during a plan generation procedure:
feasibility and optimality. The first concern asks for the production of a plan
to safely “move” the UAV to its target state, without taking into account the
quality of the produced plan. The second concern asks for the production
of optimal, yet feasible, paths, with optimality defined in various ways
according to the problem under consideration (LaValle, 2006). Even in
simple problems searching for optimality is not a trivial task and in most
cases results in excessive computation time, not always available in real-
world applications. Therefore, in most cases we search for suboptimal or
just feasible solutions. The simplest way to model an UAV path is by using
straight-line segments that connect a number of waypoints, either in 2D
or 3D space (Moitra et al., 2003, Zheng et al., 2005). This approach takes
into account the fact that in typical UAV missions the shortest paths tend to
resemble straight lines that connect waypoints with starting and target points
and the vertices of obstacle polygons. Although waypoints can be efficiently
used for navigating a flying vehicle, straight-line segments connecting the
corresponding waypoints cannot efficiently represent the real path that the
vehicle will follow, owing to the vehicle's own kinematics. As a
result, these simplified paths cannot be used for an accurate simulation of the
movement of the UAV in an optimization procedure, unless a large number
of waypoints is adopted. In that case the number of design variables in the
optimization procedure explodes, along with the computation time. This is
why this section presents some background on state-of-the-art mission/path
planning for UAVs. The purpose (apart from the survey itself) is to compare
these approaches with the AVCL architecture, in order to show how its
embedded TG2M framework (see Section 3 for details) complements
approaches that are lacking in this specialized literature.
Background
Researchers at top research centers and universities (e.g. JPL at Caltech,
Carnegie Mellon, NASA, ESA, among others) are dealing with the
development of new strategies that allow high-level mission UAV planning
within the desired flight performance. As examples of these approaches
we present the NASA Ames Research Center, the Georgia Institute of
Technology, and the University of Illinois.
For the past decade NASA has focused on developing an efficient and
robust solution to Obstacle Field Navigation (OFN), allowing a fast planning
of smooth and flyable paths around both known and unknown obstacles
streaming video from the camera and the output of the image processor.
This allows the operator to monitor the efficiency of the image processing
as well as visually document the results of the search in the final phase of
the mission.
For other projects the main goal is the unification of high-level mission
commands and development of generic languages that support online UAV
mission planning without constraining the system to a single vehicle. The
University of Illinois at Urbana-Champaign (Frazzoli, 2002) presents a new
framework for UAV motion planning and coordination. This framework
is based on the definition of a modelling language for UAV trajectories,
based on the interconnection of a finite number of suitably defined motion
primitives, or maneuvers. Once the language is defined, algorithms for the
efficient computation of feasible and optimal motion plans are established,
allowing the UAV to fulfill the desired path.
If we analyze the UAS described above, almost all of them share one
important limitation: their software architecture is tightly coupled to one
vehicle and to the capabilities of its low-level controller. Civil applications
require open and extendable software architectures capable of talking to
vehicles from different suppliers. The AVCL addresses those limitations,
allowing different vehicles to be modelled in a single common language
(vehicle-independent missions). In the same fashion, the described vehicles
show that complex and simple maneuvers can each be a suitable solution
depending on the kind of mission to fulfill. For this reason the AVCL is
extended with the TG2M framework, capable of generating simple and
complex 3D paths with the necessary vehicle constraints. The next sub-section
introduces the AVCL architecture and some of its features.
some coefficients that determine the spline used to interpolate the numerical
data points. These coefficients bend the line so that it passes through each
of the data points without any erratic behavior or breaks in continuity. The
essential idea is to define a third-degree polynomial function S(t) of the
form:
(1)
This third-degree polynomial needs to conform to the following conditions
in order to interpolate the desired knot-points as depicted in Fig. 3:
(2)
Also, to make the curve smooth across the interval, the first derivative
of the S(t) function must be equal to the derivative of the position reference
points; this yields:
(3)
Applying the same approach for the second derivative:
(4)
For the solution of the polynomial coefficients in Eq. (1), the system in
Eq. (5) must be solved as:
(5)
where the matrix A ∈ ℜ^(n×n) (n is the number of knot-points to interpolate)
corresponds to:
(6)
The term h ∈ ℜ^n in Eq. (6) is the time vector defined for each point
(P0, P1, …, Pn−1). The bi coefficients in Eq. (1) are stacked in Y ∈ ℜ^(n×3),
which yields the term f ∈ ℜ^(n×3) as:
(7)
From Eq. (2) and (3), the ai and ci coefficients are respectively obtained as:
(8)
(9)
The same system A·Y = f must be solved with the new A ∈ ℜ^(n×n) from Eq.
(9) and f ∈ ℜ^(n×3) defined in Eq. (10). Note that the first derivative of the S(t)
function has been added in the first and last positions of the f vector, allowing
control of the initial and final velocities of the curve.
(10)
The first procedure is to obtain the time vector as follows:
Finally, the two polynomials for each (x, y, z) component are evaluated
over the time intervals [0, 10] and [10, 20]. Figure 4 shows the results.
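The A·Y = f construction can be illustrated numerically for a single clamped cubic segment; this sketch solves the four boundary conditions directly rather than the full tridiagonal multi-knot system of Eq. (5), so it is a simplification:

```python
import numpy as np

def cubic_segment(p0, p1, v0, v1, T):
    """Solve A.Y = f for one segment S(t) = a + b*t + c*t**2 + d*t**3 with
    clamped boundary conditions (positions and velocities at both ends)."""
    A = np.array([
        [1.0, 0.0, 0.0,   0.0],       # S(0)  = p0
        [1.0, T,   T**2,  T**3],      # S(T)  = p1
        [0.0, 1.0, 0.0,   0.0],       # S'(0) = v0
        [0.0, 1.0, 2.0*T, 3.0*T**2],  # S'(T) = v1
    ])
    f = np.array([p0, p1, v0, v1], dtype=float)
    return np.linalg.solve(A, f)  # coefficients [a, b, c, d]

# One 10-second segment from 0 to 10 with zero boundary velocities:
a, b, c, d = cubic_segment(p0=0.0, p1=10.0, v0=0.0, v1=0.0, T=10.0)
```

For a 3D trajectory the same solve is simply repeated for each (x, y, z) component, mirroring the per-component evaluation described above.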
(11)
The same 3D-spline conditions previously described also apply to the
4D-spline. To obtain a generalized solution of the system to solve (A·Y = f),
we start from the three-point case as depicted in Fig. 5. The polynomials
for each trajectory segment as a function of time t are:
(12)
Taking the first and second derivatives (velocities and accelerations),
we obtain:
(13)
The second derivatives of Eq. (12) yield a set of accelerations. Equating
the acceleration functions at the intermediate points (t1 in each case) and
setting the initial and final accelerations of the path segment to zero yields:
(14)
Equations (12), (13) and (14) form the complete system for obtaining
the ten polynomial coefficients. Solving A·Y = f, the matrix
A ∈ ℜ^(5(n−1)×5(n−1)) corresponds to:
(15)
The polynomial coefficients are stacked into the Y ∈ ℜ^(5(n−1)) vector, and
with the f ∈ ℜ^(5(n−1)) term, the total system is defined from Eq. (14) and (15):
(16)
Example 2. UAV trajectory planning using 4D splines with total velocity
control. Let us define three knot-points as P0 = [0, 0, 0], P1 = [5, 7, 5],
P2 = [4, 2, 10] at times t = [0, 4, 8] (the time vector components are given
in seconds). Calculate the 4D spline considering the following velocity
profile: V0 = [0, 0, 0], V1 = [0.5, 0.8, 1], V2 = [1, 1.5, 1].
Solving the system Y = A⁻¹·f, we obtain the following numerical data:
Finally, the two polynomials for each (x, y, z) component are evaluated
over the time intervals [0, 4] and [4, 8] seconds. Figure 6 shows the results:
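Example 2 can be checked numerically by treating each segment as a clamped cubic per axis; this is a simplification of the chapter's full 4D-spline system A·Y = f, used here only to verify that the curve passes through the knots with the prescribed velocities:

```python
import numpy as np

# Knot points and per-knot velocities from Example 2.
P = np.array([[0, 0, 0], [5, 7, 5], [4, 2, 10]], dtype=float)
V = np.array([[0, 0, 0], [0.5, 0.8, 1], [1, 1.5, 1]], dtype=float)

def hermite(p0, p1, v0, v1, T, s):
    """Evaluate, at local time s in [0, T], the cubic meeting p0, p1, v0, v1."""
    A = np.array([[1.0, 0.0, 0.0,   0.0],
                  [1.0, T,   T**2,  T**3],
                  [0.0, 1.0, 0.0,   0.0],
                  [0.0, 1.0, 2.0*T, 3.0*T**2]])
    a, b, c, d = np.linalg.solve(A, [p0, p1, v0, v1])
    return a + b*s + c*s**2 + d*s**3

# The first segment spans t = 0..4 s; its end must land on the middle knot P1.
mid = [hermite(P[0, k], P[1, k], V[0, k], V[1, k], 4.0, 4.0) for k in range(3)]
```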
(17)
Once the intermediate points have been defined, the three segments
that compose the total function are defined by Eq. (18). In the first segment
(from P0 to P(τ)) the initial velocity is set to Vi = 0 and progresses toward a
final velocity Vk. The second segment (from P(τ) to P(T − τ)) is traced at
the constant maximum velocity Vk. Finally, the last segment (from P(T − τ)
to P(T)) drives the vehicle from Vk to zero velocity.
(18)
The same approach is applied to generate circumferences with the
trapezoidal profile used for the straight-lines. Three knot-points define the
circumference path and the objective is to find the trace angle across the
trajectory.
(19)
The center of the circle c = (xc, yc, zc) is always equidistant from any point
on the circle; then:
(20)
To obtain the relation between the angular motion (θ), the angular velocity
(ω) and the tangential velocity (vt), constant velocity is assumed across the
path, yielding:
(21)
Likewise, the relation between the angular motion (θ) and the angular
acceleration of the curve is given by:
(22)
If the known parameter is the total motion time (T) of the trajectory, we
set the acceleration term to the left-side of the equation, yielding:
(23)
Otherwise, if the constrained parameters are the initial and final velocity,
the acceleration function is given by:
(24)
Example 3. UAV straight-line trajectory with trapezoidal velocity profile.
Let us define a straight-line trajectory from P0 = [0, 0, 20] to
P(T) = [10, 0, 20], a displacement of 10 meters along the X coordinate at a
constant altitude of 20 meters. Define the 3D cartesian trajectory with a
maximum velocity of 1 m/s, taking into account a trapezoidal velocity profile
with 30% acceleration time.
The distance vector to trace is dist = P(T) − P0 = [10, 0, 0].
The total distance is obtained from the norm of the vector dist:
‖dist‖ = 10.
The first segment of the path (from P0 to P(τ )) is calculated using Eq.
(17):
Finally the last segment drives the vehicle from Vk to zero end velocity.
The time vector is:
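The trapezoidal profile of Example 3 can be sketched numerically. This assumes "30% acceleration time" means the ramp-up (and ramp-down) each occupy 30% of the total time T, which fixes T from the covered distance vmax·(T − τ):

```python
def trapezoidal_velocity(t, vmax, T, accel_frac=0.3):
    """Trapezoidal profile: ramp up over accel_frac*T, cruise at vmax,
    ramp down over the final accel_frac*T."""
    tau = accel_frac * T
    if t < tau:
        return vmax * t / tau
    if t <= T - tau:
        return vmax
    if t <= T:
        return vmax * (T - t) / tau
    return 0.0

# Example 3: 10 m at vmax = 1 m/s. The trapezoid covers vmax*(T - tau) metres,
# so T = 10 / (1 * (1 - 0.3)), roughly 14.3 s under this reading of "30%".
T = 10.0 / (1.0 * (1.0 - 0.3))
```

Integrating this velocity over [0, T] recovers the 10 m displacement, which is the consistency check the derivation above relies on.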
UAV Guidance
The geometrical representation of a UAV trajectory was presented in
the previous sub-section. Complex trajectories may be described using third-
and fourth-order splines, while simple maneuvers may be generated using
common line and circumference functions with parametric features defined
by the end-user. However, to complete the generation of trajectories for a
single UAV, a guidance module is required.
This vector is derived from the UAV position error between the current and
desired positions. Finally, the velocity references Vxref and Vyref are the
components of the vector Vt +Ve.
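The Vt + Ve construction can be sketched as follows; the gain kp and the helper names are illustrative, not taken from the AVCL implementation. The yaw helper reflects the simple-maneuver trigonometric method described later in this section:

```python
import math

def velocity_reference(p_cur, p_des, v_traj, kp=0.8):
    """Vref = Vt + Ve: feedforward trajectory velocity plus a proportional
    correction Ve of the position error (kp is a placeholder gain)."""
    ve = [kp * (d - c) for c, d in zip(p_cur, p_des)]
    return [vt + e for vt, e in zip(v_traj, ve)]

def yaw_reference(p_cur, p_next):
    """Simple-maneuver yaw: point the nose along the path displacement."""
    return math.atan2(p_next[1] - p_cur[1], p_next[0] - p_cur[0])
```

When the vehicle is exactly on the trajectory the error term vanishes and the reference reduces to the feedforward velocity Vt alone.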
The guidance module must take into account that the helicopter’s orientation
(yaw) is independent from the direction of travel. The high-level modules
must generate a set of orientations across the vehicle’s trajectory. The built-
in AVCL control module (see Fig. 10) is capable of receiving velocity and
yaw angle orientation commands from the TG2M module and generating the
needed commands for the attitude controller that governs the vehicle’s roll
and pitch. The TG2M’s guidance module focuses on the generation of yaw
references, and uses a simple proportional “P” controller for smooth yaw-angle
transitions during flight. Two methods are used to generate the yaw
angle references: for simple maneuvers the yaw angle is calculated using
simple trigonometric relations from the path displacement; for complex
trajectories using splines we introduce a feasible method to calculate a
smooth yaw-angle evolution. This method also shows how to calculate the
roll and pitch references from the vehicle’s trajectory and velocity. For the
roll, pitch and yaw angle calculations, the following frame of reference is used:
(25)
In Eq. (25), η1 denotes the position of the center of mass CM of the
vehicle and η2 its orientation described by the Euler angles with respect to
the inertial frame {I}. The vector v1 refers to the linear velocity and v2 to the
angular velocity of vehicle frame {B} with respect to inertial frame {I}. In
order to express the velocity quantities between both frames of reference
(from {B} to {I} and vice versa), the following transformation matrix is
used:
(26)
The body-fixed angular velocities and the rate of the Euler angles are
related through:
(27)
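Eq. (27) is not reproduced in this extraction; it usually takes the standard form relating body rates (p, q, r) to Euler-angle rates, and the sketch below assumes that convention:

```python
import math

def euler_rates(phi, theta, p, q, r):
    """Body angular velocities (p, q, r) to Euler-angle rates; singular at
    theta = +/-90 degrees (the well-known gimbal-lock condition)."""
    t, c = math.tan(theta), math.cos(theta)
    phi_dot = p + q * math.sin(phi) * t + r * math.cos(phi) * t
    theta_dot = q * math.cos(phi) - r * math.sin(phi)
    psi_dot = (q * math.sin(phi) + r * math.cos(phi)) / c
    return phi_dot, theta_dot, psi_dot
```

Near hover (small roll and pitch) the mapping approaches identity, which is why many quadrotor controllers treat body rates and Euler rates interchangeably in that regime.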
The position and the magnitude of the velocity vector at a point P on the
trajectory are given by:
(28)
The method to define the Euler angles is based on the Frenet-Serret theory
(Angeles, 1997). To every point of the curve we can associate an orthonormal
triad of vectors (a set of mutually orthogonal unit vectors), namely
the tangent et, the normal en and the binormal eb (see Fig. 13). The Frenet-
Serret theory says that by properly arranging these vectors in a matrix ∈ ℜ^(3×3),
we obtain a description of the curve orientation from the position, velocity
and acceleration of the UAV while tracing out the path. The unit vectors are
then defined as:
$$e_t = \frac{\dot p}{\lVert \dot p \rVert},\qquad e_b = \frac{\dot p \times \ddot p}{\lVert \dot p \times \ddot p \rVert},\qquad e_n = e_b \times e_t \qquad (29)$$
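Numerically, the Frenet triad can be formed directly from the path derivatives; a minimal sketch of ours (not the chapter's implementation):

```python
import numpy as np

def frenet_triad(vel, acc):
    """Orthonormal tangent/normal/binormal vectors at a curve point,
    from the first (velocity) and second (acceleration) derivatives."""
    vel = np.asarray(vel, float)
    acc = np.asarray(acc, float)
    e_t = vel / np.linalg.norm(vel)   # tangent
    b = np.cross(vel, acc)            # binormal direction
    e_b = b / np.linalg.norm(b)
    e_n = np.cross(e_b, e_t)          # normal completes the triad
    return e_t, e_n, e_b
```

The three results are mutually orthogonal unit vectors, so stacking them as columns yields a valid rotation matrix describing the curve orientation.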
In the definition of a frame associated with the curve, the original
definition of the Frenet frame for counterclockwise rotating curves is used;
in the case of a clockwise rotating curve, the z-axis of the Frenet frame
points in the direction opposite to that of the inertial {I} frame. So, in
order to define small relative rotation angles for the orientation of the
vehicle, the Frenet frame is rotated accordingly (see Fig. 13).
Figure 13. The inertial, the Frenet, the vehicle and the curve Frames
Collectively, we denote the Frenet and the rotated frames as the “curve”
frame {C}. Following the notation of rotational transformations used in the
robotics literature, the coordinates of a vector given in the curve frame {C}
can be expressed in the {I} frame with the matrix:
$$R_C^I = [\,e_t \;\; e_n \;\; e_b\,] \qquad (30)$$
For a counterclockwise rotation: . Likewise,
the rotation of the {B} frame from the {C} frame to the reference {R} frame
can be expressed using customary aeronautical notation by considering the
sideslip angle β and angle of attack α, (Siouris, 2004):
(31)
The vector vR refers to the y-axis velocity component in the reference
frame, and wR, uR to the z- and x-axis components respectively. The overall
rotation is composed of a rotation about the body zB axis through the angle β,
followed by a rotation about the body yB axis through the angle α, which is
expressed as:
(32)
Finally, the roll, pitch and yaw angles can be deduced as follows:
$$\phi = \operatorname{atan2}(r_{3,2},\, r_{3,3}),\qquad \theta = -\arcsin(r_{3,1}),\qquad \psi = \operatorname{atan2}(r_{2,1},\, r_{1,1}) \qquad (33)$$
where ri,j represent the components of the rotation matrix RIR ∈ ℝ3×3.
Applying the previous methodology, we use 3D splines with fixed
boundary conditions in order to generate a complicated path as shown in
Fig. 11. Eight knot-points have been used (distances are in meters): P=[0 0 0;
5 1 2; 10 5 5; 15 10 10; 10 15 15; 5 10 20; 0 8 20; 5 0 20]. Eq. (33) has been
used to obtain the UAV orientation with respect to the Inertial Frame as:
Figure 14. Roll, Pitch and Yaw angle references for UAV guidance
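As a rough illustration of how such orientation references arise from the knot sequence (a finite-difference sketch of ours, not the spline-based computation used in the chapter):

```python
import numpy as np

# Knot-points from the text (distances in meters)
P = np.array([[0, 0, 0], [5, 1, 2], [10, 5, 5], [15, 10, 10],
              [10, 15, 15], [5, 10, 20], [0, 8, 20], [5, 0, 20]], float)

# Segment tangents from successive knot differences
d = np.diff(P, axis=0)

# Coarse heading (yaw) and climb (pitch-like) references per segment
yaw = np.arctan2(d[:, 1], d[:, 0])
climb = np.arctan2(d[:, 2], np.hypot(d[:, 0], d[:, 1]))
```

A spline-based version would instead differentiate the interpolated curve, yielding the smooth profiles shown in Fig. 14.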
For complex maneuvers the Frenet theory allows defining smooth yaw
references, as well as roll and pitch angles when required. Nevertheless, the
computational cost of evaluating these equations could degrade system
performance if the number of knot-points of the path is large. For this reason,
normal maneuvers such as straight lines use simple trigonometric theory
to obtain the UAV orientation. On the other hand, the end-user is able to
define the kind of orientation of the vehicle: the vehicle is not constrained
to be oriented along the trajectory direction, and is instead able to trace out
the trajectory while oriented towards any defined fixed point.
The yaw angle defined by the term ψ is given by:
$$\psi = \arctan\!\left(\frac{y_{diff}}{x_{diff}}\right) \qquad (34)$$
where xdiff, ydiff correspond to the differences between the target fixed point
and the current position of the UAV. In addition, depending on the motion
quadrant, the term ψ in Eq. (34) must be corrected: for the {x−, y+}
and {x−, y−} quadrants the yaw angle becomes ψ = ψ + π, while for the
{x+, y−} quadrant, ψ = ψ + 2π.
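This quadrant correction makes the plain arctangent agree with the four-quadrant arctangent (modulo 2π); a small check of ours, assuming xdiff ≠ 0:

```python
import numpy as np

def yaw_to_point(x, y, x_target, y_target):
    """Yaw reference toward a fixed target point, applying the
    quadrant correction described for Eq. (34) (xdiff must be nonzero)."""
    xdiff, ydiff = x_target - x, y_target - y
    psi = np.arctan(ydiff / xdiff)
    if xdiff < 0:                    # {x-, y+} and {x-, y-} quadrants
        psi += np.pi
    elif ydiff < 0:                  # {x+, y-} quadrant
        psi += 2.0 * np.pi
    return psi
```

The result always lies in [0, 2π), which avoids discontinuous jumps when wrapping the heading command.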
Figure 15 shows a circumference arc generated using Eq. (24), as well as
the yaw evolution of the UAV oriented towards the center of the arc, located
at the [0, 0] coordinate in the x-y plane, using the theory previously described
in Eq. (34).
This section has introduced the mathematical treatment and
methods for the generation of complex trajectories and simple maneuvers
using the available theory reported in the specialized literature (LaValle,
2006), (Jaramillo-Botero et al., 2004). Geometrical trajectory generation,
and techniques for its parameterization based on polynomial splines
and functions with trapezoidal velocity profiles, are an interesting solution to this
problem; indeed, some of these methodologies are currently used for complex UAV
trajectory definition. The novel solution presented in this book
is the integration of these methods into a powerful environment that allows
high-level user control of complex missions. The AVCL provides
those features, and the next section introduces some tests using the AVCL
interpreter and the simulation environment.
TG2M SIMULATION RESULTS
As shown in Fig. 1, the Mission Planner (MP) has two similar loops
for mission planning and simulation/supervision. The difference is that in
the Planning Loop the interpreter sends the projected waypoints back to
the MP’s Enhanced Reality, while in the Simulation Loop the interpreter
commands the simulated vehicle, which in turn sends the simulated positions
to the MP. Our research group has developed a Simulink-based model of
a UAV helicopter named Vampira, which includes a position controller
and is capable of real-time simulation. This simulator has been used with
the Mission Planning and Simulation tool to test the TG2M. For Mission
Supervision the AVCL commands would be sent to the real vehicle, and its
position plotted alongside the projected and/or simulated paths. The Vampira
helicopter was built within the framework of the project: “Guidance and
Control of an Unmanned Aerial Vehicle” DPI 2003 01767 (VAMPIRA),
granted by the Spain Ministry of Science and Technology, and it will be
used for the real-world tests of the built-in TG2M framework. Figure 16
shows the Vampira prototype, which includes: a GPS, Wi-Fi link, IMU, and
a PC104 computer for the low-level control (main rotor and tail servos).
The Vampira’s dynamics model has been obtained, identified and validated
in previous works (Valero, 2005), (del Cerro et al., 2004). This work takes
advantage from the AVCL simulation capabilities in order to validate the
TG2M framework theory for trajectory planning.
Figure 18. Test1: Cartesian UAV position and velocities with respect to the
Inertial Frame
The UAV started from the initial point located at coordinate (0, 0, 0) and
finished its trajectory at (6, −7, 20). The visual simulation depicted in Fig. 17
showed smooth motion across the trajectory thanks to the 3D spline approach.
Nonetheless, 3D splines only allow the user to define the initial and final
velocities of the motion, lacking velocity control for the remaining
knot-points.
FINAL OBSERVATIONS
For modeling continuous Cartesian trajectories in the AVCL, several analytical
functions and polynomial interpolation methods are available; all of which
can be used in any combination. The TG2M module handles the definition
of trajectories using knot control points as well as the incorporation of path
constraints. It also supports the definition of complex tasks that require
the construction of trajectories made up of different primitive paths in any
combination of analytical and interpolated functions. The user-designed
spatial trajectories can be visualized in three dimensions on the display
window or plotted versus time using the embedded plotting library.
Simulation results have shown that the TG2M module performs reliably
for the definition and testing of a wide range of smooth trajectories, giving
the user high-level control of the mission through the AVCL interpreter.
The three scenarios used for testing verified that the mathematical
framework used for trajectory generation and guidance worked as intended
during simulated flight. Percentage errors during maneuver execution were
minimal, keeping the UAV within the desired velocity limits and on the
established path. We also incorporated in-flight velocity error correction.
For high-altitude tests, the wind velocity plays a major role as the main
external disturbance force, and the TG2M module includes wind perturbation
compensation: the guidance module adjusts the velocity commands in real
time during flight maneuvers, decreasing the position tracking error. For
the three test scenarios, the AVCL simulation environment includes normal
wind conditions during simulated flight, introducing small perturbations
into the UAV equations of motion. As shown in the obtained results, those
perturbations were compensated, allowing the UAV to follow the desired
trajectory with as little error as possible.
The Frenet-Serret formulas included for the UAV orientation also
proved a good approach for obtaining smooth UAV rotation rates
during flight. The use of simple trigonometric theory to obtain and define
the UAV orientation profile (yaw angle) is not convenient for complex
maneuvers. Splines sometimes require many knot-points for feasible
trajectory guidance; using these polynomial equations, the Frenet
approach allows smooth angle changes between knot-points, which could
not be obtained with the simple trigonometric angle calculation.
REFERENCES
1. JAA & Eurocontrol. A Concept for the European Regulations for Civil
Unmanned Aerial Vehicles. UAV Task-Force Final Report, 2004.
2. Coifman, B., McCord, M., Mishalani, M., Redmill, K., Surface
Transportation Surveillance from Unmanned Aerial Vehicles. Proc. of
the 83rd Annual Meeting of the Transportation Research Board, 2004.
3. Held, Jason M., Brown, Shaun and Sukkarieh, Salah. Systems
for Planetary Exploration. 5th NSSA Australian Space Science
Conference, pp. 212-222, ISBN: 0864593740. RMIT University,
Melbourne, Australia, 14-16 September 2005.
4. Gutiérrez, P., Barrientos, A., del Cerro, J., San Martin., R. Mission
Planning and Simulation of Unmanned Aerial Vehicles with a GIS-based
Framework; AIAA Guidance, Navigation and Control Conference and
Exhibit. Denver, EEUU, 2006.
5. Rysdyk, R. UAV path following for constant line-of-sight. In 2nd AIAA
Unmanned Unlimited Conf. and Workshop and Exhibit, San Diego,
CA, 2003.
6. Price, I.C, Evolving self organizing behavior for homogeneous and
heterogeneous swarms of UAVs and UCAVS. PhD Thesis, Graduate
School of Engineering and Management, Air Force Institute of
Technology (AU), Wright-Patterson AFB, OH, March 2006.
7. Herwitz, S. Developing Military COEs UAV applications. UAV
Applications Center – NASA Ames Research Center, Moffett Field,
CA. 2007.
8. Alison A. P., Bryan G., Suresh K. K., Adrian A. K., Henrik B. C.
and Eric N. J. Ongoing Development of an Autonomous Aerial
Reconnaissance System at Georgia Tech . UAV Laboratory, School of
Aerospace Engineering. Georgia Institute of Technology, 2003
9. Jaramillo-Botero A., Correa J.F., and Osorio I.J., Trajectory planning in
ROBOMOSP, Robotics and Automation Group GAR, Univ. Javeriana,
Cali, Colombia, Tech. Rep. GAR-TR-10-2004, 2004.
10. LaValle, S.M.: Planning Algorithms. Cambridge University Press
(2006)
11. Moitra, A., Mattheyses, R.M., Hoebel, L.J., Szczerba, R.J., Yamrom,
B.: Multivehicle reconnaissance route and sensor planning. IEEE
Transactions on Aerospace and Electronic Systems, 37 (2003).
12. Zheng, C., Li, L., Xu, F., Sun, F., Ding, M.: Evolutionary Route Planner
CHAPTER
6
Modelling and Control Prototyping of
Unmanned Helicopters
INTRODUCTION
The idea of using UAVs (Unmanned Aerial Vehicles) in civilian applications
has created new opportunities for companies dedicated to inspection,
surveillance or aerial photography, amongst others. Nevertheless, the main
drawbacks to using this kind of vehicle in civilian applications are the
enormous cost, lack of safety and emerging legislation. The reduction in
the cost of sensors such as Global Positioning System (GPS) receivers or
Citation: Jaime del-Cerro, Antonio Barrientos and Alexander Martinez (January 1st
2009). “Modelling and Control Prototyping of Unmanned Helicopters”, Aerial Vehi-
cles, Thanh Mung Lam, IntechOpen, DOI: 10.5772/6468.
Copyright: © 2009 The Author(s). Licensee IntechOpen. This chapter is distributed
under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike-3.0
License
Figure 1. Benzin trainer by Vario with GNC (Guidance Navigation & Control)
system onboard
A full control structure has been developed and tested using a
proposed fast design method based on Matlab-Simulink for simulation and
tuning, in order to demonstrate the usefulness of the developed tool. The
proposed architecture covers low-level control (servo actuator level),
attitude control, position, velocity and maneuvers. Thus there are commands
such as flying straight while maintaining a constant speed, or maneuvers
such as flying in circles.
The results have confirmed that a hybrid model with evolutionary
algorithms for identification provides outstanding results when modeling a
helicopter. Real-time simulation enables a fast prototyping method that
generates code directly for the onboard controllers, consequently reducing the
risk of bugs in their programming.
MODEL DESCRIPTION
Mathematical models of helicopter dynamics can be based on either
analytic or empirical equations. The former applies the physical laws
of aerodynamics, while the latter tries to capture the observed behavior with
simpler mathematical expressions.
The proposed model is a hybrid analytic-empirical model that harnesses
the advantages of both: the speed and simplicity of the empirical method,
as well as the fidelity and physical meaning of the analytic equations.
Main Rotor
It is modeled with analytic equations, derived from research on full-scale
helicopters [Heffley 1988], which can be used to model small helicopters
without fly-bars. The iterative algorithm computes thrust and induced wind
speed while taking into account the geometrical parameters, the speed of the
blades, and the commanded collective and roll and pitch cyclic.
One of the most important concepts to understand is the relationship
among torque, power and speed. This response arises from the induced
velocity of the air passing through the rotor disk.
(1)
(2)
The output of the block is the rotor’s thrust (F). R and Ω represent the
radius of the rotor and its angular speed respectively; ρ is the air density, and
a, b and c are geometrical blade factors. The relationship between thrust and
angular rates has been derived from observation; therefore the model is also
empirical.
(3)
(4)
(5)
When the helicopter flies close to the ground (distances less than 1.25
times the rotor diameter), the ground effect becomes very important.
This effect has been modeled using the parameter η defined in (6), where h is
the distance from the helicopter to the ground.
(6)
In these cases, the thrust is modified using (7), where Th’ is the resulting
thrust after correcting Th. The values of T0, T1 and T2 have been calculated
so as not to create a discontinuity when h is 1.25 times the rotor diameter.
(7)
On the other hand, the commanded roll and pitch cyclic have been taken
as references for rotations about the x and y axes of the helicopter frame.
In addition, a coupling effect has been included to simulate the real
coupling present in the helicopter.
Tail Rotor
The algorithm for estimating the thrust provided by the tail rotor is similar
to that of the main rotor, but only the pitch angle of the blades has been
considered as input. This signal is provided by the hardware controller in
charge of stabilizing the tail. A classical PI controller has been used to model
this controller, and the sensor has been considered to have no dynamics, as
Figure 9 shows.
Fuselage
As a first step, all the forces (main and tail rotor, gravity, friction and
aerodynamic resistance) have to be taken into account to compute the
movement of the fuselage of the helicopter. After that, accelerations and
velocities can be estimated.
Forces due to aerodynamic friction are estimated from the relative
velocity of the helicopter with respect to the wind by applying (8), where Ar is
the equivalent area and vw is the wind velocity.
(8)
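A sketch of this kind of quadratic friction term (the exact form of Eq. (8) is not reproduced above, so the drag coefficient and the numeric values here are our assumptions):

```python
import numpy as np

def fuselage_drag(v_heli, v_wind, rho=1.225, cd=0.5, ar=0.25):
    """Quadratic aerodynamic friction on the fuselage, computed from
    the helicopter velocity relative to the wind (assumed form;
    cd and ar are hypothetical values)."""
    v_rel = np.asarray(v_heli, float) - np.asarray(v_wind, float)
    return -0.5 * rho * cd * ar * np.linalg.norm(v_rel) * v_rel
```

The force opposes the relative velocity and grows with its square, which is the usual behavior of aerodynamic resistance.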
The resulting equations are summarized in (9).
(9)
where the prefix Fus denotes aerodynamic forces on the fuselage and T the
main rotor thrust. The prefix M denotes torque, where the suffix d refers to
the main rotor, t to the tail rotor, and v to the effect of the air on the tail of
the helicopter. The prefix G denotes the gravity components.
Once forces and torques have been calculated, accelerations, and therefore
velocities, can be obtained. Once the velocity referred to the helicopter
frame (vh) is calculated, a transformation using (10) is required in order to
obtain its absolute value.
(10)
Flybars are important elements to take into account in the dynamic
model. In this work they have been modeled using simple empirical
models, and the stabilizing effect they produce on the helicopter has
been simulated using (11). A more realistic model of flybars can be found
in [Mettler 2003].
(11)
The main steps of the identification process using GAs have been:
Parameter codification
The parameters are codified as real numbers, as this is a more intuitive format
than long binary strings. Different crossover operations have been studied:
• The new chromosome is a random combination of two chains.
Initial population
The initial population is created randomly with an initial set of parameters
of a stable model multiplied by random numbers selected by a Monte-Carlo
algorithm with a normal distribution with zero mean and standard deviation
of 1.
The genetic algorithm has been tested with different population sizes,
between thirty and sixty elements. Test results showed that the bigger
population did not lead to significantly better results, but increased the
computation time. Using 20 seconds of flight data and a population of 30
chromosomes, it took one hour to process 250 iterations of the genetic
algorithm on a Pentium IV processor. Empirical data suggests that 100
iterations are enough to find a sub-optimal set of parameters.
Fitness function
The fitness function takes into consideration the following state variables:
roll, pitch, yaw and speed. Each group of parameters (chromosome)
represents a model, which can be used to compute simulated flight data
from the recorded input commands. The difference between simulated and
real data is determined with a weighted least-squares method and used as the
fitness function of the genetic algorithm.
In order to reduce the propagation of error to the velocity
caused by the estimated parameters that influence attitude, the global
process has been decomposed into two steps: attitude and velocity parameter
identification.
The first only identifies the dynamic response of the helicopter attitude,
and the genetic algorithm modifies only the parameters related to the
attitude. Once the attitude-related parameters have been set, the second
process is executed, and only the velocity-related parameters are changed.
This algorithm uses the real attitude data instead of the model data so as not
to accumulate simulation errors. Using two separate processes for attitude
and velocity yields significantly better results than using one process to
identify all parameters at the same time.
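The loop described above can be condensed into a small real-coded GA. The following is a toy sketch of ours on a two-parameter linear model (the chapter identifies far more parameters against recorded flight data; the population size, operators and weights here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "flight data": response of a true model y = 2*u + 0.5
u = np.linspace(0.0, 1.0, 50)
y_real = 2.0 * u + 0.5

def fitness(chrom):
    """(Weighted) least-squares error between real and simulated data;
    lower is better. Weights are uniform in this toy example."""
    a, b = chrom
    return float(np.sum((y_real - (a * u + b)) ** 2))

# Initial population: a stable seed times N(0, 1)-perturbed factors
seed = np.array([1.0, 1.0])
population = [seed * (1.0 + rng.standard_normal(2)) for _ in range(30)]

for _ in range(100):
    population.sort(key=fitness)
    parents = population[:10]                   # elitist selection
    children = []
    for _ in range(len(population) - len(parents)):
        i, j = rng.choice(len(parents), size=2, replace=False)
        w = rng.random(2)                       # random combination of two chains
        child = w * parents[i] + (1.0 - w) * parents[j]
        child += 0.05 * rng.standard_normal(2)  # mutation
        children.append(child)
    population = parents + children

best = min(population, key=fitness)
```

The two-step attitude/velocity decomposition would simply run this loop twice: first over the attitude-related genes, and then, with those frozen, over the velocity-related ones.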
The parameters related to the attitude process are: TwstTr, IZ, HTr,
YuuVt, YuvVt, Corr1, Corr2, Tproll, Tppitch, Kroll, Kpitch, Kyaw, DTr,
DVt, YMaxVt, KGyro, KdGyro, Krp, Kpr, OffsetRoll, OffsetPitch and
OffsetYaw. The parameters related to the vehicle’s speed are TwstMr, KCol,
XuuFus, YvvFus, ZwwFus and OffsetCol.
The fitness functions are based on a weighted mean square error equation,
calculated by comparing the real and simulated responses for the different
variables (position, velocity, Euler angles and angular rates) applying a
weighting factor.
Data acquisition
The data acquisition procedure is shown in Figure 11. The helicopter is
manually piloted using a conventional RC transmitter. The pilot is in charge
of making the helicopter perform different maneuvers, trying to excite all
the parameters of the model. For the identification data, it is essential to have
translational flights (longitudinal and lateral) and vertical displacements.
All the commands provided by the pilot are gathered by a computer through
a USB port using a hardware signal converter, while the onboard computer
performs data fusion from the sensors and sends the attitude and velocity
estimates to the ground computer over a Wi-Fi link.
In this manner, the inputs and outputs of the model are stored in files to
perform the parameter identification.
IDENTIFICATION RESULTS
The identification algorithm was executed several times with different
initial populations and, as expected, the sub-optimal parameters differ.
Some of these differences can be explained by the effect of local minima.
Although the mutation algorithm reduces the probability of staying inside a
local minimum, sometimes the mutation factor is not big enough to escape
from it.
CONTROL PROTOTYPING
Mission control
At this level, the operator designs the mission of the helicopter using GIS
(Geographic Information System) software.
These tools allow visualizing a 3D map of the area using Digital
Terrain Model files and describing a mission using a specific language
(Gutiérrez et al., 2006). The output of this level is a list of maneuvers
(parametric commands) to be performed by the helicopter.
Maneuver Control
This control level is in charge of performing parametric maneuvers such as
Velocity Control
The velocity control is in charge of making the helicopter maintain the
velocity that the maneuver control level computes. The velocity can be referred
to an ENU (East-North-Up) frame or defined as a magnitude and a direction.
Therefore, a coordinate transformation is required to obtain the lateral and
frontal velocities (vl, vf) that are used to provide the roll and pitch references.
This transformation is very sensitive to the yaw angle estimation.
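A minimal sketch of that transformation (our illustration; the yaw convention, measured from East toward North, is an assumption):

```python
import numpy as np

def enu_to_frontal_lateral(v_east, v_north, yaw):
    """Rotate a horizontal ENU velocity into the helicopter's frontal
    (along-heading) and lateral (across-heading) components."""
    c, s = np.cos(yaw), np.sin(yaw)
    v_f = c * v_east + s * v_north    # frontal -> pitch reference
    v_l = -s * v_east + c * v_north   # lateral -> roll reference
    return v_f, v_l
```

Because sine and cosine of the yaw multiply both components, any yaw estimation error rotates the entire velocity reference, which is why this transformation is so sensitive to the yaw angle.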
The second issue arises from the solution of the first one: when the
operator performs a manual-automatic switch using channel nine of
the transmitter, the integral action needs to be reset in order to reduce the
effect of the error integrated while manual mode was active.
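The reset logic can be sketched as follows (a hypothetical PI fragment of ours, not the chapter's controller; the gains are arbitrary):

```python
class VelocityPI:
    """PI controller whose integral term is cleared on a
    manual -> automatic switch (e.g. via an emitter channel), so error
    accumulated during manual flight does not kick the servos."""
    def __init__(self, kp=1.0, ki=0.1):
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.automatic = False

    def set_mode(self, automatic):
        if automatic and not self.automatic:
            self.integral = 0.0          # reset on entering automatic
        self.automatic = automatic

    def update(self, error, dt):
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral
```

Without the reset, the integral accumulated while the pilot was flying would produce a transient as soon as automatic mode engaged.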
Attitude Control
The attitude control is in charge of providing commands to the servos (or
hardware controllers). Several control techniques were tested at this level;
fuzzy controllers turned out to have the best performance. Figure
20 shows the proposed architecture, and Figure 21 the control surfaces used
by the fuzzy controller.
As can be observed, the yaw control has been isolated from the multivariable
roll-pitch controller, with results of reasonable quality.
Figure 23. Roll and pitch Control results during real flight
CONTROL RESULTS
Figure 23 shows the results obtained for attitude control during an eighty-
second real flight with the Vario helicopter.
Red lines show the real attitude of the helicopter, while green ones are
the references provided by the velocity control level.
The dotted blue line indicates when the helicopter is in manual mode (value of
1) or automatic mode (value of 0). It can be observed that, when the helicopter is in
manual mode, the red line does not follow the references of the automatic
control, because the pilot is controlling the helicopter. Figure 24 shows the
results obtained for velocity control. The same criteria as in the previous graphic
have been used: the red line indicates the real helicopter response and the green
one the references from the maneuver control level. The dotted blue line shows
when the helicopter flies autonomously or is remotely piloted. Small
delays can be observed.
an automatic coding tool. Future work will focus on the following three
objectives:
• To develop a model of the engine in order to improve the model
behavior for aggressive flight maneuvers.
• To compare this model with others in the literature in a wide
range of test cases.
• To validate the proposed model with different helicopters.
REFERENCES
1. Castillo P, Lozano R, Dzul A. (2005) Modelling and Control of
Mini-Flying Machines. Springer Verlag London Limited 2005.
ISBN:1852339578.
2. Del-Cerro J, Barrientos A, Artieda J, Lillo E, Gutiérrez P, San- Martín
R, (2006) Embedded Control System Architecture applied to an
Unmanned Aerial Vehicle International Conference on Mechatronics.
Budapest-Hungary
3. Gavrilets, V., Frazzoli, E., Mettler, B. (2001) Aggressive Maneuvering of
Small Autonomous Helicopters: A Human-Centered Approach. The
International Journal of Robotics Research, Vol. 20, No. 10, October
2001, pp. 795-807.
4. Godbole et al (2000) Active Multi-Model Control for Dynamic
maneuver Optimization of Unmanned Air Vehicles. Proceedings of the
2000 IEEE International conference on Robotics and Automation. San
Francisco, CA. April 2000.
5. Gutiérrez, P., Barrientos, A., del Cerro, J., San Martin., R. (2006)
Mission Planning and Simulation of Unmanned Aerial Vehicles with
a GIS-based Framework; AIAA Guidance, Navigation and Control
Conference and Exhibit. Denver, EEUU, 2006.
6. Heffley, R. (1988). Minimum complexity helicopter simulation math
model. NASA Center for AeroSpace Information, 88N29819.
7. La Civita, M., Messner, W., Kanade, T. Modeling of Small-Scale
Helicopters with Integrated First-Principles and System-Identification
Techniques. American Helicopter Society 58th Annual Forum,
Montreal, Canada, June 11-13, 2002.
8. Mahony, R., Lozano, R. (2000) Exact Path Tracking Control for an
Autonomous Helicopter in Hover Manoeuvres. Proceedings of the
2000 IEEE International Conference on Robotics & Automation, San
Francisco, CA, April 2000, pp. 1245-1250.
9. Mettler, B.F., Tischler, M.B. and Kanade, T. (2002) System Identification
Modeling of a Model-Scale Helicopter. Journal of the American
Helicopter Society, 2002, 47/1: pp. 50-63.
10. Mettler B, Kanade T, Tischler M. (2003) System Identification Modeling
of a Model-Scale Helicopter. Internal Report CMU-RI-TR-00-03.
CHAPTER
7
Contour Based Path Planning with
B-Spline Trajectory Generation for
Unmanned Aerial Vehicles (UAVs) over
Hostile Terrain
ABSTRACT
This research focuses on trajectory generation algorithms that take into
account the stealthiness of autonomous UAVs; generating stealthy paths
Citation: E. Kan, M. Lim, S. Yeo, J. Ho and Z. Shao, “Contour Based Path Planning
with B-Spline Trajectory Generation for Unmanned Aerial Vehicles (UAVs) over Hos-
tile Terrain,” Journal of Intelligent Learning Systems and Applications, Vol. 3 No. 3,
2011, pp. 122-130. doi: 10.4236/jilsa.2011.33014.
Copyright: © 2011 by authors and Scientific Research Publishing Inc. This work is li-
censed under the Creative Commons Attribution International License (CC BY). http://
creativecommons.org/licenses/by/4.0
INTRODUCTION
Unmanned aerial vehicles (UAVs) are increasingly being used in real-world
applications [1-3]. They are typically surveillance and reconnaissance
vehicles operated remotely by a human operator from a ground control
station; they have no on-board guidance capabilities that give them some
level of autonomy, for example, to re-plan a trajectory in the event of a
change in the environment or mission. With such rudimentary capabilities,
only simple tasks can be accomplished and the operation is also limited
to simple or uncomplicated situations, typically in well-characterized
environments. It is useful to endow UAVs with more advanced guidance
capabilities, in particular capabilities that increase the vehicle’s autonomy
to allow for more complex missions or tasks. Currently operating UAVs
use rudimentary guidance technologies, such as following pre-planned or
manually provided waypoints. Over the years, advances in software and
computing technology have fuelled the development of a variety of new
guidance methodologies for UAVs. The availability of various guidance
technologies and understanding of operational scenarios create opportunities
towards refining and implementing such advanced guidance concepts.
The basic difficulties include a partially known and changing environment;
extraneous factors such as threats; evolving mission elements; and tight timing
and positioning requirements. Moreover, these must be tackled while, at
the same time, explicitly accounting for the actual vehicle manoeuvring
the same time, explicitly accounting for the actual vehicle manoeuvring
capabilities, including dynamics and flight-envelope constraints. The term
flight envelope refers to the parameters within which an aerial vehicle may
be safely flown under varying, though expected, wind speed, wing loading,
wind shear, visibility and other flight conditions without resorting to
extreme control measures such as abnormal spin, or stall recovery, or crash
landing. These aerial vehicles are usually mobilized to carry out critical
missions in high-risk environments, particularly in situations where it may
be hazardous for human operators. However, such missions demand a high
level of stealth, which has a direct implication on the safety and success of
the mission. Therefore it is important to minimize the risk of detection or
more specifically, the probability of detection of the UAVs by enemy radars.
There has been extensive research in the area of path planning, especially
in artificial intelligence and optimization, with most of it restricted to two-
dimensional (2D) paths [4,5]. Different conventional approaches have been
developed to solve the path planning problem, such as cell decomposition,
Dijkstra’s algorithm, road maps and potential fields [6-9]. Sleumer and
Tschichold [6] present an algorithm for the automatic generation of a map
that describes the operating environment of a building, taking as input line
segments representing the walls. The algorithm is based on
the exact cell decomposition approach and generates a connectivity graph
based on cell indices. However, for successful implementation of exact
cell decomposition, it is required that the geometry of each cell be simple.
Moreover, it is important to test the adjacency of two cells to find a path
crossing the portion of the boundary shared by them. Both exact and
approximate cell decomposition methods are accurate, yet may be
computationally expensive when used in a terrain environment, since
numerous obstacles at varying elevations can be found; they are only
effective when the environment contains a limited number of obstacles.
Another technique used to effectively represent the alternative paths
generated by a path planner is B-splines. B-splines allow a parametric
curved path to be described by only a few control points, rather than by the
entire segmented curve, which could comprise as many as thousands of sections,
depending on the length and curvature of the path. Recent methods include
modelling the trajectory using splines based on elastic band theory [10,11],
and interactive manipulation of control points for spline trajectory generation
[12,13]. A series of cubic splines was employed in [14] to connect the straight
line segments in a near-optimal manner, whereas the algorithm presented in
[15] yields extremal trajectories that transition between straight-line path
segments smoothly in a time-optimal fashion.
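A minimal sketch of that compactness, assuming the common uniform cubic B-spline form (our illustration, not the paper's planner):

```python
import numpy as np

# Uniform cubic B-spline basis matrix (coefficients of the four
# cubic blending functions)
M = np.array([[ 1.0,  4.0,  1.0, 0.0],
              [-3.0,  0.0,  3.0, 0.0],
              [ 3.0, -6.0,  3.0, 0.0],
              [-1.0,  3.0, -3.0, 1.0]]) / 6.0

def bspline_path(ctrl, samples_per_segment=20):
    """Evaluate a uniform cubic B-spline: every four consecutive
    control points define one smooth curve segment."""
    ctrl = np.asarray(ctrl, float)
    t = np.linspace(0.0, 1.0, samples_per_segment)
    T = np.stack([np.ones_like(t), t, t**2, t**3], axis=1)
    segments = [T @ M @ ctrl[i:i + 4] for i in range(len(ctrl) - 3)]
    return np.vstack(segments)

# Six control points describe a 3-segment path in the plane
ctrl = [[0, 0], [2, 4], [5, 6], [8, 4], [10, 1], [12, 0]]
path = bspline_path(ctrl)
```

Because the blending functions are non-negative and sum to one, every path sample stays inside the convex hull of its four control points, a convenient safety property for terrain-hugging routes.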
Previous efforts [16,17] have concentrated on path planning over
hostile radar zones based on the amount of energy received by enemy radar at
several locations. In this paper, we investigate the number and placement
of control points within a hostile region, taking into account path optimality
and the complexity of the terrain. We propose an efficient
algorithm to identify the waypoints based on the topographic information
of the enemy terrain. The waypoint generation process is based on the user-
specified threshold flying altitude. The threshold altitude to avoid radar
detection and terrain collisions is determined based on the knowledge of the
enemy terrain. The navigational path between the waypoints is considered
when minimizing the exposure to a radar source. Additionally, the generated
trajectory is of minimal length, subject to the stealthy constraint and satisfies
the aircraft’s dynamic constraints.
Details on the formulation of the problem are presented in this paper.
We model this problem as described in the following section and adopt
a heuristic-based approach to solve it. The rest of this paper is organized
as follows: Section 2 gives a description of the problem model used for
measuring stealth in this work and illustrates the calculation of the radar
risks. Section 3 demonstrates the details of the problem formulation and
Section 4 presents the solution approach of the route planner, substantiated
with simulated results. The conclusion and future work are presented in
Section 5.
MODELLING
In this section, the model we used throughout this work is presented. The
integrated model of the UAV and radar has the following features. The Radar
Cross Section (RCS) of the UAV depends on both the aspect and bank angles
whereas the turn rate of the UAV is determined by its bank angle. Hence, the
RCS and aircraft dynamics are coupled through the aspect and bank angles.
(1)
where x and y are the Cartesian coordinates of the aircraft, ϕ is the
heading angle as shown in Figure 1, v is the constant speed, u is the input
signal and the acceleration is normal to the flight path vector, followed by
U is the maximum allowable lateral acceleration. Figures 2 and 3 show the
dependence of RCS on aspect and bank angles.
Contour Based Path Planning with B-Spline Trajectory Generation ... 145
(2)
(3)
where g is the acceleration of gravity. We model the RCS of the aircraft
as function of the aspect angle λ, the elevation angle φ, and the bank angle
μ, so that
(4)
The RCS obtained from Equation (4) is used as an estimate value in
Equation (6) for measuring stealth in this work. As an example, real aircraft
RCS measurements as functions of aspect and bank angles are shown in
Figures 3 and 4 [18]. The following section illustrates the calculation of the
cost associated to radar risk that will be undertaken by the UAV.
Radar Model
The radar model in this work will be presented in terms of its inputs (aircraft
range and RCS) and output (an estimate of the probability that an aircraft
can be tracked for an interval of time). For the sake of simplicity, we assume
that the radar is located at (x, y, z) in the navigational space. Let
(5)
it at fixed observation times. To generate a stealthy path for the UAV, we have
to take into account the energy reflected to enemy radar site as the enemy
radar tracks the location, speed, and direction of an aircraft. The range at
which radar can detect an object is related to the power transmitted by the
radar, the fidelity of the radar antenna, the wavelength of the radar signal,
and the RCS of the aircraft [19]. The RCS is the area a target would have
to occupy to produce the amount of reflected power that is detected back at
the radar. RCS is integral to the development of radar stealth technology,
particularly in applications involving aircraft. For example, a stealth aircraft
which is designed to be undetectable will have design features that give it a
low RCS. Low RCS offers advantages in detection range reduction as well
as increasing the effectiveness of decoys against radar-seeking threats.
The size of a target’s image on radar is measured by the RCS, which is
denoted by the symbol σ and expressed in square meters. The azimuth and
range detected by the radar serve as the inputs to a tracking system, typically
based on one or more Kalman filters.
The purpose of the tracking system is to provide a predicted aircraft
position and velocity so that a decision can be made to launch a missile and
guide it to intercept. RCS modelling and reduction is crucial in designing
stealth weapon systems. We make use of the Finite-Difference Time-Domain
(FDTD) technique in RCS modelling. The numerical technique is
general, geometry-free, and can be applied to an arbitrary target, within the
limitations of computer memory and speed. Figure 4 shows the geometric
surface for the RCS simulation with the FDTD algorithm.
The FDTD-based package was supplied in [20] for the RCS prediction
of various simple canonical targets. The complete RCS signature of a target
is described by the frequency response of the target illuminated by plane
waves from all possible physical angles. Realistic targets are frequently very
large and are often largely made up of highly conductive materials.
The FDTD-based RCS prediction procedure is as follows:
• The target, located in the computational space, is illuminated by a
Gaussian-pulse plane wave from a given direction;
• The scattered fields are simulated inside the computational
volume until all transients dissipate;
• The time-domain far fields in the spherical coordinates are then
extrapolated by using equivalent current via the near-field to far-
field routine, over a closed, virtual surface surrounding the target.
(6)
where S is the signal energy received by the radar, Pavg is the average
power transmitted by the radar, G is the gain of the radar antenna, σ is the
radar cross section of the target, Ae is the effective area of the radar antenna,
tot is the time the radar antenna is pointed at the target, and R is the range to
the target. Every radar has a minimum signal energy that it can detect, Smin.
This minimum signal energy determines the maximum range (Rmax) at which
the radar can detect a given target.
(7)
must be to detect targets at even short ranges. Hence the cost associated
to radar risk is based on a UAV’s exposure to enemy radar. In this work,
the cost of radar risk involves the amount of energy received by enemy
radar at several locations based on the distance between the UAV and the
radar. We assume that the UAV’s radar signature is uniform in all directions
and is proportional to 1/R4 (where R is the distance between the UAV and
the enemy radar); the UAVs are assigned simplified flight dynamics and
performance constraints in two-dimensions; and all tracking radars are given
simplified detection properties. Based on the assumptions stated above, we
classify the navigational space into different levels of risk as described in
the following chapter. The next step is to compute a stealthy path, steering
the UAV clear and around known radar locations. The generated path should
carry minimal radar risks, and adhere to physical and motion constraints.
Also, the distance between the UAV and the radars should be maximized in
order to minimize the energy received at the radar.
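The 1/R⁴ dependence above follows directly from the radar range equation. The sketch below uses the textbook form of Equations (6) and (7); the chapter's exact constants are not reproduced in this copy, and all parameter values in the example are illustrative.

```python
import math

def received_energy(p_avg, g, sigma, a_e, t_ot, r):
    """Signal energy S at the radar: proportional to 1/R^4 (Equation (6) form)."""
    return (p_avg * g * sigma * a_e * t_ot) / ((4 * math.pi) ** 2 * r ** 4)

def max_detection_range(p_avg, g, sigma, a_e, t_ot, s_min):
    """Range at which S falls to the detection floor S_min (Equation (7) form)."""
    return ((p_avg * g * sigma * a_e * t_ot) /
            ((4 * math.pi) ** 2 * s_min)) ** 0.25
```

Doubling the range cuts the received energy by a factor of sixteen, which is why maximizing UAV-to-radar distance dominates the risk cost.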
PROBLEM FORMULATION
To describe the problem, consider the contour based navigational space
shown in Figure 7. The objective is to find a path which minimizes the
UAV’s exposure to radar sites (small triangles) from the current UAV
position (circle) to the target location (star). In the present study, it is
assumed that the hostile radar zones are known beforehand. The contours
are used to denote elevation and depth on maps. From these contours, a
sense of the general terrain can be determined. The details of the radar zone
can be acquired using satellite data or from surveillance data. The start point
and the end point of the flight are known and it is required to find a feasible
path connecting these two points. The threshold altitude value is determined
by the user, who is assumed to have knowledge of the enemy terrain. By flying at
a user-specified threshold altitude value, the aircraft can take advantage of
terrain masking and avoid detection by enemy radars. The aircraft works by
transmitting a radar signal towards the ground area and the signal returns
can then be analysed to see how the terrain ahead varies, which can then be
used by the aircraft’s autopilot to specify the threshold altitude value. The
specified threshold allows the aircraft to fly at reasonably constant height
above the earth so as to avoid radar detection. In this work, the path
of the UAV is smoothed using cubic B-splines, which are controlled by
manipulating a number of control points. The generated path should carry
minimal radar risks, and adhere to physical and motion constraints.
(8)
where v is the constant speed of the aircraft and l is the path length.
A closed-form solution to this problem was obtained using the calculus of
variations, with the limitation that the aircraft traverses an angle with respect
to the radar of less than 60˚. Beyond this limit, the optimal path length is
infinite but the cost remains finite. The objective cost function Equation (8),
is augmented to include the rest of the radar at the navigational space, giving
(9)
The cost associated with radar risk involves the amount of energy αi
received by the enemy radar based on the distance from the UAV to the radar.
We first look for appropriate waypoints that the UAV should traverse in
planning a path which minimizes the radar exposure.
GENERATION OF NODES
The procedure starts by requesting from the user the detection altitude, or
maximum elevation desired, which assumes knowledge of the enemy terrain.
Nodes below the detection altitude are extracted and used as the basis for
generating straight-line segments. The nodes above the detection altitude
are discarded, given that at those elevations the UAVs could be detected.
An appropriate value for the parameter is based on the user’s knowledge of: the
terrain; the operation of the UAVs; the purpose, logistics, and safety of the
mission. This means that the user’s input is a determinant in the quality of
the resulting path. As shown in Figure 8, the initial nodes can be searched by
specifying the detection altitude for the UAV. The next step is to make use
of a heuristic search algorithm to compute a stealthy path, steering the UAV
clear and around known radar locations.
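A minimal sketch of this node-extraction step, assuming the terrain is given as a grid of elevations (the grid data layout is an assumption of this sketch; the chapter works from contour maps):

```python
def generate_nodes(elevation, threshold):
    """Keep grid cells whose terrain elevation is below the user-specified
    detection altitude; higher cells are discarded, since a UAV flying
    there could be detected by enemy radar."""
    return [(i, j)
            for i, row in enumerate(elevation)
            for j, z in enumerate(row)
            if z < threshold]
```

The surviving cells become the candidate waypoints over which the heuristic search of the next section operates.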
IMPLEMENTATION
Generation of Nodes
This search algorithm works by starting at the UAV’s present position (red
circle). It adds the heuristic function value and determines the expansion
order of nodes. We make use of a distance-plus-cost heuristic function, f(n)
to determine the order in which the search visits nodes in the space. We
assume that the UAV’s radar signature is uniform in all directions and is
proportional to 1/R4 where R is the aircraft range. We modify the evaluation
function by adding the cost associated with radar risk to the f(n) as follows:
(10)
where w is a value in the range [0, 1]; g(n) is the path-cost function, which
gives the cost between the current node and node n; h(n) is the distance
estimated from node n to the goal node; ci is relative to Ri;
and m is the number of enemy radars located in the navigational space. The
h(n) term plays a more significant role in the search process when the w value
is smaller, and the number of vertices to be explored decreases. The choice of w
between 0 and 1 gives the flexibility to place weight on exposure to threats or
fuel expenditure depending on the particular mission scenario. For example,
when w = 0, the search process would proceed from the starting location to
the target location by exploring the least number of vertices. However, the
generated path is not stealthy and may cause the UAV to enter a high radar risk
region. A w value closer to 0 would result in the shortest paths, with little
regard for the exposure to enemy radar, while a value closer to 1 would result
in paths that avoid threat exposure at the expense of longer path length. For
this case, the cost weighting parameter w was chosen to give a reasonable
tradeoff between proximity of the path to threats and the path length. The
evaluation function (10) is tested empirically and compared in terms of
the number of nodes visited. During the test, the cost weighting parameter
w is varied from 0.0 to 1.0 and the test result is demonstrated in Table 1.
The experimentally determined value for w lies in the range [0.1, 0.3], in which
the generated path is optimal and computationally inexpensive, which is in
line with our algorithm objectives. The generated path is neither the safest
possible path nor the shortest possible path, but represents a compromise
between the two objectives.
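One plausible reading of Equation (10) is sketched below, with the radar term modelled by the 1/R⁴ signature assumed earlier; the exact grouping of terms in (10) is not reproduced in this copy, so the blend of path-plus-risk cost against the heuristic is an assumption.

```python
import math

def evaluation(n, goal, radars, w, g):
    """Hedged reading of Equation (10): f(n) blends the accumulated
    path-plus-risk cost against the remaining-distance heuristic h(n),
    with the weight w in [0, 1] trading stealth against path length."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # radar risk term, assumed ~ 1/R^4 per radar (guarded near R = 0)
    risk = sum(1.0 / max(dist(n, r), 1e-6) ** 4 for r in radars)
    h = dist(n, goal)
    return w * (g + risk) + (1 - w) * h
```

With w = 0 the function reduces to the heuristic alone (a greedy, non-stealthy search); with w = 1 only path cost and radar exposure matter, matching the behaviour the text describes.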
The algorithm traverses various paths from the starting location to the
goal. At the same time, it compares the risk cost of all the vertices. Starting
with the initial node, it maintains a priority queue of nodes to be traversed,
known as the openQueue (see Listing 1). The lower f(n) for a given node n,
the higher its priority. At each step of the algorithm, the node with the lowest
f(n) value is removed from the queue, the f and h values of its neighbors are
updated accordingly, and these neighbors with the least radar cost are added
to the openQueue. The conditions of adding a node to the openQueue are as
follows:
• The node is not in high radar risk region;
• The node is movable and does not exist in the openQueue;
• The distance between the starting point and the current node is
shorter than the earlier estimated distance;
• The distance between the starting point and the current node does
not violate the pre-specified detection altitude.
If the four conditions are satisfied, then the current node will be added
to the openQueue. The algorithm continues until a goal node has a lower f
value than any node in the queue. If the actual optimal path is desired, the
algorithm may also update each neighbor with its immediate predecessor in
the best path found so far; this information can then be used to reconstruct the
path by working backwards from the goal node. The output of this stage is a
set of straight line segments connecting a subset of the nodes resulting from
the waypoint generation process, as long as the link does not violate the pre-
specified detection altitude. The path planning algorithm is implemented in
a MATLAB environment and a sample of the result can be seen in Figure 9.
The pseudo-code for the algorithm is as shown in Listing 1.
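Listing 1 itself is not reproduced in this copy, so the following is a hedged reconstruction of the openQueue search from the description above; the 4-connected grid, unit step costs, and high-risk radius are illustrative assumptions, and the "not already in the openQueue" condition is handled lazily through the improved-cost test.

```python
import heapq
import math

def plan_path(start, goal, nodes, radars, high_risk_radius, w=0.2):
    """Priority-queue search over admissible grid cells (already filtered
    by detection altitude), expanding the node with the lowest f first."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    def f(n, g):
        risk = sum(1.0 / max(dist(n, r), 1e-6) ** 4 for r in radars)
        return w * (g + risk) + (1 - w) * dist(n, goal)
    g_cost = {start: 0.0}
    parent = {start: None}
    open_queue = [(f(start, 0.0), start)]
    while open_queue:
        _, node = heapq.heappop(open_queue)
        if node == goal:  # reconstruct by walking predecessors backwards
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        x, y = node
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            ng = g_cost[node] + 1.0
            if (all(dist(nb, r) > high_risk_radius for r in radars)  # not high risk
                    and nb in nodes                                  # movable/admissible
                    and ng < g_cost.get(nb, float("inf"))):          # improves estimate
                g_cost[nb] = ng
                parent[nb] = node
                heapq.heappush(open_queue, (f(nb, ng), nb))
    return None
```

On a small grid with one radar blocking the direct route, the search detours around the high-risk disc and still reaches the goal.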
The next step is to modify the generated path by using a series of cubic
splines to optimize the path by moving away from high radar risks and
improving the navigability of the path for the UAV.
Figure 9. Generated path with least radar risk based on user-specified detection
altitude.
Trajectory Generation
As a first step, the path is parameterized in both x and y directions. An initial
spline knot is fixed at the UAV starting location, while the final spline knot
is fixed at the first edge of the path. We begin by splitting each edge of the
path into three segments, each one described by a different B-spline curve
[23]. In this work, we calculate the radar risk cost at several locations along
each edge and take the length of the edge into account. The radar risk cost
was calculated at three points along each edge: Li/6, Li/2, and 5Li/6, where
Li is the length of edge i. The radar risk cost for each edge may be calculated
by
(11)
where N is the number of enemy radars; M is the number of discrete
points; and d1/6,i,j is the distance from the 1/6th point on the ith edge to the
jth enemy radar. The goal now is to find the (x, y) locations which minimize
the exposure to the enemy radar for the middle knots. We apply the graph
search algorithm to look for the middle knots with minimal risk cost for each
edge of the path. We determine the interpolating curve based on tangent
vectors and curvature vectors for each pair of points. The interpolation
problem is solved by assuming p = 3, which produces the C2 cubic spline
[24]. The parameters ûk are used to determine the knot vector u as
(12)
Since the number of control points, m + 1, and the number of the knots,
nknot + 1, are related by nknot = m + 4 and, as can be easily deduced from (12),
nknot = n + 6, the unknown variables pj number n + 3. Once the control points pj, j
= 0, …, n + 2, are known, and given the knot vector u, the B-spline is completely
defined. Once this step has been completed, the procedure switches to the
algorithm (as shown in Listing 2) to determine how to connect the knots.
Note that some experimentally defined knots are added by the algorithm in
order to smoothen the spline curve.
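The three-point sampling described above can be sketched as follows; the exact weighting in Equation (11) is not reproduced in this copy, so averaging the 1/R⁴ samples and scaling by the edge length is an assumption of this sketch.

```python
import math

def edge_risk(p, q, radars):
    """Radar risk of edge p -> q, sampled at L/6, L/2 and 5L/6 along the
    edge and weighted by the edge length Li (one reading of Equation (11))."""
    length = math.hypot(q[0] - p[0], q[1] - p[1])
    risk = 0.0
    for s in (1.0 / 6.0, 0.5, 5.0 / 6.0):
        # sample point at fraction s of the way from p to q
        x = p[0] + s * (q[0] - p[0])
        y = p[1] + s * (q[1] - p[1])
        for rx, ry in radars:
            d = max(math.hypot(x - rx, y - ry), 1e-6)  # guard near R = 0
            risk += 1.0 / d ** 4
    return length * risk / 3.0
```

Edges passing close to a radar therefore receive a much larger cost than distant edges of the same length, which is what drives the middle knots away from high-risk regions.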
for t ∈ [0, 1]
    get_point(double u, const GLVector &P0, const GLVector &P1,
              const GLVector &P2, const GLVector &P3)
        /* return coordinate (x, y) for a particular t */
    add_node(const GLVector &node)
        /* add small nodes in between the control points;
           100 small nodes are used in this work */
end for
end while
render_spline()  /* display generated curve */
return the spline path
Figure 10. Optimal path based on heuristic search algorithm and user-specified
threshold altitude.
ACKNOWLEDGEMENTS
The authors gratefully acknowledge the funding support from Temasek
Defence Systems Institute, Singapore.
REFERENCES
1. A. Agarwal, M. H. Lim, M. J. Er and T. N. Nguyen, “Rectilinear
Workspace Partitioning for Parallel Coverage Using Multiple UAVs,”
Advanced Robotics, Vol. 21, No. 1, 2007, pp. 105-120.
2. K. K. Lim, Y. S. Ong, M. H. Lim and A. Agarwal, “Hybrid Ant Colony
Algorithm for Path Planning in Sparse Graphs,” Soft Computing
Journal, Vol. 12, No. 10, 2008, pp. 981-994.
3. C. W. Yeu, M. H. Lim, G. Huang, A. Agarwal and Y. S. Ong, “A
New Machine Learning Paradigm for Terrain Reconstruction,” IEEE
Geoscience and Remote Sensing Letters, Vol. 3, No. 3, 2006, pp. 981-
994. doi:10.1109/LGRS.2006.873687
4. L. Davis, “Warp Speed: Path Planning for Star Trek: Armada,” AAAI
Spring Symposium, AAAI Press, Menlo Park, 2000.
5. E. Frazzoli, M. Dahleh and E. Feron, “Real-Time Motion Planning
for Agile Autonomous Vehicles,” Journal of Guidance, Control and
Dynamics, Vol. 25, No. 1, 2002, pp. 116-129. doi:10.2514/2.4856
6. N. H. Sleumer and N. Tschichold-Gürman, “Exact Cell Decomposition
of Arrangements Used for Path Planning in Robotics,” Technical
Reports 329, ETH Zürich, Institute of Theoretical Computer Science,
1999.
7. J. C. Latombe, “Robot Motion Planning,” Kluwer Academic Publishers,
Boston, 1991.
8. T. Lozano-Pérez and M. A. Wesley, “An Algorithm for Planning
Collision-Free Paths among Polyhedral Obstacles,” Communications
of the ACM, Vol. 22, No. 10, 1979, pp. 565-570.
9. M. Jun, “Path Planning for Unmanned Aerial Vehicles in Uncertain
and Adversarial Environments,” In: S. Butenko, R. Murphey and
P. Pardalos, Eds., Cooperative Control: Models, Applications and
Algorithms, Kluwer, 2003, pp. 95-111.
10. J. Hilgert, K. Hirsch, T. Bertram and M. Hiller, “Emergency Path
Planning for Autonomous Vehicles Using Elastic Band Theory,”
Advanced Intelligent Mechatronics, Vol. 2, 2003, pp. 1390-1395.
11. T. Sattel and T. Brandt, “Ground Vehicle Guidance Along Collision-
Free Trajectories Using Elastic Bands,” Proceedings of 2005 American
8
Trajectory Tracking of Quadrotor Aerial
Robot Using Improved Dynamic
Inversion Method
ABSTRACT
This paper presents trajectory tracking control work concerning a quadrotor
aerial robot with a rigid cross structure. The quadrotor consists of four
propellers in two pairs, one pair rotating clockwise and the other anticlockwise.
A nonlinear dynamic model of the quadrotor is provided, and a controller
based on improved dynamic inversion is synthesized for the purpose of
stabilization and trajectory tracking. The proposed control strategy has been
tested in simulation and compensates well for model inaccuracy.
INTRODUCTION
Since the beginning of the 20th century, many research efforts have been made to
create effective flying machines with improved performance and capabilities.
The quadrotor aerial robot is emerging as a kind of popular platform for
unmanned aerial vehicle (UAV) research due to its simple structure,
hovering ability and vertical take-off and landing (VTOL) capability. It is
especially suitable for reconnaissance and surveillance missions in complex
and tough environments. Because of the quadrotor’s complex dynamics and
parameter uncertainty, it is usually chosen as a testbed for validating
advanced nonlinear control algorithms.
The quadrotor is described as an aerial robot in [1,2]. The core problems
of the quadrotor are attitude control and trajectory tracking control. When the
quadrotor is treated as a testbed for control algorithms, the flight control
unit and its control code take priority over its structure. With the emergence
of open-source projects for autonomous control, it has become more convenient
to carry advanced control algorithms from modeling and simulation through to
validation tests, which is also a great help to research enthusiasts and aerial
robot hobbyists. In [3], differential flatness control and nonlinear inverse
control are proposed to handle nonlinear trajectory control for systems with
the differential flatness property. In [4], a real-time nonlinear optimal
control technique is employed to solve the resulting challenging problem
considering the full nonlinear dynamics, without gain-scheduling techniques
or timescale separations, which leads to a
closed-form suboptimal control law. In [5,6], the back-stepping procedure is
synthesized for the purposes of stabilization and trajectory tracking, and is
generally robust, to some extent, when applied to systems with parameter
uncertainty and unmodeled dynamics. Various types of aerial robot UAVs
based on the quadrotor concept are put forward in [5-8], combining the advantages
of fixed-wing aircraft and rotary-wing helicopters to enhance the VTOL ability
and flying range. In general, the dynamic inversion method is well suited
to various types of aerial robots because it starts from the original model. In
[9], dynamic inversion control with zero dynamics for nonlinear affine
model systems is proposed. In [10], a hierarchical flight control is synthesized
for a rotorcraft-based UAV. In the present paper, a hierarchical structure of the
improved dynamic inversion method for quadrotor trajectory tracking is presented.
QUADROTOR MECHANISM
The typical quadrotor mechanism and coordinate system are briefly shown
in Figure 1, where the four rotors form a cross structure, providing the three
required moments (roll, pitch and yaw) and the lifting force. To describe the
flight condition of the quadrotor exactly, we define a flat-earth
frame E and a rigid body frame B according to the right-hand rule.
The aerodynamic forces and moments of a propeller depend on various
parameters, such as the angle of attack, aerofoil and inflow ratio. We
assume that the aerodynamic forces Fi and moments Ti of the four propellers
are proportional to the square of the angular velocity, which gives:
(1)
where i = 1, 2, 3, 4, and kt, kf are respectively the aerodynamic force and
anti-torque coefficients. Since the forces and moments are derived by modifying
the speed of each rotor, the lifting force and three moments can be
given as follows:
Lifting force (the sum of four rotor forces):
(2)
Roll torque (right-hand rule):
(3)
Pitch torque (right-hand rule):
(4)
Yaw torque (right-hand rule):
(5)
where L is the distance from the motor position to the quadrotor’s center
of mass. Equations (2)-(5) can be rewritten in matrix form as follows:
(6)
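Equations (1)-(6) map the four rotor speeds to one lift force and three body torques. A hedged sketch of this mixing is given below; the rotor numbering (rotors 1/3 on the pitch axis, 2/4 on the roll axis, with opposite spin directions per pair), the torque signs, and the coefficient names kf (force) and kt (anti-torque) are assumptions for illustration, not the chapter's exact convention.

```python
def rotor_mixing(omega, kf, kt, L):
    """Map four rotor angular speeds to (lift, roll, pitch, yaw) torques.

    Assumed layout: rotors 1 and 3 on the pitch axis, rotors 2 and 4 on
    the roll axis; rotors 1, 3 spin opposite to rotors 2, 4.
    """
    f = [kf * w ** 2 for w in omega]   # Fi proportional to omega_i^2
    lift = sum(f)                      # total lifting force
    roll = L * (f[3] - f[1])           # imbalance across the roll axis
    pitch = L * (f[2] - f[0])          # imbalance across the pitch axis
    # yaw from the anti-torque imbalance of the two spin directions
    yaw = kt * (omega[0] ** 2 - omega[1] ** 2
                + omega[2] ** 2 - omega[3] ** 2)
    return lift, roll, pitch, yaw
```

With all rotors at equal speed the three torques cancel and only lift remains, which is the hover condition.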
(7)
(8)
and the cross product operator × is defined as:
(9)
The external propeller forces Fb acting on the rotors are defined as:
(10)
The weight force at the center of mass is as follows:
(11)
where s and c are shorthand form of cosine and sine.
The elastic force of the landing gear on the ground models the
characteristics of the take-off situation for the quadrotor, so it disappears
once the quadrotor is flying or leaves the ground. It is modeled as follows:
(12)
where kl and zl are the elastic coefficient and the elastic deformation on
the ground, and the function denotes:
(13)
Then Equation (7) can be expanded as follows:
(14)
(15)
Navigation Equations
To obtain the translational and rotational motions from the body frame to
the flat earth coordinate system, the well-known navigation equations are
described as follows:
(16)
where the position vector is expressed in the flat-earth coordinates, and the
attitude-angle vector gives the attitude of the body frame in the flat-earth
coordinates. The matrices R and S are the translational and rotational
transform matrices. Equation (16) can be expanded as follows:
(17)
(18)
where s, c and t stand for sine, cosine and tangent, respectively.
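Equations (17) and (18) are the standard Euler-angle transforms. Since the chapter's matrices are not reproduced in this copy, the sketch below uses the common aerospace Z-Y-X convention for roll (phi), pitch (theta) and yaw (psi); the original may differ in sign conventions.

```python
import math

def transform_matrices(phi, theta, psi):
    """Translational (R) and rotational (S) transforms of Equation (16),
    for Z-Y-X Euler angles: R rotates body vectors into the flat-earth
    frame, S maps body angular rates to Euler-angle rates."""
    s, c, t = math.sin, math.cos, math.tan
    R = [[c(theta) * c(psi),
          s(phi) * s(theta) * c(psi) - c(phi) * s(psi),
          c(phi) * s(theta) * c(psi) + s(phi) * s(psi)],
         [c(theta) * s(psi),
          s(phi) * s(theta) * s(psi) + c(phi) * c(psi),
          c(phi) * s(theta) * s(psi) - s(phi) * c(psi)],
         [-s(theta), s(phi) * c(theta), c(phi) * c(theta)]]
    # S is singular at theta = +/- 90 degrees, hence the stable-region limits
    S = [[1.0, s(phi) * t(theta), c(phi) * t(theta)],
         [0.0, c(phi), -s(phi)],
         [0.0, s(phi) / c(theta), c(phi) / c(theta)]]
    return R, S
```

At zero attitude both matrices reduce to the identity, and R remains orthonormal at any attitude, as a rotation matrix must.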
Motor Dynamics
The revolving speed of a propeller driven by a brushless DC motor is
proportional to the input voltage, which is regulated by the speed control unit.
Input voltage signals come from the flight control system according to the
flight task. Assuming that the load anti-torque is proportional to the square
of the revolving speed and that the inductance of the brushless DC motor can be
omitted, the dynamic model of the DC motor is described by the following formula:
(19)
where the coefficients are constants of the dynamic model, the input is the
regulated PWM signal from the flight control unit, and i is the index of the DC motor.
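A hedged sketch of such a first-order motor model follows; Equation (19) is not reproduced in this copy, so the exact form (drive torque proportional to the PWM command, load anti-torque proportional to the speed squared) and all coefficient values are illustrative assumptions.

```python
import math

def motor_step(omega, u_pwm, km=5.0, kq=1e-3, J=1e-2, dt=1e-3):
    """One explicit-Euler step of a simplified DC-motor model:
    drive torque km*u minus quadratic load anti-torque kq*omega^2,
    divided by the rotor inertia J (all values illustrative)."""
    domega = (km * u_pwm - kq * omega ** 2) / J
    return omega + dt * domega

def steady_state_speed(u_pwm, km=5.0, kq=1e-3):
    """At equilibrium km*u = kq*omega^2, so omega = sqrt(km*u/kq)."""
    return math.sqrt(km * u_pwm / kq)
```

Integrating from rest, the speed settles at the equilibrium where drive torque balances the quadratic load, which is the quasi-static speed-to-PWM relation the flight controller relies on.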
(20)
Trajectory control is designed to exponentially stabilize the waypoint
position error. The desired acceleration command then satisfies the
following relationship:
(21)
(22)
(23)
where the symbol inv2 stands for the matrix inverse. We can acquire this
from the expected attitude angle command.
Equation (15) can be transformed as follows:
(24)
(25)
(26)
where the symbol inv3 stands for the square root.
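The exponentially stabilizing acceleration command of Equation (21) can be sketched as a PD law on the waypoint error; since the original equation is not reproduced in this copy, the PD form and the gain values below are assumptions for illustration.

```python
def desired_acceleration(pos, vel, pos_d, vel_d, acc_d, kp=4.0, kd=4.0):
    """Outer-loop acceleration command: feedforward acceleration plus
    PD feedback on the position and velocity errors (gains illustrative).
    With kd**2 == 4*kp the error dynamics are critically damped."""
    return [acc_d[i] + kd * (vel_d[i] - vel[i]) + kp * (pos_d[i] - pos[i])
            for i in range(3)]
```

Feeding this command into a double integrator drives the position error to zero exponentially; the attitude loop then realizes the commanded acceleration through the inverted dynamics above.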
SIMULATION RESULTS
In this section, simulation results are presented in order to observe
the performance of the proposed control principle. We considered the case of a
rectangular trajectory-tracking problem. The parameters used for the
aircraft model and the controller are given in Table 1. These parameters
correspond to the quadrotor prototype designed by our lab; see Figure 3.
CONCLUSION
The analysis presented here has shown the hover and
REFERENCES
1. T. Tomic, K. Schmid, P. Lutz, et al., “Towards a Fully Autonomous
UAV,” IEEE Robotics & Automation Magazine, Vol. 19, No. 3, 2012,
pp. 46-56.
2. H. Lim, J. Park, D. Lee and H. J. Kim, “Build Your Own Quadrotor:
Open-Source Projects on Unmanned Aerial Vehicles,” IEEE Robotics
& Automation Magazine, Vol. 19, No. 3, 2012, pp. 33-45.
3. A. Drouin, S. Simoes-Cunha, and A. C. Brandao-Ramos, “Differential
Flatness and Control of Nonlinear Systems,” Proceedings of the 30th
Chinese Control Conference, 22-24 July 2011, Yantai, pp. 643-648.
4. M. Xin, Y. J. Xu and R. Hopkins, “Trajectory Control of Miniature
Helicopters Using a Unified Nonlinear Optimal Control Technique,”
Journal of Dynamic Systems, Measurement and Control, Vol. 133, No.
6, 2011, 14 p.
5. K. T. Oner, E. Cetinsoy, M. Unel, et al., “Dynamic Model and Control of a
New Quadrotor Unmanned Aerial vehicle with Tilt-Wing Mechanism,”
World Academy of Science, Engineering and Technology, 2008.
6. Z. Fang and W. N. Gao, “Adaptive Integral Backstepping Control of
a Micro-Quadrotor,” The 2nd International Conference on Intelligent
Control and Information Processing, Harbin, 25-28 July 2011, pp. 910-
915.
7. I. F. Kendoul and R. Lozano, “Modeling and Control of a Small
Autonomous Aircraft Having Two Tilting Rotors,” IEEE Transactions
on Robotics, Vol. 22, No. 6, 2006, pp. 1297-1302.
8. K. T. Oner, E. Cetinsoy, E. Sirimoglu, et al., “Mathematical Modeling
and Vertical Flight Control of a Tilt-Wing UAV,” Turkish Journal of
Electrical Engineering & Computer Sciences, Vol. 20, No. 1, 2012, p.
149.
9. A. Das, K. Subbarao and F. Lewis, “Dynamic Inversion with Zero-
Dynamics Stabilisation for Quadrotor Control,” IET Control Theory
Applications, Vol. 3, No. 3, 2009, pp. 303-314. http://dx.doi.org/10.1049/iet-cta:20080002
10. H. Shim, “Hierarchical Flight Control System Synthesis for Rotorcraft-
Based Unmanned Aerial Vehicles,” Doctor of Philosophy Dissertation,
University of California, Berkeley, 2000.
SECTION 3
QUADROTORS AND
SMALL-SIZE AERIAL
VEHICLES
CHAPTER
9
Neural Network Control and Wireless
Sensor Network-based Localization of
Quadrotor UAV Formations
INTRODUCTION
In recent years, quadrotor helicopters have become a popular unmanned
aerial vehicle (UAV) platform, and their control has been undertaken by
many researchers (Dierks & Jagannathan, 2008). However, a team of UAV’s
working together is often more effective than a single UAV in scenarios
like surveillance, search and rescue, and perimeter security. Therefore, the
formation control of UAV’s has been proposed in the literature.
Citation: Travis Dierks and S. Jagannathan (January 1st 2009). “Neural Network Con-
trol and Wireless Sensor Network-based Localization of Quadrotor UAV Formations”,
Aerial Vehicles, Thanh Mung Lam, IntechOpen, DOI: 10.5772/6477.
Copyright: © 2009 The Author(s). Licensee IntechOpen. This chapter is distributed
under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike-3.0
License
BACKGROUND
(1)
where
and m is a positive scalar that represents the total mass of the UAV, J ∈ℜ3x3
represents the positive definite inertia matrix,
represents the translational velocity, represents
the angular velocity, , are the nonlinear aerodynamic
effects, provides the thrust along the z-direction, provides
the rotational torques, and , represents
unknown, but bounded disturbances such that for all time t, with τM
being a known positive constant, is an n × n identity matrix, and
represents an m × l matrix of all zeros. Furthermore,
represents the gravity vector defined as where
is a unit vector in the inertial coordinate frame, , and
is the general form of a skew symmetric matrix defined as in (Dierks &
Jagannathan, 2008). It is important to highlight for any vector
, and this property is commonly referred to as the skew symmetric
property (Lewis et al., 1999).
(2)
where the abbreviations have been used for sin(•) and cos(•), respectively.
It is important to note that
. It is also necessary to define a rotational transformation matrix from the
fixed body to the inertial coordinate frame as (Dierks & Jagannathan, 2008)
$$T=\begin{bmatrix} 1 & s_\phi t_\theta & c_\phi t_\theta\\ 0 & c_\phi & -s_\phi\\ 0 & s_\phi/c_\theta & c_\phi/c_\theta \end{bmatrix} \tag{3}$$
where the abbreviation $t_{(\cdot)}$ has been used for tan(•). The transformation
matrices R and T are nonsingular as long as $-\pi/2<\phi<\pi/2$, $-\pi/2<\theta<\pi/2$,
and $-\pi\le\psi\le\pi$. These regions will be assumed throughout the development of
this work, and will be referred to as the stable operation regions of the UAV.
Under these flight conditions, it is observed that $\|R\|\le R_{\max}$ and $\|T\|\le T_{\max}$
for known constants $R_{\max}$ and $T_{\max}$ (Neff et al., 2007). Finally, the kinematics
of the UAV can be written as
$$\dot\rho = Rv,\qquad \dot\Theta = T\omega \tag{4}$$
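As a concrete illustration of the rotation and transformation matrices and the kinematics, the sketch below assumes the standard roll-pitch-yaw (ZYX) Euler convention; note the Frobenius norm of R is √3, as used in the development:

```python
import numpy as np

def R(phi, theta, psi):
    """Body-to-inertial rotation matrix (standard ZYX roll-pitch-yaw convention)."""
    s, c = np.sin, np.cos
    return np.array([
        [c(theta)*c(psi), s(phi)*s(theta)*c(psi) - c(phi)*s(psi), c(phi)*s(theta)*c(psi) + s(phi)*s(psi)],
        [c(theta)*s(psi), s(phi)*s(theta)*s(psi) + c(phi)*c(psi), c(phi)*s(theta)*s(psi) - s(phi)*c(psi)],
        [-s(theta),       s(phi)*c(theta),                        c(phi)*c(theta)]])

def T(phi, theta):
    """Transformation from body angular rates to Euler-angle rates; singular at theta = ±pi/2."""
    s, c, t = np.sin, np.cos, np.tan
    return np.array([[1.0, s(phi)*t(theta),  c(phi)*t(theta)],
                     [0.0, c(phi),          -s(phi)],
                     [0.0, s(phi)/c(theta),  c(phi)/c(theta)]])

# Kinematics (4): position rate = R v, Euler-angle rate = T omega.
phi, theta, psi = 0.1, 0.2, 0.3
v = np.array([1.0, 0.0, 0.5])
omega = np.array([0.0, 0.1, 0.2])
rho_dot = R(phi, theta, psi) @ v
Theta_dot = T(phi, theta) @ omega

# R is orthonormal with det(R) = 1, so its Frobenius norm is sqrt(3).
print(np.isclose(np.linalg.norm(R(phi, theta, psi)), np.sqrt(3)))
```

Within the stable operation regions both matrices stay bounded, which is what the constants Rmax and Tmax capture.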
Neural Networks
In this work, two-layer NNs are considered, consisting of one layer of
randomly assigned constant weights $V\in\Re^{a\times L}$ in the first layer and one layer
of tunable weights $W\in\Re^{L\times b}$ in the second, with a inputs, b outputs, and L
hidden neurons. A compromise is made here between tuning the number of
layered weights and computational complexity. The universal approximation
property for NNs (Lewis et al., 1999) states that for any smooth function
$f(x)$, there exists a NN such that $f(x)=W^T\sigma(V^Tx)+\varepsilon_N$, where $\varepsilon_N$
is the bounded NN functional approximation error such that $\|\varepsilon_N\|\le\varepsilon_M$, for
(5)
where .
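A minimal sketch of this two-layer structure follows: a random, constant first layer and a tunable second layer. The sigma-modification-style update shown is illustrative of this family of tuning laws, not the chapter's exact law, and the gain names are hypothetical:

```python
import numpy as np

class TwoLayerNN:
    """Two-layer NN: fixed random first-layer weights V, tunable second-layer
    weights W, with a inputs, L hidden neurons, and b outputs."""
    def __init__(self, a, L, b, seed=0):
        rng = np.random.default_rng(seed)
        self.V = rng.standard_normal((a, L))   # randomly assigned, held constant
        self.W = np.zeros((L, b))              # tunable weights

    def sigma(self, z):
        # Sigmoid activation; any bounded basis works for universal approximation.
        return 1.0 / (1.0 + np.exp(-z))

    def __call__(self, x):
        # NN output: W^T sigma(V^T x)
        return self.W.T @ self.sigma(self.V.T @ x)

    def update(self, x, err, F=10.0, kappa=0.1, dt=0.01):
        # Illustrative robust tuning step: W_dot = F sigma(V^T x) err^T - kappa F W
        s = self.sigma(self.V.T @ x)
        self.W += dt * (F * np.outer(s, err) - kappa * F * self.W)

nn = TwoLayerNN(a=3, L=5, b=2)
x = np.array([0.5, -0.2, 1.0])
print(nn(x).shape)  # (2,)
```

Only the second layer is tuned, which keeps the update laws linear in the unknown weights and the computational cost low.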
(6)
where
(7)
Thus, to solve the leader-follower formation control problem in the
proposed framework, a control velocity must be derived to ensure
(8)
Throughout the development, the desired separation, angle of incidence,
and bearing, s_jid, α_jid, and β_jid, will be taken as constants, while the
constant total mass, m_j, is assumed to be known. Additionally, it will be
assumed that reliable communication between the leader and its followers is
available, and the leader communicates its measured orientation, Θ_i, and
its desired states. This is a far less stringent
assumption than assuming the leader communicates all of its measured
states to its followers (Gu et al., 2006). Additionally, future work will relax
this assumption. In the following section, contributions from single UAV
control will be considered and extended to the leader-follower formation
control of UAVs.
2008). The velocity vjzb is directly controllable with the thrust input.
However, in order to control the translational velocities vjxb and vjyb , the
pitch and roll must be controlled, respectively, thus redirecting the thrust.
With these objectives in mind, the frameworks for single UAV control are
extended to UAV formation control as follows.
(9)
where R_ajd is defined as in (5) and written in terms of ψ_jd, and Ξ_jid is
written in terms of the desired angle of incidence and bearing, α_jid and
β_jid, respectively, similarly to (7). Next, using (6) and (9), define the position
tracking error as
(10)
which can be measured using local sensor information. To form
the position tracking error dynamics, it is convenient to rewrite (10) as
revealing
(11)
Next, select the desired translational velocity of follower j to stabilize (11)
(12)
where K_ρ is a diagonal positive definite design matrix of positive design
constants and v_id is the desired translational velocity of leader i. Next, the
translational velocity tracking error system is
defined as
(13)
Applying (12) to (11), the closed loop position error dynamics can be
rewritten as
(14)
Next, the translational velocity tracking error dynamics are developed.
Differentiating (13), observing
(15)
Next, we rewrite (2) in terms of the scaled desired orientation vector,
where , and
and are the maximum desired roll and
pitch, respectively, define , and add and subtract
to yield
(16)
where and
(17)
is an unknown function which can be rewritten as
. In the forthcoming development, the
approximation properties of NN will be utilized to estimate the unknown
function by bounded ideal weights , such that
for an unknown constant WMc1 , and written as where
is the bounded NN approximation error where ε Mc1 is a known
constant. The NN estimate of $f_{jc1}$ is written as $\hat f_{jc1}=\hat W_{jc1}^T\sigma(\hat x_{jc1})$,
where $\hat W_{jc1}$ is the NN estimate of $W_{jc1}$, $\hat W_{ijc1}^T$, i = 1, 2, 3, is the ith row of $\hat W_{jc1}^T$, and $\hat x_{jc1}$ is the NN
input defined as
Note that $\hat x_{jc1}$ is an estimate of $x_{jc1}$ since the follower does not know ω_i.
However, Θ_i is directly related to ω_i; therefore, it is included instead.
Remark 1: In the development of (16), the scaled desired orientation
vector was utilized as a design tool to specify the desired pitch and roll
angles. If the un-scaled desired orientation vector was used instead, the
maximum desired pitch and roll would remain within the stable operating
regions. However, it is desirable to saturate the desired pitch and roll before
they reach the boundaries of the stable operating region.
Next, the virtual control inputs θ_jd and φ_jd are identified to control
the translational velocities v_jxb and v_jyb, respectively. The key step in the
development is identifying the desired closed loop velocity tracking error
dynamics. For convenience, the desired translational velocity closed loop
system is selected as
(18)
where K_v = diag{k_v1, k_v2, k_v3} is a diagonal positive definite design
matrix. In the following development, it will be shown that each k_vi > 0;
therefore, it is clear that K_v > 0. Then, equating (16) and (18) while
considering only the first two velocity error states reveals
(19)
where was utilized. Then, applying basic math
operations, the first line of (19) can be solved for the desired pitch θ jd while
the second line reveals the desired roll φ_jd. Using the NN estimates, the
desired pitch θ_jd can be written as
(20)
where and
. Similarly, the desired roll angle, φ jd , is found to be
(21)
where and
.
Remark 2: The expressions for the desired pitch and roll in (20) and
(21) lend themselves very well to the control of a quadrotor UAV. The
expressions will always produce desired values in the stable operation
regions of the UAV. It is observed that atan(•) approaches ±π/2 as its
argument increases. Thus, introducing the scaling factors results in desired
pitch and roll values that remain within their prescribed maxima, and the
aggressiveness of the UAV’s maneuvers can be managed.
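A toy illustration of the saturation idea in Remark 2, assuming a simple scaled arctangent (hypothetical; the chapter's expressions (20) and (21) are more involved):

```python
import math

def saturated_angle(u, angle_max):
    """Map an unbounded argument u to a desired angle bounded by ±angle_max
    using a scaled arctangent (illustrative, not the paper's exact expression)."""
    return angle_max * (2.0 / math.pi) * math.atan(u)

# The output magnitude never reaches angle_max, however large the argument,
# so the desired pitch/roll are saturated before the stable-region boundary.
theta_max = 2.0 * math.pi / 5.0
for u in (0.5, 5.0, 500.0):
    theta_d = saturated_angle(u, theta_max)
    assert abs(theta_d) < theta_max

print(saturated_angle(500.0, theta_max) / theta_max)  # approaches 1 from below
```

Shrinking the scaling applied to the argument slows how quickly the saturation limit is approached, which is one way the aggressiveness of maneuvers can be managed.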
Now that the desired orientation has been found, next define the attitude
tracking error as
(22)
where the dynamics are found using (4) to be . In order
to drive the orientation errors (22) to zero, the desired angular velocity, ω jd
, is selected as
(23)
(24)
and observing , the closed loop orientation tracking error
system can be written as
(25)
Examining (23), calculation of the desired angular velocity requires
knowledge of ; however, is not known in view of the fact
are not available. Further, development of uj2 in the following section will
reveal is required which in turn implies must be known. Since
these requirements are not practical, the universal approximation property
of NN is invoked to estimate (Dierks and Jagannathan, 2008).
(26)
For convenience, we define a change of variable as ,
and the dynamics (26) become
(27)
Defining the estimates of , respectively,
and the estimation error , the dynamics of the proposed NN
virtual control inputs become
(28)
where K_jΩ1 and K_jΩ2 are positive constants. The estimate is then
written as
(29)
where K jΩ3 is a positive constant.
In (28), the universal approximation property of NN has been utilized
to estimate the unknown function $f_{j\Omega}$ by bounded ideal weights $W_{j\Omega}$,
such that $\|W_{j\Omega}\|\le W_{M\Omega}$ for a known constant $W_{M\Omega}$, and written as
$f_{j\Omega}=W_{j\Omega}^T\sigma(x_{j\Omega})+\varepsilon_{j\Omega}$, where $\varepsilon_{j\Omega}$ is the bounded NN approximation
error such that $\|\varepsilon_{j\Omega}\|\le\varepsilon_{\Omega M}$ for a known constant $\varepsilon_{\Omega M}$. The NN estimate of
$f_{j\Omega}$ is written as $\hat f_{j\Omega}=\hat W_{j\Omega}^T\sigma(\hat x_{j\Omega})$, where $\hat W_{j\Omega}$ is the NN estimate of
$W_{j\Omega}$ and $\hat x_{j\Omega}$ is the NN input, written in terms of the virtual control estimates,
the desired trajectory, and the UAV velocity.
(30)
where , and
. Furthermore, $\|\sigma(\cdot)\|\le\sqrt{N_\Omega}$ is a computable
constant with $N_\Omega$ the constant number of hidden layer neurons in the virtual
control NN. Similarly, the estimation error dynamics of (29) are found to be
(31)
where . Examination of (30) and (31)
reveals , and to be equilibrium points of the estimation error
dynamics when .
To this point, the desired translational velocity for follower j has been
identified to ensure the leader-follower objective (8) is achieved. Then, the
desired pitch and roll were derived to drive the velocity tracking errors in
v_jxb and v_jyb to zero, respectively. Then, the desired angular velocity was
found to ensure the orientation tracking error converges. What remains is to
identify the UAV thrust and the rotational torque vector. First, the thrust is derived.
Consider again the translational velocity tracking error dynamics (16),
as well as the desired velocity tracking error dynamics (18). Equating (16)
and (18) and manipulating the third error state, the required thrust is found
to be
(32)
(33)
with
, for a known constant
, and for a computable constant
.
Next, the rotational torque vector, u_j2, will be addressed. First, multiply
the angular velocity tracking error (24) by the inertia matrix J_j, take the first
derivative with respect to time, and substitute the UAV dynamics (1) to reveal
(34)
with . Examining
, it is clear that the function is nonlinear and contains unknown
terms; therefore, the universal approximation property of NN is utilized
to estimate the function by bounded ideal weights
(35)
and substituting the control input (35) into the angular velocity dynamics
(34) as well as adding and subtracting , the closed loop dynamics
become
(36)
where , and
(37)
(38)
where are
constant design parameters. Then there exists positive design
constants and positive definite design matrices
, such that the virtual controller estimation errors
(39)
where
where and
provided and . Observing
as
(40)
(41)
(42)
Finally, (42) is less than zero provided
(43)
and the following inequalities hold:
(44)
Therefore, it can be concluded using standard extensions of Lyapunov
theory (Lewis et al., 1999) that $\dot V$ is less than zero outside of a compact
set, revealing the virtual controller estimation errors, , and the NN
weight estimation errors, , the position, orientation, and translational and
angular velocity tracking errors, , respectively, and
the dynamic controller NN weight estimation errors, , are all SGUUB.
(45)
The closed loop position tracking error then takes the form of
(46)
Then, using steps similar to (15)-(21), the desired pitch and roll angles
are given by
(47)
where and
(48)
where
and and
(49)
(50)
(51)
while the NN weights for the translational velocity error system are
augmented as
Now, using the augmented variables above, the augmented closed loop
position and translational velocity error dynamics for the entire formation
are written as
(52)
(53)
terms of , , GF is a
constant matrix relating to the formation interconnection errors defined as
(54)
the UAV directly in front of it, follower 1 tracks leader i, follower 2 tracks
follower 1, etc., and FT becomes the identity matrix.
In a similar manner, we define augmented error systems for the virtual
controller, orientation, and angular velocity tracking systems as
(55)
respectively. It is straightforward to verify that the error dynamics of
the augmented variables (55) take the form of (30), (31), (25), and (36),
respectively, but written in terms of the augmented variables (55).
Theorem 3.3.1: (UAV Formation Stability) Given the leader-follower
criterion of (8) with 1 leader and N followers, let the hypotheses of Theorem
3.1.1 hold. Let the virtual control system for the leader i be defined similarly
to (28) and (29) with the virtual control NN update law defined similarly to
Let the control velocity and desired pitch and roll for the leader be given
by (45), (47), and (48), respectively, along with the thrust and rotational
torque vector defined by (49) and (50), respectively, and let the control NN
update law be defined identically to (38). Then, the position, orientation, and
velocity tracking errors, the virtual control estimation errors, and the NN
weights for each NN for the entire formation are all SGUUB.
Proof: Consider the following positive definite Lyapunov candidate
(56)
where
and
(57)
(58)
of , NΩ is the number of
hidden layer neurons in the augmented virtual control system, and η Ω
is a computable constant based on . Similarly,
are the minimum singular values
of the augmented gain matrices respectively, where
are known computable constants
and ηc is a computable constant dependent on . Now,
using (57) and (58), an upper bound for is found to be
(59)
Finally, (59) is less than zero provided
(60)
and the following inequalities hold:
(61)
Therefore, it can be concluded using standard extensions of Lyapunov
theory (Lewis et al., 1999) that $\dot V$ is less than zero outside of a compact set,
revealing the position, orientation, and velocity tracking errors, the virtual
control estimation errors, and the NN weights for each NN for the entire
formation are all SGUUB.
Remark 3: The conclusions of Theorem 3.3.1 are independent of any
specific formation topology, and the Lyapunov candidate (56) represents
the most general form required to show the stability of the entire formation.
Examining (60) and (61), the minimum controller gains and error bounds are
observed to increase with the number of follower UAVs, N. These results
are not surprising, since increasing the number of UAVs increases the sources
of error which can propagate throughout the formation.
Remark 4: Once a specific formation has been decided and the form of
FT is set, the results of Theorem 3.3.1 can be reformulated more precisely.
For this case, the stability of the formation is proven using the sum of the
individual Lyapunov candidates of each UAV as opposed to using the
augmented error systems (51)-(55).
the relative positions of each vehicle with respect to the formation reference
frame (leader), and H is the control policy represented as a graph where
nodes represent UAVs and edges represent the control assignments. Next, we
describe the OEDSR routing protocol where each UAV will be referred to
as a “node.”
In OEDSR, sub-networks are formed around a group of nodes due to an
activity, and nodes wake up in the sub-networks while the nodes elsewhere
in the network are in sleep mode. An appropriate percentage of nodes in the
sub-network are elected as cluster heads (CHs) based on a metric composed
of available energy and relative location to an event (Jagannathan, 2007) in
each sub-network. Once the CHs are identified and the nodes are clustered
relative to the distance from the CHs, the routing towards the formation leader
(FL) is initiated. First, the CH checks if the FL is within the communication
range. In such case, the data is sent directly to the FL. Otherwise, the data
from the CHs in the sub-network are sent over a multi-hop route to the
FL. The proposed routing algorithm is fully distributed since it requires
only local information for constructing routes, and is proactive adapting to
changes in the network. The FL is assumed to have sufficient power supply,
allowing a high power beacon from the FL to be sent such that all the nodes
in the network have knowledge of the distance to the FL. It is assumed that
all UAVs in the network can calculate or measure the relative distance to
the FL at any time instant using the formation graph information or local
sensory data. Though the OEDSR protocol borrows the idea of an energy-
delay metric from OEDR (Jagannathan, 2007), selection of relay nodes
(RN) does not maximize the number of two hop neighbors. Here, any UAV
can be selected as a RN, and the selection of a relay node is set to maximize
the link cost factor which includes distance from the FL to the RN.
(62)
In equation (62), checking the remaining energy at the next hop node increases
network lifetime; the distance to the FL from the next hop node reduces the
number of hops and end-to-end delay; and the delay incurred to reach the
next hop node minimizes any channel problems. When multiple RNs are
available for routing of the information, the optimal RN is selected based on
the highest LCF. These clearly show that the proposed OEDSR protocol is
an on demand routing protocol. For detailed discussion of OEDSR refer to
(Jagannathan, 2007). The route selection process is illustrated through the
following example. This represents sensor data collected by a follower UAV
for the task at hand that is to be transmitted to the FL.
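A sketch of the relay selection just described, assuming a link cost factor of the form LCF = remaining energy / (distance to FL × end-to-end delay), as suggested by the discussion around (62); field names and numbers are hypothetical, and the radio-range check on candidates is omitted for brevity:

```python
# Toy OEDSR-style relay selection (illustrative; not the exact protocol metric).

def link_cost_factor(node):
    """Assumed LCF: favor high remaining energy, low distance to FL, low delay."""
    return node["energy"] / (node["dist_to_fl"] * node["delay"])

def select_relay(current, candidates, fl_range=10.0):
    """Route directly to the FL if in range; otherwise pick the candidate that is
    closer to the FL than the current node and has the maximum LCF."""
    if current["dist_to_fl"] <= fl_range:
        return "FL"
    feasible = [n for n in candidates if n["dist_to_fl"] < current["dist_to_fl"]]
    return max(feasible, key=link_cost_factor)["name"]

ch = {"name": "CH", "energy": 5.0, "dist_to_fl": 40.0, "delay": 1.0}
nodes = [
    {"name": "n9",  "energy": 3.0, "dist_to_fl": 30.0, "delay": 2.0},
    {"name": "n10", "energy": 2.0, "dist_to_fl": 25.0, "delay": 1.5},
    {"name": "n12", "energy": 4.0, "dist_to_fl": 20.0, "delay": 1.0},
]
print(select_relay(ch, nodes))  # n12 has the highest LCF of the three here
```

Repeating this selection hop by hop reproduces the multi-hop route construction in steps 7-12 below.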
7. The link cost factors for n9, n10, and n12 are calculated.
8. The node with the maximum value of LCF is selected as the RN
and assigned to Relay(n). In this case, Relay(n)={n12}.
9. Now UAV n12 checks if it is in direct range with the FL, and if it
is, then it directly routes the information to the FL.
10. Otherwise, n12 is assigned as the RN, and all the nodes that are in
range with node n12 and whose distance to the FL is less than n12’s
distance to the FL are taken into consideration. Therefore, UAVs
n13, n16, n19, and n17 are taken into consideration.
11. The LCF is calculated for n13, n16, n19, n14, and n17. The node
with the maximum LCF is selected as the next RN. In this case
Relay(n) = {n19}.
12. Next the RN n19 checks if it is in range with the FL. If it is, then
it directly routes the information to the FL. In this case, n19 is in
direct range, so the information is sent to the FL directly.
OEDSR link cost factor, n4 is selected as the RN for the first hop. Next, n4
sends signals to all the nodes it has in range, and selects a node as the RN using
the link cost factor. The same procedure is carried out until the data is sent
to the FL.
Lemma 4.3.2: The intermediate UAVs on the optimal path are selected
as RNs by the previous nodes on the path.
Proof: A UAV is selected as a RN only if it has the highest link cost
factor and is in range with the previous node on the path. Since OEDSR
maximizes the link cost factor, intermediate nodes that satisfy the metric on
the optimal path are selected as RNs.
Lemma 4.3.3: A UAV can correctly compute the optimal path (with lower
end to end delay and maximum available energy) for the entire network
topology.
Proof: When selecting the candidate RNs to the CHs, it is ensured that
the distance from the candidate RN to the FL is less than the distance from
the CH to the FL. When calculating the link cost factor, available energy is
divided by distance and average end-to-end delay to ensure that the selected
nodes are in range with the CHs and close to the FL. This helps minimize
the number of multi-point RNs in the network.
Theorem 4.3.4: OEDSR protocol results in an optimal route (the path
with the maximum energy, minimum average end-to-end delay and minimum
distance from the FL) between the CHs and any source destination.
SIMULATION RESULTS
A wedge formation of three identical quadrotor UAVs is now considered in
MATLAB with the formation leader located at the apex of the wedge. In the
simulation, follower 1 should track the leader at a desired separation s_jid = 2
m, desired angle of incidence α_jid = 0 rad, and desired bearing β_jid = π/3
rad, while follower 2 tracks the leader at a desired separation s_jid = 2 m,
desired angle of incidence α_jid = −π/10 rad, and desired bearing β_jid = −π/3
rad. The desired yaw angle for the leader, and thus the formation, is selected
to be ψ_d = π sin(0.3t). The inertial parameters of the UAVs are taken to be
m = 0.9 kg and J = diag{0.23, 0.24, 0.36} kg·m², and aerodynamic friction is
modeled as in (Dierks and Jagannathan, 2008). The parameters outlined in
Section 3.3 are communicated from the leader to its followers using single
hop communication, whereas results for OEDSR in a mobile environment
can be found in (Jagannathan, 2007). Each NN employs 5 hidden layer
neurons, and for the leader and each follower, the control gains are selected
to be K_Ω1 = 23, K_Ω2 = 80, K_Ω3 = 20, K_ρ = diag{30, 10, 10}, k_v1 = 10,
k_v2 = 10, k_v3 = 30, K_Θ = diag{30, 30, 30}, and K_ω = diag{25, 25, 25}. The NN
parameters are selected as F_Ω = 10, κ_Ω = 1, F_c = 10, and κ_c = 1.0, and the
maximum desired pitch and roll values are both selected as 2π/5.
Figure 4 displays the quadrotor UAV formation trajectories while
Figures 5-7 show the kinematic and dynamic tracking errors for the leader
and its followers. Examining the trajectories in Figure 4, it is important to
recall that the bearing angle, βji , is measured in the inertial reference frame
of the follower rotated about its yaw angle. Examining the tracking errors
for the leader and its followers in Figures 5-7, it is clear that all states track
their desired values with small bounded errors as the results of Theorem
3.3.1 suggest. Initially, errors are observed in each state for each UAV, but
these errors quickly vanish as the virtual control NN and the NN in the
actual control law learn the nonlinear UAV dynamics. Additionally, the
tracking performance of the underactuated states v_x and v_y implies that the
desired pitch and roll, respectively, as well as the desired angular velocities
generated by the virtual control system are satisfactory.
CONCLUSIONS
A new framework for quadrotor UAV leader-follower formation control was
presented along with a novel NN formation control law which allows each
follower to track its leader without the knowledge of dynamics. All six DOF
are successfully tracked using only four control inputs while in the presence
of unmodeled dynamics and bounded disturbances. Additionally, a discovery
and localization scheme based on graph theory and ad hoc networks was
presented which guarantees optimal use of the UAV’s communication
links. Lyapunov analysis guarantees SGUUB of the entire formation, and
numerical results confirm the theoretical conjectures.
REFERENCES
1. Das, A.; Spletzer, J.; Kumar, V.; & Taylor, C. (2002). Ad hoc networks for
localization and control of mobile robots, Proceedings of IEEE Conference
on Decision and Control, pp. 2978-2983, Philadelphia, PA, December 2002
2. Desai, J.; Ostrowski, J. P.; & Kumar, V. (1998). Controlling formations of
multiple mobile robots, Proceedings of IEEE International Conference on
Robotics and Automation, pp. 2964-2969, Leuven, Belgium, May 1998
3. Dierks, T. & Jagannathan, S. (2008). Neural network output feedback
control of a quadrotor UAV, Proceedings of IEEE Conference on
Decision and Control, To Appear, Cancun Mexico, December 2008
4. Fierro, R.; Belta, C.; Desai, J.; & Kumar, V. (2001). On controlling
aircraft formations, Proceedings of IEEE Conference on Decision and
Control, pp. 1065-1079, Orlando, Florida, December 2001
5. Galzi, D. & Shtessel, Y. (2006). UAV formations control using
high order sliding modes, Proceedings of IEEE American Control
Conference, pp. 4249-4254, Minneapolis, Minnesota, June 2006
6. Gu, Y.; Seanor, B.; Campa, G.; Napolitano M.; Rowe, L.; Gururajan,
S.; & Wan, S. (2006). Design and flight testing evaluation of formation
control laws. IEEE Transactions on Control Systems Technology, Vol.
14, No. 6, (November 2006), pp. 1105-1112
7. Jagannathan, S. (2007). Wireless Ad Hoc and Sensor Networks, Taylor
& Francis, ISBN 0-8247-2675-8, Boca Raton, FL
8. Lewis, F.L.; Jagannathan, S.; & Yesilderek, A. (1999). Neural Network
Control of Robot Manipulators and Nonlinear Systems, Taylor &
Francis, ISBN 0-7484-0596-8, Philadelphia, PA
9. Neff, A.E.; DongBin, L.; Chitrakaran, V.K.; Dawson, D.M.; & Burg,
T.C. (2007). Velocity control for a quad-rotor uav fly-by-camera
interface, Proceedings of the IEEE Southeastern Conference, pp. 273-
278, Richmond, VA, March 2007
10. Saffarian, M. & Fahimi, F. (2008). Control of helicopters’ formation using non-
iterative nonlinear model predictive approach, Proceedings of IEEE American
Control Conference, pp. 3707-3712, Seattle, Washington, June 2008
11. Xie, F.; Zhang, X.; Fierro, R; & Motter, M. (2005). Autopilot based
nonlinear UAV formation controller with extremum-seeking,
Proceedings of IEEE Conference on Decision and Control, pp 4933-
4938, Seville, Spain, December 2005
CHAPTER
10
Dubins Waypoint Navigation of
Small-Class Unmanned Aerial Vehicles
ABSTRACT
This paper considers a variation on the Dubins path problem and proposes
an improved waypoint navigation (WN) algorithm called Dubins waypoint
navigation (DWN). Based on the Dubins path problem, an algorithm is
developed that is updated in real-time with a horizon of three waypoints. The
purpose of DWN is to overcome a problem that we find in existing WN for
INTRODUCTION
The user community of unmanned aerial vehicle (UAV) systems has been
growing significantly; the commercial market net worth reached a reported
$8.3 billion in 2018 [1], the largest growth in the commercial markets
being in small-class (under 55 pounds) vehicles. This paper develops an
algorithm for a subset of this class of UAV, in particular, for fixed-wing
vehicles. Users of small-class UAV are presently navigating autonomously
by open-source algorithms such as Mission Planner, Cape, and Pix4D. These
navigational systems employ waypoint navigation (WN), wherein the user
enters waypoints, whether a priori (static) or not (dynamic). The practice of
WN in this community is a constraint to which development work adheres.
The community is growing, and with that, is expected to demand higher
accuracy. In RC flight, the racing community already requires reaching
targets accurately and the growth in autonomous flight with its varied
missions will create more demand to reach waypoints accurately.
The shortest path between two waypoints with a limited turning radius
is called a Dubins path [2] . WN employing Dubins paths has been studied
considerably. Ariff and Go conducted a thorough examination of the
dynamics involved in WN with a focus on dynamic soaring algorithms [3] .
Goerzen, Kong, and Mettler made rigorous comparisons between different
path planning algorithms for unmanned vehicle navigation and identified
advantages and disadvantages [4] . Manyam, et al. expanded the Dubins
path problem from two waypoints to three waypoints where all three points
and their headings are constrained [5] . Wolek and Woolsey implemented
the Dubins path problem with estimates of unsteady velocity disturbances
[6] . McGee and Hedrick considered the situations in which the disturbance
is less significant and more predictable [7] . The vehicle was placed in a
moving frame as a means of finding a piecewise continuous optimal path.
thereby not accurately reaching the first waypoint. This leads the operator
to desire a smaller waypoint radius. However, the smaller waypoint radius
causes several problems. First, it causes high overshoot. Secondly, in the
presence of wind disturbances, it can cause a vehicle to miss the waypoint
and swirl around it.
When correcting this problem, it became important to preserve the
simplicity of prescribing waypoints without adding operator complexity.
Therefore, the DWN does not change anything in the foreground,
including the operator’s specified waypoints. Instead, the DWN creates
“new waypoints” in the background, called turning points, as
explained in the method section.
Section 2 describes the DWN algorithm. Section 3 compares performances
of WN and DWN with and without unknown wind disturbances, and the
paper finishes in Section 4 with a summary and conclusions.
METHOD
Flow Chart
This section develops the DWN. The DWN eliminates waypoint circles
and, instead, employs the new concept of turning points along with the new
criterion for updating turning points based on whether or not a turning point
is reachable. See Figure 1 showing the DWN flow chart. As shown, the
DWN approaches navigation as an optimization problem within a local
optimization space that consists of the current point and two future waypoints.
At any instant, the vehicle seeks to reach a turning point that is determined by
minimizing fuel along the path up to the second future waypoint constituting
the horizon. The trajectory, and hence the optimization problem, is updated
once the turning point is unreachable. The reachability condition is
determined from the vehicle’s position, heading, and turning radius. For the
purposes of real-time implementation, the trajectory may be updated more
frequently than when the turning point is updated (like under the conditions
of high winds). The stated optimization problem yields desired paths that are
determined in closed-form.
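The update cycle above can be sketched as follows; all geometric helpers here are toy one-dimensional stand-ins for the tangent-line construction and the reachability test, not the paper's algorithm:

```python
# Toy sketch of the DWN loop: fly toward the current turning point, and once it
# is judged unreachable, slide the three-waypoint horizon forward and recompute.

def dwn_route(waypoints, pos, speed=0.5, tol=0.6, max_steps=10_000):
    """1-D stand-in: waypoints are scalars; the 'turning point' is a fixed offset
    before each waypoint (hypothetical stand-in for the tangent construction)."""
    visited = []
    n = 1
    while n < len(waypoints):
        Tn = waypoints[n] - 0.2          # toy turning point for waypoint n
        for _ in range(max_steps):
            pos += speed if pos < Tn else -speed   # fly toward Tn
            if abs(pos - Tn) < tol:                # toy "unreachable" trigger
                break
        visited.append(n)                # horizon advances to the next waypoint
        n += 1
    return pos, visited

pos, visited = dwn_route([0.0, 5.0, 9.0, 14.0], pos=0.0)
print(visited)  # [1, 2, 3]
```

The structure mirrors the flow chart: the trajectory toward the current turning point is followed until the reachability criterion fires, at which point the local optimization problem is rebuilt over the next pair of waypoints.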
The vehicle starts at W0 and heads to W1. At this point, the DWN considers
waypoints W0, W1, and W2. The other points are beyond the horizon.
Figure 3 shows the paths from W0 to W2 (left drawing) and from W1 to
W3 (right drawing).
As shown, T1 denotes the turning point associated with W1. It lies on
the line tangent to a turning circle (not to be confused with the waypoint
circle in classical WN) that has a turning radius R. The turning radius,
specified by the user, is the desirable radius of the turns around waypoints.
An optimization problem minimizes the fuel from W0 around and touching
W1 to W2. The orientation of the turning circle is free to rotate about W1 so
the optimization problem is a minimization problem of fuel expressed in
terms of the turning circle’s orientation. The fuel consumed during a turn
is greater than the fuel consumed while flying in a straight line over the
same distance. Even though the minimization is over the distance extending
to W2, the DWN updates the vehicle’s path once it reaches point T1. Thus,
the vehicle does not follow the section of the path from T1 to W2 (In Figure
3, the paths followed are solid lines and the paths not followed are dashed
lines.) The DWN recalculates that not-followed section of the path in the
next iteration. Under ideal conditions, DWN updates the planned path when
the vehicle crosses T1, at which point T1 is determined to be unreachable.
Under real conditions, disturbances can result in the vehicle not reaching
point T1 accurately. Point T1 is determined to be unreachable at a point C
that is different from but near point T1. Referring to the right drawing, the new
horizon is set to W3 and the new turning point T2 is determined from the
vehicle’s current point C and current direction of flight and the locations of
points W2 and W3. As shown, the optimization problem now minimizes the
fuel from C around and touching W2 to W3. The optimization problem is
now a minimization problem of fuel from C to W3 expressed in terms of the
new turning circle’s orientation. The DWN updates the iterations until the
vehicle reaches its last waypoint.
Key Parameters
Figure 4 shows the nth iteration. The vehicle is heading to Wn. A coordinate
system was set up so its x-axis points from C to Wn and its y-axis is
perpendicular to x in the direction of Wn+1. The data is scaled such that the x
and y coordinates represent non-dimensional lengths of distance divided by
turning radius R; equivalently R = 1. We perform all of the calculations after
the geometric parameters are normalized with respect to R.
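The frame set-up can be sketched as follows, as a minimal 2-D implementation of the translation, rotation, and scaling by R described above:

```python
import numpy as np

def local_frame(C, Wn, Wn1, R):
    """Return a function mapping world points into the local frame: origin at C,
    x-axis along C -> Wn, y-axis toward Wn+1, lengths divided by turning radius R."""
    C, Wn, Wn1 = map(np.asarray, (C, Wn, Wn1))
    ex = (Wn - C) / np.linalg.norm(Wn - C)   # unit x-axis along C -> Wn
    ey = np.array([-ex[1], ex[0]])           # perpendicular candidate
    if np.dot(ey, Wn1 - C) < 0:              # flip so +y points toward Wn+1
        ey = -ey
    M = np.vstack([ex, ey])
    return lambda p: M @ (np.asarray(p) - C) / R

to_local = local_frame(C=(2.0, 1.0), Wn=(6.0, 1.0), Wn1=(6.0, -3.0), R=2.0)
print(to_local((6.0, 1.0)))  # Wn lies on the +x axis at distance |Wn - C| / R = 2
```

After this normalization the turning radius is 1, so the geometry (and any curve fit built on it) is independent of the vehicle's actual R.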
As shown, α is the heading angle (between −180˚ and 180˚) and β is the
turn angle (nominally between 0˚ and 90˚, although, because the x-axis lies
along the line through C and Wn, β can exceed 90˚). We assume that there
are no sharp turns between specified waypoints (the angle between any two
lines connecting three consecutive waypoints is never greater than 90˚). At
point C, the vehicle turns counter-clockwise (CCW) or clockwise (CW).
Likewise, it turns around Wn CCW or CW. In total, there are four cases:
CCW-CCW, CCW-CW, CW-CCW, and CW-CW.
Optimization Problem
Figure 6 shows the geometry of the CW-CCW case. The fuel from C to
Wn+1 is a function of the orientation angle θ of the turning circle. (The other
cases, not shown, are similar.) The fuel is
fuel = w_a L_a + w_s L_s      (1)
where La is the total length of the two arcs (around the first and second
turning circles), Ls is the length of the two straight segments (after the first
and second turning circles), and wa and ws are corresponding weights (in
units of fuel per length). Letting wa = ws = 1 (in which the weights have units
of 1), yields the shortest path problem:
min_θ Fu = La + Ls (2)
In the results section, we will show that the shortest path and minimum
fuel solutions are nearly indistinguishable under a broad range of conditions,
allowing the shortest path solution to approximate the minimum fuel solution.
This is important because the shortest path solution is independent of the
vehicle, increasing ease of implementation and the versatility of DWN.
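The structure of the minimization in (2) can be illustrated with a simple grid search over the orientation angle θ. The arc and straight-segment lengths below are stand-in smooth functions of our own invention, not the actual Dubins tangent geometry; the sketch only shows the shape of the optimization.

```python
import math

# Stand-in geometry: in the real problem, La and Ls come from the
# Dubins tangent construction; here they are hypothetical smooth
# functions of the turning-circle orientation theta (radians).
def arc_length(theta):
    return 2.0 + math.cos(theta)          # hypothetical La(theta)

def straight_length(theta):
    return 3.0 + math.sin(theta) ** 2     # hypothetical Ls(theta)

def min_fuel_theta(wa=1.0, ws=1.0, steps=3600):
    """Grid-search the orientation angle minimizing
    Fu = wa*La + ws*Ls over [0, 2*pi)."""
    best = None
    for k in range(steps):
        theta = 2.0 * math.pi * k / steps
        fu = wa * arc_length(theta) + ws * straight_length(theta)
        if best is None or fu < best[0]:
            best = (fu, theta)
    return best
```

Setting wa = ws = 1 reduces the objective to path length, which is why the shortest-path solution can stand in for the minimum-fuel solution when the two are nearly indistinguishable.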
The path depends on the orientation angle θ of the turning circle, and the solution of this 3-point optimization problem becomes a function of θ. This part of the iteration can be solved off-line and curve-fit to determine Tn. There are two ways to fit Tn. One way is to directly fit Tn to A, B, α, and β. The second way is to fit θ to A, B, α, and β and then determine Tn from θ. The second way inherently leads to a more accurate curve fit than the first, because the relationship between Tn and θ is rather complicated, yet its analytical form is available [13].
During flight, the DWN checks whether the turning point Tn has become unreachable, at which point it updates the turning point. Figure 7 shows the reachability condition. As shown, a reachability zone consists of CCW and CW circles “enclosed” on the rear by a tangent line. The reachability zone is fixed to the vehicle. The vehicle is trying to reach waypoint Tn. At a given instant of time, Tn could be either inside or outside the reachability zone. (In the figure, Tn is outside of the reachability zone.) As soon as it is inside the reachability zone, Tn becomes unreachable, and the DWN updates the turning point to Tn+1.
Note that the purpose of “enclosing” the CCW and CW with the line
segment was to prevent the possibility of point C passing Tn undetected,
which would otherwise be possible because the detection is not truly
performed continuously but only at discrete instances of time.
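The reachability test can be sketched as follows. The placement of the CCW/CW circles and the rear tangent line is our reading of the description and Figure 7, so treat the geometry as illustrative rather than as the paper's exact construction.

```python
import math

def is_unreachable(target, pos, heading, R):
    """Reachability test sketched from the paper's description:
    the zone consists of the CCW and CW minimum-radius circles,
    closed at the rear by their common tangent line. A target
    inside this zone is treated as unreachable. The geometry
    here is our reading of Figure 7."""
    # Work in the vehicle frame: x forward, y to the left.
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    c, s = math.cos(heading), math.sin(heading)
    fx = dx * c + dy * s        # forward component
    fy = -dx * s + dy * c       # leftward component
    # Inside the left (CCW) or right (CW) turning circle?
    in_left = math.hypot(fx, fy - R) < R
    in_right = math.hypot(fx, fy + R) < R
    # Behind the vehicle but ahead of the rear tangent line,
    # between the two circles: the "enclosed" strip that keeps
    # a passed target from going undetected between time steps.
    behind = -R < fx < 0 and abs(fy) < R
    return in_left or in_right or behind
```

For example, a target at the center of the left turning circle is flagged unreachable, while a target far ahead or well behind the rear tangent line is not.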
Comparing the reachability zone used in DWN with the waypoint
circle used in WN, first notice that the reachability zone, in the absence
of disturbances, allows the vehicle to accurately reach a first waypoint, as
opposed to beginning its turn to a second waypoint before reaching the first
waypoint. Secondly, in WN, the operator is naturally tempted to select a waypoint radius smaller than the vehicle's turning radius in order to reach waypoints more closely. In the presence of a wind disturbance, however, this can cause the vehicle to miss the waypoint, struggle to reach it, and swirl around it. In the DWN, no waypoint radius is considered.
Parameterization
We formulated the 3-point horizon optimization problem to keep the
computational effort minimal for real-time implementation. To further reduce
computational effort and for robustness, we parameterized the solution,
effectively reducing it to a look-up table. Toward this end, we determined
the orientation angle θ of a turning circle as a function of the four parameters
A, B, α, and β. We parameterized the orientation angle by smoothly fitting
the data to the four parameters separately for each case (CCW-CCW, CCW-
CW, CW-CCW, and CW-CW). Toward distinguishing between the different
cases, we needed to parameterize the boundaries of each of the cases in
terms of the four parameters. In particular, α and β transition from case to
case due to changes in A and B. For example, consider the graph of α versus
β shown in Figure 8 for particular parameters (A = B = 4, wa = ws = 1).
(3)
(4)
We determined the coefficients b0i through b6i (i = 1, 2, 3) separately for
each of the cases. The real-time procedure of updating the turning points is
as follows:
1) Calculate A, B, α, and β at an instant of time from a current state
of the vehicle.
2) Calculate the coefficients associated with the boundaries between
the interior regions (See [13]).
3) Determine the interior region in which the vehicle lies.
4) Calculate the orientation angle θ of the next turning circle (See
[13]).
5) Calculate the next turning point Tn (expressed in terms of θ
analytically).
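The five steps above can be sketched as a control loop. The region, boundary, and θ fits come from the off-line parameterization in [13] and are not reproduced here; the functions below are invented stubs whose only purpose is to make the flow of the procedure runnable.

```python
import math

# Sketch of the DWN real-time turning-point update (steps 1-5 above).
# All fitted quantities are stubs of our own; see [13] for the real fits.

def compute_geometry(state):
    # Step 1: A, B, alpha, beta from the current vehicle state.
    return state["A"], state["B"], state["alpha"], state["beta"]

def select_region(A, B, alpha, beta):
    # Steps 2-3: evaluate the boundary fits and pick the interior
    # region (one of the four turn cases). Stub: sign-based choice.
    first = "CCW" if alpha >= 0 else "CW"
    second = "CCW" if beta >= 0 else "CW"
    return first + "-" + second

def fitted_theta(region, A, B, alpha, beta):
    # Step 4: curve-fit orientation angle theta (stub value).
    return 0.5 * (alpha + beta)

def next_turning_point(theta):
    # Step 5: Tn follows analytically from theta (stub: unit circle).
    return (math.cos(theta), math.sin(theta))

def update_turning_point(state):
    A, B, alpha, beta = compute_geometry(state)
    region = select_region(A, B, alpha, beta)
    theta = fitted_theta(region, A, B, alpha, beta)
    return region, next_turning_point(theta)
```

Because steps 2–5 reduce to evaluating fitted polynomials and one analytical expression, the on-line cost of an update is essentially that of a look-up.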
RESULTS
Let us continue with the 6-waypoint example and compare WN and DWN.
In WN, navigation performance depends on the minimum turning radius
and the waypoint radius. In DWN, navigation performance depends on
minimum turning radius alone. The feedback control algorithms are the same
for WN and DWN. However, the resulting overshoots differ. An illustrative
feedback control algorithm described below accounts for turning toward an
endpoint limited by the vehicle’s minimum turning radius R and serves as a
basis for the comparison between WN and DWN.
(5)
Above, ϕTi is a target angle and ϕRi is a reference angle (all of the angles
are positive counter-clockwise), g and h are control gains, and ϕT/|ϕT|
determines whether ϕR is counter-clockwise or clockwise.
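A heading controller of this style can be sketched as follows. Since (5) is not reproduced here, the PD-style law below, with the turn per step limited to ψmax = vΔt/R, is an illustrative stand-in that merely uses the g (proportional) and h (derivative) gains named in the text.

```python
import math

def heading_step(phi, phi_target, v, R, dt, g=0.1, h=5.25,
                 prev_err=0.0):
    """One step of a PD-style heading controller with the turn
    rate limited by the minimum turning radius R. The gain
    structure follows the g and h terms named in the paper; the
    exact control law (5) is not reproduced here, so this is an
    illustrative stand-in."""
    # Wrap the heading error to (-pi, pi]; its sign plays the role
    # of phi_T/|phi_T|, selecting CCW or CW turning.
    err = math.atan2(math.sin(phi_target - phi),
                     math.cos(phi_target - phi))
    cmd = g * err + h * (err - prev_err)
    psi_max = v * dt / R          # largest turn achievable in dt
    cmd = max(-psi_max, min(psi_max, cmd))
    return phi + cmd, err
```

With the parameters used below (v = 0.8, Δt = 0.02, R = 0.7), the turn per step saturates at about 0.023 rad, which is what makes the effective turning radius equal to R.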
The parameters used in the results, both with and without wind disturbances,
are as follows: the minimum turning radius was R = 0.7, the time step was 0.02, the vehicle speed was v = 0.8, and the control gains were g = 0.1 and h = 5.25.
Figure 12 compares WN and DWN in the presence of a disturbance. In the left figure, the waypoint radius is smallest and WN misses the second waypoint, sending the vehicle into a swirl. DWN also misses the turning point paired with the second waypoint but moves on to complete its journey. It can do this because it recognizes the waypoint to be unreachable. In the middle and right figures, the waypoint radii are larger and WN manages to navigate the vehicle. It is worth noticing that errors exist for both WN and DWN. The DWN errors result from using the turning points for navigation instead of waypoints. The DWN errors are smaller than the WN errors due to the features already discussed, so DWN improves on WN navigation.
Figure 12. WN with and without the DWN in the presence of a disturbance
(RWP = 0.1R, 0.5R, and R).
CONCLUSION
WN suffers from an existing trade-off between minimum turning radius
and waypoint radius that prevents vehicles from reaching waypoints and
closely following desired paths. This paper developed an algorithm, called Dubins waypoint navigation (DWN), that remedies this problem by not using waypoint circles. Instead, we introduced a reachability zone and turning points. The DWN significantly reduces undershoot, overshoot, and swirl.
Furthermore, the algorithm remedies the problem in a way that can be hidden
from the operator to avoid confusion. The paper showed, by establishing a
horizon that includes two future waypoints, how to improve the performance
of WN in terms of the targeting of waypoints while reducing fuel and time.
Pertaining to real-time implementation, we also reduced the computational
effort, essentially eliminating it, by parameterizing the shortest path solution.
We also compared WN and DWN tracking performance (in the absence of
an unknown disturbance) and regulation performance (in the presence of an
unknown disturbance). The comparisons are illustrative of the improvements
in path following (tracking and regulation) obtained by DWN.
ACKNOWLEDGEMENTS
The authors gratefully recognize the Namibia Wildlife Aerial Observatory
(WAO) for its support of this work.
NOMENCLATURE
A = distance between current position and the following waypoint
a = coefficient for curve fitting β with respect to δ
B = distance between the following two successive waypoints
b = coefficient for curve fitting a with respect to A and B
C = current position
Fu = total fuel cost
g = proportional term for PID controller
h = derivative term for PID controller
La = total length of the arcs
Ls = total length of the straight segments
R = turning radius of the vehicle
T = target point
v = speed of the vehicle
W = waypoint
wa = fuel cost per unit length travelled on an arc
ws = fuel cost per unit length travelled on a straight line
α = angle of heading in local system
β = angle between the following two successive waypoints in local system
γ = angular position of the first tangential point in local system
δ = angular position of the last tangential point in local system
θ = angular position of the center of the turning circle
ϕi = angle that denotes the heading in global frame at the i-th time instance
ϕR = angle of turning resulting from the feedback controller
ϕT = angle between heading and the target
ψ = angle of turn in one time step
CONFLICTS OF INTEREST
The authors declare no conflicts of interest regarding the publication of this
paper.
REFERENCES
1. Drubin, C. (2013) UAV Market Worth $8.3 B by 2018. Microwave
Journal, 56, 37.
2. Tsourdos, A., White, B. and Shanmugavel, M. (2011) Path Planning
in Two Dimensions. In: Tsourdos, A., White, B. and Shanmugavel,
M., Eds., Cooperative Planning of Unmanned Aerial Vehicles, Wiley,
Chichester, 30. https://doi.org/10.1002/9780470974636
3. Ariff, O. and Go, T. (2011) Waypoint Navigation of Small-Scale UAV
Incorporating Dynamic Soaring. The Journal of Navigation, 64, 29-44.
https://doi.org/10.1017/S0373463310000378
4. Goerzen, C., Kong, Z. and Mettler, B. (2010) A Survey of Motion
Planning Algorithms from the Perspective of Autonomous UAV
Guidance. Journal of Intelligent and Robotic Systems, 57, 65-100.
https://doi.org/10.1007/s10846-009-9383-1
5. Manyam, S., Rathinam, S., Casbeer, D. and Garcia, E. (2017) Tightly
Bounding the Shortest Dubins Paths through a Sequence of Points.
Journal of Intelligent & Robotic Systems, 88, 495-511. https://doi.
org/10.1007/s10846-016-0459-4
6. Wolek, A. and Woolsey, C. (2015) Feasible Dubins Paths in Presence
of Unknown, Unsteady Velocity Disturbances. Journal of Guidance
Control and Dynamics, 38, 782-786. https://doi.org/10.2514/1.
G000629
7. McGee, T. and Hedrick, J. (2007) Optimal Path Planning with a
Kinematic Airplane Model. Journal of Guidance, Control, and
Dynamics, 30, 629-633. https://doi.org/10.2514/1.25042
8. Milutinovic, D., Casbeer, D., Cao, Y. and Kingston, D. (2017)
Coordinate Frame Free Dubins Vehicle Circumnavigation Using Only
Range-Based Measurements. International Journal of Robust and
Nonlinear Control, 27, 2937-2960. https://doi.org/10.1002/rnc.3718
9. Ketema, Y. and Zhao, Y. (2010) Micro Air Vehicle Trajectory Planning
in Winds. Journal of Aircraft, 47, 1460-1463. https://doi.org/10.2514/1.
C000247
10. Meyer, Y., Isaiah, P. and Shima, T. (2015) On Dubins Paths to Intercept
a Moving Target. Automatica, 53, 256-263. https://doi.org/10.1016/j.
automatica.2014.12.039
11. Wang, Z., Liu, L., Long, T. and Xu, G. (2018) Efficient Unmanned
Aerial Vehicle Formation Rendezvous Trajectory Planning Using
11
Modelling Oil-Spill Detection with
Swarm Drones
Alicante, Spain
ABSTRACT
Nowadays, swarm robotics research is growing rapidly due to the benefits derived from its use, such as robustness, parallelism, and flexibility. Unlike distributed robotic systems, swarm robotics emphasizes a large number of robots and promotes scalability. Among the multiple applications of such
Citation: Chen-Charpentier, Benito, Aznar, F., Sempere, M., Pujol, M., Rizo, R.,
“Modelling Oil-Spill Detection with Swarm Drones”, Hindawi Journal on Analysis and
Models in Interdisciplinary Mathematics, Volume 2014, Article ID 949407, 14 pages,
https://doi.org/10.1155/2014/949407.
Copyright: © 2014 F. Aznar et al. This is an open access article distributed under the
Creative Commons Attribution License.
INTRODUCTION
Maritime activity has been increasing gradually in recent years. For example, around 42000 vessels (excluding fishing boats) pass through the North Sea, carrying products that can adversely affect the environment, such as oil, which can produce high levels of pollution if spilled into the sea. Moreover, many pollutants are accidentally spilled from ships during “normal” operations. These spills are individually small but become significant due to the large number of ships. Dramatic incidences of marine
pollution, such as the Prestige oil spill off the Spanish north coast [1–3],
have highlighted the potential for human-caused environmental damage. In
attempting to mitigate or avoid future damage to valuable natural resources
caused by marine pollution, research has been undertaken by the scientific
community to study the processes affecting the fate and distribution of
marine pollution and especially to model and simulate these processes.
Furthermore, active systems, able to detect and track such kinds of spills, are an invaluable tool that helps locate and clean the affected resources [4–6].
Moreover, swarm robotics is an approach to solve problems inspired by
the collective behaviour of social animals and it is focused on the interaction
of multiple robots. It is a different approach to classical artificial intelligence,
where the main goal is usually to develop behaviours that mimic human
brain function. Swarm robotics is based on the metaphor of social insects and
(1)
GNOME simulates this diffusion with a random walk with any
distribution, with the resulting diffusion coefficient being one-half the
variance of the distribution of each step divided by the time step:
D = σ²/(2Δt) (2)
where σ² is the variance of the distribution of diffused points and Δt is the time step.
(3)
where 𝑡𝑖 and 𝑡𝑖−1 are the time elapsed (age, in hours) at time step 𝑖 and at the previous time step 𝑖−1, respectively, since the LEs release. 𝐻1, 𝐻2, and
𝐻3 are the half-lives of each constituent (in hours) for the pollutant, and
𝑃1, 𝑃2, and 𝑃3 are the percentages of each constituent (as decimals) for the
pollutant.
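The random-walk diffusion described above can be checked empirically: the diffusion coefficient recovered from simulated steps should equal one-half the step variance divided by the time step. The Gaussian step distribution, sample size, and seed below are our choices for the demonstration; GNOME permits any step distribution.

```python
import random
import statistics

def random_walk_diffusion(n_steps=200000, dt=1.0, step_sigma=0.5):
    """Empirically recover D = sigma^2 / (2*dt) for a 1-D random
    walk, as in GNOME's diffusion model. Any step distribution
    works; a Gaussian is used here for the demonstration."""
    random.seed(7)  # fixed seed for reproducibility
    steps = [random.gauss(0.0, step_sigma) for _ in range(n_steps)]
    var = statistics.pvariance(steps)   # variance of one step
    return var / (2.0 * dt)             # estimated diffusion coefficient
```

With step_sigma = 0.5 and dt = 1, the estimate converges to σ²/(2Δt) = 0.25/2 = 0.125.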
Spilled substances are modelled as point masses (up to 10,000) called
LEs (Lagrangian elements) or “splots” (from “spill dots”). Spills can be
initialized as one-time or continuous releases and as point or line sources or
evenly spaced in a grid on the map for diagnostic purposes.
Once GNOME executes a simulation, the solution is produced in the
form of a trajectory. GNOME provides two solutions to an oil spill scenario,
the best estimate trajectory and the uncertainty trajectory. The best estimate
solution shows the model result with all of the input data assumed to be
correct. The uncertainty solution allows the model to predict other possible
trajectories that are less likely to occur, but which may have higher associated
risks. In this paper we will use the uncertainty solution of pollutant particles
(represented by its LEs) for generating a continuous pollutant map. More
details of this mathematical model can be found in [13].
SWARM DESIGN
In this section we will analyse the features that a robotic swarm must have to detect and follow an oil spill in an effective way. We will also propose a specific
microscopic behaviour for this task. As mentioned above, the modelling of
this kind of pollutants is a complex task because of the interaction of many
factors. However, it is easy to extract a set of characteristics that a system
needs to locate this kind of spills.
On the one hand, the swarm must be able to detect and follow pollutants
that can change and move in time mainly by the action of advection and
diffusion. Depending on the application, two situations can be found: the
origin of the spill is known or the swarm initially makes the detection
process (in this work it is assumed that detection is one of the tasks to be
performed). On the other hand, the appearance of several polluting slicks
is very probable due, among other factors, to the transport and deposit of
sediments on the coast while the oil slick disperses and evaporates.
The behaviour of the swarm must be highly robust and tolerant to failures
and should be totally distributed. Therefore, all agents must have the same
importance when performing the behaviour, without the existence of any
agent more prominent than another. Finally, the behaviour should be highly
scalable for two reasons: robustness issues and performance of the behaviour,
since as a first step it may be beneficial to use a reduced number of agents
until they find any evidence of a particular discharge.
Although in this paper behaviours will be analysed in a simulated way,
the features of agents are directly based on flying robots (our intention is
to use these algorithms in drones). These drones will have a video camera
that will use a vision algorithm to determine the existence of any oil slick
in its visual field (as presented in [4, 16]). For security reasons, we assume
that drones will fly at different heights, so that collisions in a position (x,
y) are avoided. We also assume that due to flight altitude (about 500 m
above sea level) the differences in the visual field caused by different flying
altitudes are negligible. All tests, unless otherwise specified, will be carried
out by a swarm of 200 drones. It is a medium-sized swarm, appropriate for
simulation.
Our main goal, once the behaviour is designed, is to determine the
ability of the swarm to locate, converge, and follow an oil spill. Therefore,
a macroscopic model to predict the global behaviour of the swarm and to
verify its performance will be specified. More specifically, we propose a
homogeneous behaviour, executed by all agents, consisting of three states.
Initially, drones look for any trace of the spill in the environment. Once the
spill is detected, the drone will head to it. Finally, the drone will try to stay
inside it (to cover the slick) or in its perimeter (to mark it).
Figure 1 shows the finite state machine (FSM) that governs the behaviour
of each robot. The initial state is Wander, since we initially assume that
the position of the spill is unknown. The transition from Wander state is
performed when the agent’s visual sensor detects an oil slick. In this case,
the new state will be Resource. This state (Resource) has two possible
transitions: the spill is not detected (transition d) so the system returns to
Wander state or the amount of oil detected is >80% of the image (transition
b), so, the system switches to InResource state. The agent remains in this
state if the detected oil percentage is >80%; otherwise, the system returns to
Resource state.
Figure 1: Finite state machine (FSM) that governs the operation of each agent.
The initial state is Wander. The transition a is triggered when the agent’s visual
sensor detects an oil slick. Transition b occurs when the amount of oil detected
is >80% of the image. Transition c is triggered when the amount of oil is ≤80%
of the image. Finally, transition d is triggered if no oil is detected.
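The FSM of Figure 1 can be written directly from the transitions a–d. The 80% threshold and state names follow the text; oil_fraction, the fraction of the camera image classified as oil, is our name for the sensor input.

```python
# Minimal sketch of the three-state FSM in Figure 1.
# oil_fraction is the fraction of the image classified as oil (our name).

def next_state(state, oil_fraction):
    if state == "Wander":
        # Transition a: any detected oil slick.
        return "Resource" if oil_fraction > 0.0 else "Wander"
    if state == "Resource":
        if oil_fraction == 0.0:
            return "Wander"                  # transition d: no oil
        # Transition b: more than 80% of the image is oil.
        return "InResource" if oil_fraction > 0.8 else "Resource"
    if state == "InResource":
        # Transition c: oil drops to 80% or less of the image.
        return "InResource" if oil_fraction > 0.8 else "Resource"
    raise ValueError("unknown state: " + state)
```

The update is purely local to each agent, which is what keeps the behaviour runnable on low-capacity hardware without inter-agent communication.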
Next, each behaviour specified in the FSM will be described in more
detail.
At the beginning of the execution agents start at Wander state. At that
time they have no idea where to find an oil spill. Therefore, all directions
are equally valid and they will move randomly. The velocity of the drone at
time t is defined as
(4)
where rand is a random uniform vector defined within the interval of
the maximum and minimum velocity of the drone, and 𝜇1 is the coefficient
of variability on the current velocity. With values close to 1, the robot will
move in a totally random way.
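The wander update (4) can be sketched as follows. Since the exact form of (4) is not reproduced here, treating it as a blend of the previous velocity and a bounded uniform random vector weighted by μ1 is our assumption; it is consistent with the statement that values of μ1 close to 1 give fully random motion.

```python
import random

def wander_velocity(v_prev, v_max, mu1, rng=random):
    """Wander update sketched from (4): blend the previous velocity
    with a uniform random vector bounded by the drone's speed
    limits. The blend form is our assumption; mu1 close to 1
    gives almost fully random motion, as the text states."""
    rand_vec = (rng.uniform(-v_max, v_max), rng.uniform(-v_max, v_max))
    return tuple((1.0 - mu1) * p + mu1 * r
                 for p, r in zip(v_prev, rand_vec))
```

With μ1 = 0 the drone keeps its previous velocity; with μ1 = 1 the new velocity is drawn entirely at random within the speed bounds.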
If a drone detects the resource, it heads to the resource to position itself
at the perimeter or over it, depending on the desired behaviour. The velocity
of the agent is defined by three factors:
(5)
where 𝛼1 + 𝛼2 + 𝛼3 = 1 and these values define the intensity of each factor.
More specifically, k𝑐 specifies the direction of the robot. This direction
is determined by the area with the larger intensity resource average:
(6)
where 𝑆 is sensor readings that detect the resource at a given time, pos(𝑠)
is the position vector of a reading, and pos(rob) is the position of the robot.
We assume that the intensity of the resource is in the range [0, 1], where 0
is the complete lack of resource and 1 is the unambiguous detection of it.
𝛾 determines the direction of the velocity vector, depending on whether
the robot is outside or inside the resource. The aim of this variable is,
therefore, to keep the robot on the perimeter of the resource:
(7)
where 𝜂 is a threshold that determines, from the quantity of resource
detected (0 means no resource and 1 maximum quantity of resource), if the
agent is located on the perimeter. If the main objective of the system is that drones cover the polluted slick, then 𝛾(𝑆) will be defined as 1 for any set of readings 𝑆 at a given time.
vo specifies an avoidance vector with respect to all robots detected at a
given moment:
(8)
where 𝑅 is the set of detected robots, pos(𝑟𝑖) is the position of the detected
robot 𝑖, and pos(rob) is the position of the current robot.
Moreover, we will take into account the accuracy of the transmitted locations: there are several factors that could make these locations suboptimal. We will therefore include a random component to model this uncertainty in the movement of the robot: kr(𝑡) = kr(𝑡−1) + rand ⋅ 𝜇2, where 𝜇2 is the coefficient of variability on the velocity.
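The three factors of (5) can be combined as sketched below. Treating (5) as a convex combination, with γ acting as a ±1 sign on the resource direction kc, is our reading; the weights α1 + α2 + α3 = 1 follow the text.

```python
def resource_velocity(kc, gamma, vo, kr, a1=0.5, a2=0.3, a3=0.2):
    """Combine the three factors of (5): the resource direction kc
    (scaled by gamma = +/-1 to stay on the perimeter), the
    avoidance vector vo, and the random component kr. The convex
    combination and the example weights are our assumptions;
    a1 + a2 + a3 = 1 as in the text."""
    assert abs(a1 + a2 + a3 - 1.0) < 1e-9
    return tuple(a1 * gamma * c + a2 * o + a3 * r
                 for c, o, r in zip(kc, vo, kr))
```

Raising α2 relative to α1 spreads the drones out (stronger mutual avoidance), while raising α1 pulls them more strongly toward or along the slick boundary.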
Finally, when drones are located inside the spill and, therefore, the
borders of the resource are not detected, we assume that drones will develop
a random wander behaviour until they find water again because there is no
other information about which direction is better to follow:
(9)
where 𝜇3 is the coefficient of variability on the velocity.
In this section we have defined a microscopic behaviour for the detection
and tracking of an oil spill. Each agent of the swarm executes this behaviour
locally. It is a simple behaviour that can be run in low processing capacity
systems. This behaviour does not require any communication between
agents or a global positioning system.
EXPERIMENTATION
We will now present a set of experiments in order to test the operation of
the proposed microscopic behaviour. As we will see, these experiments use
simulations of oil spills based on real data.
We have simulated an oil spill in the coastal area of “Sant Antoni de
Portmany,” on Ibiza island, using an area of approximately 350 km2 . This
area has been chosen because of the various currents that affect it and its
proximity to the peninsula. Real weather data have been used for both sea
and air currents, considering the start date of the spill on April 10, 2013, at
0:00 h. We use these real data to run a simulation for seven days from the
start of the spill, simulating a ship loaded with 100000 barrels of oil. Figure
7 shows the map of the main area affected and the initial place of impact.
In order to work with data generated by GNOME we use the average of the
points of discharge generated by the application (More specifically, (𝑡)’ =
(10)
The previous equation shows default parameters used in simulations for
the microscopic model. These parameters have been adjusted taking into
account the microscopic behaviour experiments.
The simulation of the microscopic model has been developed using the
software MASON [17]. We use a swarm of 200 agents randomly distributed over the environment, each moving uniformly at 60 km/h. The simulation uses small drones (<3 m2) that are able to visually detect an area of 1 km2.
Using these parameters, several tests will be performed to verify the
correct operation of our model at the local level. The first tests check the convergence of the swarm for a single static slick. Using the same example, the behaviour will then be modified to cover the slick instead of marking its perimeter. Next
tests will verify the convergence of the swarm choosing an instant with
several active slicks. Finally, we will check the tracking of the movement
of the spill.
Spill Detection
First tests have been developed to detect and monitor a static slick. On the
one hand, the swarm will mark the perimeter of the slick. Figure 2 shows
the initial distribution of agents with their geographical location (a) and the
position of agents at time 𝑡 = 15000 s and 𝑡 = 30000 s ((b) and (c)). At time
𝑡 = 30000 s the swarm completely surrounds the oil slick. Figure 3 presents
the percentage of agents over an oil slick for ten independent simulations.
As it can be appreciated, this percentage increases progressively.
Figure 2: Distribution of agents with respect to time (in seconds) for the task of
detecting the perimeter of an oil spill: (a) geographical location of the spill and
initial position of agents, (b) distribution of agents at time 𝑡 = 15000 s, and (c)
distribution of agents at time 𝑡 = 30000 s.
Figure 3: Percentage of agents on an oil slick with respect to time (in seconds).
Ten different experiments have been carried out, showing the average and vari-
ance.
Figure 4: Distribution of agents for the task of covering the spill ((𝑆) = 1): (a)
geographical location of the slick and initial position of agents, (b) position of
agents at time 𝑡 = 15000 s, (c) position of agents at time 𝑡 = 30000 s.
The experiments in Figure 2 show that the operation of the swarm is
correct: 50% of agents are able to detect the spill in less than 4 hours from the beginning of the execution, correctly marking the perimeter of the spill.
However, in a real environment the appearance of several slicks is common.
This case will be analysed in the next section.
Several Spills
When an oil spill occurs, the oil may spread across kilometres generating
several oil slicks. For this reason, it is very important to verify the correct
operation of the swarm with different slicks that can spread out over a
wide area. In the simulation, at early hours, several areas containing large
proportions of pollutant are produced. For example, underwater currents can move an oil slick close to the coast, while the ship continues spilling oil into
the sea.
In order to check the operation of the microscopic model in these cases,
we have chosen an instant of the simulation where the slick has a complex
structure. Figure 5 shows the evolution of the distribution of robots. As it
can be seen in this figure, even in this complex case and without previous
data that indicates the spill trend, our model is able to locate and mark the
perimeter of the slicks. More specifically, Figure 6 presents the percentage
of agents that are located on a polluted area. In this figure it can be seen that the number of agents on the slick increases with time, the swarm thereby distributing itself uniformly over the slick.
Figure 6: Percentage of agents on an oil slick with respect to time (in seconds)
for a complex slick. Ten different experiments have been carried out, showing
the average and variance.
Figure 7: Origin and temporal evolution of the spill. Several snapshots that
show the position of the spill at different instants, measured in hours from the
beginning of the spill, are shown.
Spill Movement
The tests in Figure 5 have shown that the swarm is able to detect and mark the perimeter/area of complex slicks.
However, in a real spill, finding the location of the spill is as important as monitoring it effectively. In this section we will analyse the behaviour of
the swarm taking into account that the spill will spread as time progresses.
A simulation of 168 hours of the evolution of the spill has been performed,
starting April 10, 2013, at 0:00 h.
In order to simplify the simulation, we capture an image of the state of
the spill every 4.5 hours. Although the spill will experience changes in this
period, the swarm can deal with these changes.
Figure 7 shows the origin point of the spill and some captures of its
evolution.
As in previous experiments, the swarm is distributed randomly over the environment. In this case, in the simulation process, the location of the
resource obtained by GNOME is updated every 4.5 hours. Figure 8 shows
the evolution of the swarm with respect to the state of the oil spill.
Figure 8: Evolution of the swarm with respect to the oil slick. Multiple snap-
shots at different instants in time (measured in hours) are shown.
Figure 9 shows the percentage of drones that are on top of a resource at a
given time. In order to produce this graph, 10 different simulations have been
carried out. The graph shows the average and variance of these simulations.
In this graph a series of steps produced artificially by the simulator (because
of the sudden change from one state to the next) can be seen.
Figure 9: Percentage of agents over an oil slick with respect to time (in sec-
onds). Ten different experiments have been carried out, showing the average
and variance.
MACROSCOPIC MODEL
Once the microscopic behaviour has been described, it is interesting to see the
global behaviour of the swarm. There are several techniques used to analyse
this behaviour [18], such as the use of recurrence equations, generated from a microscopic behaviour defined by a PFSM, or the definition of differential equations. However, most of these methods only allow the analysis of the evolution of the state transitions globally.
In this work, we consider the framework proposed in [19] in order to
obtain the probability distribution of the swarm position for any time t. This
will enable us to predict, in great detail, the behaviour of the overall system.
As described by [19], once the microscopic behaviour has been defined, the
global behaviour of the system can be calculated using the Fokker-Planck
equation:
(11)
where 𝑄 is the displacement by a collision and 𝜌(r, 𝑡) 𝑑𝑟𝑥 𝑑𝑟𝑦 is the probability of encountering a robot at position r within the rectangle defined by 𝑑𝑟𝑥 and 𝑑𝑟𝑦 at time 𝑡.
This equation provides a method to statistically model a swarm of robots
based on modelling techniques of multiparticle systems from the field of
quantum physics. From a Langevin equation that represents the behaviour
of a single particle, the Fokker-Planck equation is derived for all the system.
As we have already seen in [20], the Fokker-Planck equation implements
the necessary abstraction of microscopic details as described above and treats
rapidly changing parameters such as noise. The equation is still exact if this
noise is generated by a Gaussian process, that is, if it is fully determined
by the first two moments. It gives the temporal evolution of the probability
density describing the positions of the agents.
Initially, the swarm designer must specify the functions A and 𝐵 of
(11), in accordance with the desired microscopic behaviour. Function A is
a direction and describes the deterministic motion based on information
provided by the environment and the information indirectly provided by other
robots via the environment. Function 𝐵 describes the random component of
the motion. A and 𝐵 are characterized by the underlying control algorithm.
(12)
where 𝑃 is directly related to the sensor readings in point r at time 𝑡 and
𝜇5 is a normalization term:
(13)
In (12), a function A that takes into account previous aspects is proposed.
When the density of agents increases, the probability of collision also
increases and, therefore, this situation reduces the rate at which robots are
directed by the vector specified by A. More specifically, consider the following.
𝑏𝑟 is the probability that comprises the influences of all states that develop a random or pseudorandom state. 𝑃(r, 𝑡), as commented before, is defined as the probability of encountering a robot at position r at time 𝑡. Γ is a function applied to the potential field that produces, for each position of 𝑃, the direction to be taken by drones, as a single agent would do using the function 𝛾. In our case, we use a “sliding neighbourhood” filter, as commented, for example, in [21], to perform the same calculation on 𝑃 as on the microscopic model, changing the sign of the displacement when a ratio greater than 80% of pollutant is detected. 𝐾 is a convolution operator, 𝐺 being a square Gaussian
(14)
Function 𝐵 describes the nondeterministic motion and therefore takes into account the random motion of agents. Two forces, which must be considered, take part in the microscopic behaviour. On the one hand, there are influences derived from agents that are in the Wander and InResource states. These states have a random motion, depending on the intensity of the parameters 𝜇1 and 𝜇3. On the other hand, the behaviour itself causes the environment to have areas with a higher density of agents. In these areas the probability of collision can increase, depending on the density of agents at a given time:
(15)
Thus, in (15) two terms can be observed: 𝑏𝑟 comprises the influences
of all states that develop a random or pseudorandom state, as previously
commented, and 𝜌(r, 𝑡)𝜇4 is a term that defines the connection between the
density of robots and their probability of collision.
Spill Detection
The operation of the microscopic behaviour has been analysed in the
previous section. It has been shown that the swarm is able to locate and
mark the perimeter of an oil spill with relatively simple rules. Several tests
have been developed to verify the validity of the presented model in various
cases.
In addition to the tests above, the macroscopic model makes it possible to establish, for a given discharge, the areas of the map with the highest probability of containing a robot, independently of the number of agents (provided that enough of them are used). This provides a more accurate visualization of the behaviour for large swarms without being limited by the number of agents to be simulated.
This section, using the previously presented macroscopic definition,
presents how the swarm is able to locate and mark, in a given time, an oil
spill:
(16)
The previous equation shows the default parameters used in simulations of the macroscopic model. These parameters have been adjusted taking the microscopic behaviour experiments into account. We will present the analysis of the macroscopic behaviour of the swarm at two time instants, 𝑡 = 140 h and 𝑡 = 168 h.
The simulation process is simple once the functions A and 𝐵 of the Fokker-Planck equation have been defined. Initially, an origin point for the swarm must be established. Although in microscopic simulations it is possible to establish different origin points (by setting the positions of agents randomly), the Fokker-Planck equation requires a single point. Nonetheless, several simulations with different origin points have been performed, showing that the results for large values of 𝑡 are very similar: only slight variations can be seen if the origin point is located directly on an oil slick, in which case the probability of this slick relative to the rest of the spill can be higher.
The same origin point has been used for all tests, discretizing the simulation area into 100 × 100 units. The origin point in the Fokker-Planck equation is established as (𝑥, 𝑦) = (40, 25).
Once the Fokker-Planck equation has been defined, the probability distribution that a certain agent is at a given position of the environment at a given time can be obtained iteratively: for each instant of time 𝑡, all positions of the map are traversed, and the probability at each position is updated as described by the equation.
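The iterative update described above can be sketched as an explicit finite-difference scheme. The sketch below is a hypothetical illustration, not the authors' code: the drift field A is set to zero and the diffusion field B to a constant, and periodic boundaries are used for simplicity; only the 100 × 100 grid and the origin point (40, 25) come from the text.

```python
import numpy as np

def fokker_planck_step(rho, A_x, A_y, B, dt=1.0, dx=1.0):
    """One explicit finite-difference update of the 2-D Fokker-Planck equation
    d(rho)/dt = -div(A * rho) + 0.5 * laplacian(B * rho).
    A_x, A_y: drift field components; B: diffusion field (same shape as rho)."""
    fx = A_x * rho                                  # probability flux, x component
    fy = A_y * rho                                  # probability flux, y component
    div = np.gradient(fx, dx, axis=0) + np.gradient(fy, dx, axis=1)
    g = B * rho
    # 5-point Laplacian with periodic (wrap-around) boundaries, for simplicity
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g) / dx**2
    rho_new = np.clip(rho + dt * (-div + 0.5 * lap), 0.0, None)
    return rho_new / rho_new.sum()                  # renormalise to total mass 1

# 100 x 100 grid, swarm origin at (40, 25) as in the text
rho = np.zeros((100, 100))
rho[40, 25] = 1.0
Ax = np.zeros_like(rho)           # zero drift: pure diffusion for this sketch
Ay = np.zeros_like(rho)
B = np.full_like(rho, 0.5)        # constant, illustrative diffusion strength
for _ in range(200):
    rho = fokker_planck_step(rho, Ax, Ay, B, dt=0.2)
```

In the real model, A would be built from the potential field Γ(𝑃) and B from the collision term 𝜌(r, 𝑡)𝜇4, and the iteration would be run out to the time instants of interest.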
The macroscopic state of the swarm is presented in Figure 10 for 𝑡 =
140 h. A clear convergence in the perimeter of the spill can be observed.
A three-dimensional representation of the probability distribution that an
agent is in a certain position of the environment at 𝑡 = 168 h is presented in
Figure 11. As can be observed, the macroscopic model correctly predicts the behaviour presented at the microscopic level of the swarm.
Figure 10: (a) Map (𝑀) generated from GNOME data for 𝑡 = 140 h. (b) Prob-
ability that a robot is in a certain position of the space at 𝑡 = 140 h. Sampled
using the macroscopic model of the swarm with the Fokker-Planck equation.
Model Comparison
In the previous section the behaviour of the macroscopic model and how
this model predicts the overall behaviour of the swarm have been presented.
Now, we will compare predictions of microscopic model and macroscopic
model for a specific case.
In the same way, Figure 12(b) shows the probability distribution predicted by the macroscopic model. In this figure it can be seen that the area of interest is covered in both models. There are minor differences between the models, due to the limitations of the microscopic simulation, which, among other things, depends on the number of agents used.
Nevertheless, we can compare both approaches by multiplying the two distributions. In this way only high probabilities remain, so it is easier to observe whether the areas of the spill are correctly identified in both models. Bearing this in mind, we have slightly rectified the microscopic distribution: when a limited number of agents is used in the simulation, high probabilities can, in some cases, hide important information in the distribution. In order to avoid this loss of information, we have used the square root of the microscopic distribution when comparing the two models.
The product of the macroscopic distribution and the square root of the microscopic distribution is presented in Figure 12(c). This figure shows that the most important parts of the spill are detected by both distributions, with the macroscopic model predicting the same perimeter areas detected by the microscopic model.
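The comparison procedure (square root of the microscopic distribution, then an element-wise product with the macroscopic one) can be sketched as follows. The two random arrays are hypothetical stand-ins for the actual distributions, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two distributions over the 100 x 100 map:
# p_macro from the Fokker-Planck model, p_micro from an agent simulation.
p_macro = rng.random((100, 100)); p_macro /= p_macro.sum()
p_micro = rng.random((100, 100)); p_micro /= p_micro.sum()

# The square root softens the spikes that a small number of simulated agents
# produces; the product then keeps only regions that both models rate highly.
comparison = p_macro * np.sqrt(p_micro)
agreement_mask = comparison > np.percentile(comparison, 95)
```

Cells where `agreement_mask` is true are those identified by both models, which is what Figure 12(c) visualizes.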
DISCUSSION
This paper describes a microscopic model that is able to locate and mark the perimeter of an oil spill. The microscopic behaviour presented does not require direct communication between agents. This limitation can cause the convergence of the swarm on the spill to take more time, depending on the number of drones and the size of the spill.
However, this behaviour is versatile and easy to implement and develop, even in areas without GPS coverage. It is important to highlight that a swarm system that requires direct communication between agents is limited by the maximum range of each agent and by the saturation of the radio-frequency spectrum when the system needs a large number of agents.
Moreover, we have demonstrated that the process of locating and marking
the perimeter of the spill without communication is robust and efficient. We
have shown that the swarm system is able to completely delimit the spill
if the number of agents is sufficient. In order to achieve this task, an agent
must be able to detect drones that are nearby. There are several ways to do this, for example, using a camera or transmitting the GPS position.
We propose the use of signal intensity (at a given frequency) for obstacle avoidance tasks. This strategy may present some problems (we have implemented it using a reactive behaviour); however, it has several advantages. Many countries require that drones broadcast a continuous signal indicating their position; Europe uses the 433 MHz frequency for this purpose. The intensity of the signal in a particular area can be detected using the same infrastructure. If the intensity of the signal grows with the movement of the agent, the agent must change its direction. We emphasize that, in keeping with the swarm approach, this is not communication between agents but simply a beacon that we can use, if necessary, to know the position of drones.
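A minimal sketch of this reactive behaviour, assuming a hypothetical RSSI reading at the beacon frequency; the turn angle and noise magnitudes are illustrative choices, not values from the paper:

```python
import math
import random

def reactive_avoidance_step(pos, heading, rssi_prev, rssi_now, step=1.0):
    """One reactive move. rssi_* are hypothetical received-signal-strength
    readings (dBm) at the shared beacon frequency: a rising value means
    another drone is getting closer, so the agent turns away."""
    if rssi_now > rssi_prev:
        heading += math.pi / 2 + random.uniform(-0.3, 0.3)  # turn away
    x, y = pos
    return (x + step * math.cos(heading),
            y + step * math.sin(heading)), heading

# Signal steady or fading: keep heading and advance one step.
(p, h) = reactive_avoidance_step((0.0, 0.0), 0.0, -70.0, -70.0)
```

The key design property is that no message exchange is needed: each agent reacts only to the locally measured field strength.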
The proposed macroscopic model demonstrates that the tendency of the swarm, for a sufficient number of drones, is the same as that observed in the microscopic model. The connection between the two models has been tested for a complex spill generated with GNOME. These experiments have shown that the fundamental characteristics of the behaviour (detection and monitoring) are reflected in both models. It is advisable, however, not to forget the differences between the two models.
The microscopic model defines the individual behaviour and is therefore easy to understand at the local level. However, this model does not define the behaviour of the whole swarm. In order to analyse the global behaviour, a set of tests can be defined for a large number of agents, but such tests can be expensive and difficult and are not free of problems.
The macroscopic model defines the global behaviour of the swarm. It allows us to verify the emergent behaviour arising from the interaction between all agents that run the microscopic model, and it demonstrates the tendency of the swarm for a large number of agents. The analysis of this model is complex because of the use of differential equations, which, for example, force us to choose a single point to start the simulation. Even so, this model has remarkable advantages [20]: for example, continuous analysis at any point of the environment, the temporal evolution of the probability that an agent is located at a given position, and a simulation time negligible compared to the microscopic model.
We are currently working on the implementation of this system in a real
swarm of drones. Our immediate future research focuses on this real swarm,
since it allows us to adjust the algorithms for a real system.
We are already in the testing phase for small swarms (5 drones), obtaining satisfactory results in our preliminary tests. We are using low-cost, custom-developed hexacopters to test this behaviour. The low computational needs
CONFLICT OF INTERESTS
The authors declare that there is no conflict of interests regarding the
publication of this paper.
ACKNOWLEDGMENT
This work has been supported by the Spanish Ministerio de Ciencia e
Innovación, Project TIN2009-10581.
REFERENCES
1. J. I. Medina-Bellver, P. Marín, A. Delgado et al., “Evidence for in situ
crude oil biodegradation after the Prestige oil spill,” Environmental
Microbiology, vol. 7, no. 6, pp. 773–779, 2005.
2. S. Díez, J. Sabaté, M. Viñas, J. M. Bayona, A. M. Solanas, and J.
Albaigés, “The Prestige oil spill. I. Biodegradation of a heavy fuel oil
under simulated conditions,” Environmental Toxicology and Chemistry,
vol. 24, no. 9, pp. 2203–2217, 2005.
3. J. J. González, L. Viñas, M. A. Franco et al., “Spatial and temporal
distribution of dissolved/dispersed aromatic hydrocarbons in seawater
in the area affected by the Prestige oil spill,” Marine Pollution Bulletin,
vol. 53, no. 5–7, pp. 250–259, 2006.
4. F. Nirchio, M. Sorgente, A. Giancaspro et al., “Automatic detection of
oil spills from SAR images,” International Journal of Remote Sensing,
vol. 26, no. 6, pp. 1157–1174, 2005.
5. M. N. Jha, J. Levy, and Y. Gao, “Advances in remote sensing for oil
spill disaster management: state-of-the-art sensors technology for oil
spill surveillance,” Sensors, vol. 8, no. 1, pp. 236–255, 2008.
6. C. Brekke and A. H. S. Solberg, “Oil spill detection by satellite remote
sensing,” Remote Sensing of Environment, vol. 95, no. 1, pp. 1–13,
2005.
7. M. Dorigo, E. Tuci, R. Groß et al., “The swarm-bot project,” in
Künstliche Intelligenz, pp. 31–44, Springer, Berlin, Germany, 2005.
8. M. Dorigo and E. Şahin, “Autonomous Robots: guest editorial,”
Autonomous Robots, vol. 17, no. 2-3, pp. 111–113, 2004.
9. G. Beni, “From swarm intelligence to swarm robotics,” in Swarm
Robotics, pp. 1–9, Springer, Berlin, Germany, 2005.
10. E. Şahin, “Swarm robotics: from sources of inspiration to domains
of application,” in Swarm Robotics, vol. 3342 of Lecture Notes in
Computer Science, pp. 10–20, 2005.
11. O. Soysal and E. Şahin, “A macroscopic model for self-organized
aggregation in swarm robotic systems,” in Swarm Robotics, pp. 27–42,
Springer, Berlin, Germany, 2007.
12. W. Liu, A. F. T. Winfield, and J. Sa, “Modelling swarm robotic systems:
a case study in collective foraging,” in Proceedings of the Towards
Autonomous Robotic Systems (TAROS ‘07), pp. 25–32, 2007.
12
A Survey of Modelling and
Identification of Quadrotor Robot
School of Automation and Electrical Engineering, University of Science
and Technology Beijing, Beijing 100083, China
ABSTRACT
A quadrotor is a rotorcraft capable of hover, forward flight, and VTOL, and
is emerging as a fundamental research and application platform thanks to its
flexibility, adaptability, and ease of construction. Since a quadrotor is
basically considered an unstable system with the characteristics of dynamics
Citation: Xiaodong Zhang, Xiaoli Li, Kang Wang, and Yanjun Lu, “A Survey of Modelling and Identification of Quadrotor Robot,” Abstract and Applied Analysis (Hindawi), vol. 2014, Article ID 320526, 16 pages, 2014. https://doi.org/10.1155/2014/320526.
Copyright: © 2014 Xiaodong Zhang et al. This is an open access article distributed under the Creative Commons Attribution License.
INTRODUCTION
A quadrotor attains its full range of motion propelled by four rotors placed symmetrically about its center, with smaller dimensions and simpler fabrication than a conventional helicopter with its complicated mechanism. Generally, it is classified as a rotary-wing aircraft according to its capability of hover, horizontal flight, and vertical take-off and landing (VTOL) [1]. In the 1920s, prototypes of manned quadrotors were introduced for the first time [1, 2]; however, the development of this new type of air vehicle was interrupted for several decades for various reasons, especially mechanical complexity, large size and weight, and difficulties in control. Only in recent years has a great deal of interest and effort been attracted to it, and the quadrotor has become a practical option for applications such as search-and-rescue and emergency response. As a small unmanned aerial vehicle (UAV), it comes in versatile forms from 0.3 to 4 kg. Up to now, some large quadrotors already have sufficient payload and flight endurance to undertake a number of indoor and outdoor applications, like the Bell Boeing Quad TiltRotor and so forth [3].
With the improvements in high-energy lithium batteries, MEMS sensors, and other technologies especially, the scope for commercial opportunities is rapidly increasing [4]. As a quadrotor is inexpensive and easy to design and assemble, yet exhibits complex dynamics, such a rotorcraft is emerging as a fundamental research platform for aerial robotics, addressing problems related to three-dimensional mobility and perception [5]. Furthermore, quadrotor design and implementation have even become a multidisciplinary undergraduate engineering course nowadays, with the aim of teaching students to cope with challenges such as fast and unstable dynamics and the tight integration of control electronics and sensors [6].
For specific purposes including academic research, commercial usage, and even military aims, many research groups and institutions have fabricated various quadrotors, such as the X4-flyer [7], OS4 [8], STARMAC [9], and Pixhawk [10], which have become well known online, in magazines, and in all kinds of academic journals. It is worth noting that the Draganflyer X4, Asctec Hummingbird, Gaui Quad flyer, and DJI Wookong have been introduced and developed in the commercial market. For more powerful operation, some new types of quadrotors, with tilting propellers or new configurations, have been constructed [10–15] in order to address issues such as underactuation. In addition, a number of open-source projects (OSPs) for quadrotors have emerged with contributions from RC hobbyists, universities, and corporations [16].
A quadrotor helicopter is a highly nonlinear, multivariable, strongly coupled, underactuated, and basically unstable system (6 DOF with only 4 actuators), and this fact acts as the preliminary foundation for the design of control strategies. Many controllers have been presented to overcome the complexity of the control resulting from the variable nature of the aerodynamic forces in different flight conditions [17]. Many works have been published on control issues for quadrotors, such as PID controllers [18–21], the linear quadratic (LQR) algorithm [22, 23], the 𝐻∞ loop-shaping method [24], sliding-mode variable structure control, feedback linearization [25, 26], backstepping [27, 28], and even intelligent control [29, 30]. In the works above, the linearization of the nonlinear model around the hover flight regime is conducted and used to construct controllers that stabilize the quadrotor's attitude under small roll and pitch angles. These treatments of the vehicle dynamics, based on simplistic assumptions, have often ignored known aerodynamic effects of rotorcraft vehicles. In the case of hovering and forward flight at slow velocity, those assumptions are approximately reasonable.
As quadrotor research shifts to new areas (i.e., mobile manipulation, aerobatic moves, etc.) [31, 32], the need for an elaborate mathematical model arises, and the simplistic assumptions are no longer suitable. When aggressive maneuvers such as fast forward and heave flight, VTOL, and the ground effect appear, the dynamics of quadrotors can be influenced significantly by the resulting aerodynamic forces and moments. It is shown in [33] that existing techniques of modeling and control are inadequate for accurate trajectory tracking at higher speed and in uncertain environments if aerodynamic influences are ignored. A model incorporating the full spectrum of aerodynamic effects that impact the quadrotor in faster climb, heave, and forward flight has become an area of active research, with considerable effort focusing on strategies for generating sequences of controllers to stabilize the robot to a desired state.
CHARACTERISTICS OF QUADROTOR
Typically, the structure of a quadrotor is simple enough, which comprises
four rotors attached at the ends of arms under a symmetric frame. The
dominating forces and moments acting on the quadrotor are given by rotors
driven with motors, especially BLDC motors. According to the orientation
of the blades, relative to the body coordinate system, there are two basic
types of quadrotor configurations: plus and cross-configurations shown in
Figure 1.
Euler-Lagrange Formalism
The generalized coordinates of the rotorcraft are given in [46]:
(1)
where (𝑥, 𝑦, 𝑧) = 𝜉 ∈ 𝑅3 denotes the position of the mass center of the quadrotor relative to the inertial frame and (𝜓, 𝜃, 𝜙) = 𝜂 ∈ 𝑅3 are the three Euler angles (resp., yaw, pitch, and roll), under the conditions (−𝜋 ≤ 𝜓 ≤ 𝜋) for yaw, (−𝜋/2 ≤ 𝜃 ≤ 𝜋/2) for pitch, and (−𝜋/2 ≤ 𝜙 ≤ 𝜋/2) for roll, which represent the orientation of the rotorcraft (see Figure 3).
(2)
The translational and rotational kinetic energies of the rotorcraft are
(3)
where 𝑚 denotes the mass of the quadrotor and 𝐽 = W𝑇𝐼W is the moment of inertia matrix in the inertial coordinate system, after being transformed from the body frame by the matrix W:
(4)
The only potential energy to be considered is the gravitational potential
given by 𝑈 = 𝑚𝑔z𝐸.
The Lagrangian of the rotorcraft is
(5)
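The equation image for (5) is not reproduced here; a standard form of the Lagrangian, consistent with the kinetic and potential terms defined above (cf. [46]), is:

```latex
L(q,\dot q) = T_{\mathrm{trans}} + T_{\mathrm{rot}} - U
            = \frac{m}{2}\,\dot\xi^{\top}\dot\xi
              + \frac{1}{2}\,\dot\eta^{\top} J \,\dot\eta
              - m g z_{E}.
```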
The full rotorcraft dynamics model is derived from the Euler-Lagrange
equations under external generalized forces:
(6)
where 𝐹𝜉 = 𝑅𝐹̂ is the translational force applied to the quadrotor due to the throttle control input, 𝜏 ∈ 𝑅3 represents the pitch, roll, and yaw moments, and 𝑅 denotes the rotational matrix 𝑅(𝜓, 𝜃, 𝜙) ∈ SO(3), which represents the orientation of the rotorcraft relative to a fixed inertial frame.
Since the Lagrangian contains no cross-terms in the kinetic energy combining 𝜉̇ and 𝜂̇, the Euler-Lagrange equation partitions into two parts. One obtains
(7)
(8)
Rewrite (8) as
(9)
where 𝐶(𝜂, 𝜂̇) is referred to as the Coriolis term and contains the gyroscopic and centrifugal terms.
Newton-Euler Formalism
Typically, it is necessary to define two frames of reference, each with its own right-handed coordinate system, as shown in Figure 3. 𝑋, 𝑌, and 𝑍 are the orthogonal axes of the body-fixed frame, with corresponding body linear velocity vector 𝑉 = [𝑢 V 𝜔]𝑇 and angular rate vector Ω = [𝑝 𝑞 𝑟]𝑇. The other is an Earth-fixed inertial (also known as navigation) coordinate system 𝐸 = (𝑋𝐸, 𝑌𝐸, 𝑍𝐸), with which the body-fixed frame initially coincides. The attitude of the quadrotor, expressed in terms of the Euler angles 𝜙 (roll), 𝜃 (pitch), and 𝜓 (yaw), is evaluated via sequential rotations around each of the inertial axes. Herein, 𝑂NED (North-East-Down) denotes an inertial reference frame and 𝑂𝐵 a body-fixed reference frame.
Generally, a quadrotor is considered as a rigid body in a three-dimensional
space. The motion equations of a quadrotor subject to external force 𝐹∈𝑅3
and torque 𝜏 ∈ 𝑅3 are given by the following Newton-Euler equations with
respect to the body coordinate frame 𝐵 = (𝑋𝐵, 𝑌𝐵, 𝑍𝐵):
(10)
The rotorcraft's orientation in space is given by a rotation 𝑅 from 𝐵 to 𝐸, where 𝑅 ∈ SO(3) is the rotation matrix. Here 𝑐𝜃 stands for cos(𝜃) and 𝑠𝜃 for sin(𝜃):
(11)
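As a sketch, the ZYX (yaw-pitch-roll) rotation matrix implied by this convention can be written out numerically; this is the standard construction, shown here for illustration rather than taken verbatim from the paper:

```python
import numpy as np

def rotation_matrix(psi, theta, phi):
    """R from body frame B to inertial frame E for the ZYX (yaw psi,
    pitch theta, roll phi) Euler sequence, using the c/s shorthand of
    the text (ct = cos(theta), st = sin(theta), etc.)."""
    cps, sps = np.cos(psi), np.sin(psi)
    ct, st = np.cos(theta), np.sin(theta)
    cph, sph = np.cos(phi), np.sin(phi)
    return np.array([
        [ct * cps, sph * st * cps - cph * sps, cph * st * cps + sph * sps],
        [ct * sps, sph * st * sps + cph * cps, cph * st * sps - sph * cps],
        [-st,      sph * ct,                   cph * ct],
    ])

R = rotation_matrix(0.3, 0.2, 0.1)   # arbitrary small attitude, radians
```

Since R is a rotation, it is orthonormal (R Rᵀ = I) with determinant +1, which is a convenient sanity check on any implementation.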
With the transformation R, the first equation, describing the translational dynamics, becomes
(12)
Recall the kinematic relationship between the generalized velocities 𝜂̇ and the angular velocity, Ω = W𝜂̇, W ∈ 𝑅3×3. Defining a pseudo-inertia matrix 𝐼(𝜂) = W𝑇𝐽W and a Coriolis vector 𝐶(𝜂, 𝜂̇) = 𝐼̇𝜂̇ + W𝜂̇ × 𝐼𝜂̇, one can obtain
(13)
This model has the same structure as the one obtained by the Euler-Lagrange approach; the main difference is in the expressions of 𝐼 and 𝐶, which are more complex and more difficult to implement and compute in the Euler-Lagrange case. It is important to note that the model (13) is common to all aerial robots with six degrees of freedom.
(14)
where 𝑚 is the mass of the quadrotor, f𝑏 ≜ (𝑓𝑥, 𝑓𝑦, 𝑓𝑧)𝑇 is the total force applied to the quadrotor, and w𝑏/𝑖 is the angular velocity of the airframe with respect to the inertial frame. Since the control force is computed and applied in the body coordinate system, and since 𝜔 is measured in body coordinates, (14) is expressed in body coordinates, where k𝑏 ≜ (𝑢, V, 𝜔)𝑇 and w𝑏𝑏/𝑖 ≜ (𝑝, 𝑞, 𝑟)𝑇.
For rotational motion, Newton's second law states
(15)
where h is the angular momentum and m is the applied torque, with h𝑏 = Jw𝑏𝑏/𝑖, where J is the constant inertia matrix. The quadrotor is essentially symmetric about all three axes, which implies that J = diag(𝐽𝑥, 𝐽𝑦, 𝐽𝑧). Here m𝑏 ≜ (𝜏𝜙, 𝜏𝜃, 𝜏𝜓)𝑇 denotes the rolling torque, the pitching torque, and the total yawing torque, which are induced by the rotor thrust and rotor drag acting on the airframe.
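As an illustration of this rotational equation with the diagonal inertia just described, one Euler-integration step of the body-axis dynamics J ω̇ = −ω × (Jω) + m_b might look as follows; the inertia values and body rates are hypothetical:

```python
import numpy as np

def rotational_step(omega, m_b, J, dt):
    """One Euler-integration step of the body-axis rotational dynamics
    J * omega_dot = -omega x (J * omega) + m_b, where the diagonal inertia
    J = diag(Jx, Jy, Jz) is stored as a length-3 vector."""
    omega_dot = (m_b - np.cross(omega, J * omega)) / J
    return omega + dt * omega_dot

J = np.array([0.01, 0.01, 0.02])       # hypothetical inertias, kg m^2
omega = np.array([0.1, 0.0, 0.0])      # body rates p, q, r in rad/s
# apply a small yawing torque for one 10 ms step
omega = rotational_step(omega, np.array([0.0, 0.0, 0.002]), J, 0.01)
```

The cross-product term is exactly the gyroscopic/Coriolis coupling that the simplified models later neglect when the body rates are small.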
The six-freedom-degree model for the quadrotor kinematics and
dynamics can be summarized as follows:
(16)
Equation (16) is a full nonlinear model for a quadrotor that clearly exhibits complex dynamics: strong nonlinearity, such as multiplication between system states; intensive coupling among the variables; and multivariable features. This imposes difficulties on the controller design and, on the other hand, attracts great research interest.
(17)
The rolling torque, the pitching torque, and the total yawing torque are
given by
(18)
The gravity force acting on the center of mass is given by
(19)
Equation (16) shows strongly coupled dynamics [48]: a speed change of one rotor gives rise to motion in at least 3 degrees of freedom. For instance, a speed decrease of the right rotor will roll the craft to the right, owing to the imbalance between the left and right lift forces, coupled with a yaw of the rotorcraft to the right due to the torque imbalance between the clockwise and counter-clockwise rotors, so that the translation changes direction toward the front.
Nevertheless, in cases where the rotational movement is slight, the Coriolis terms 𝑞𝑟, 𝑝𝑟, and 𝑝𝑞 are small and can be neglected, so the dynamics of the quadrotor simplifies to [47]
(20)
This model is shown in Figure 4; two similar diagrams appear in [49, 50]. Note that the attitude of the quadrotor changes subject to the input 𝜏 (moment) produced by each rotor, whereas the position/altitude dynamics block is affected by 𝑇𝑗 and the angle variables.
State-space equations are generally applied in control design and system identification. Hence, the nonlinear system of a quadrotor is written in the following formulation, which is described in different manners in [3, 27, 51]:
(21)
where
(22)
Herein the output 𝑦 is composed of 𝑥, 𝑦, 𝑧, and 𝜓 for trajectory tracking, but for hovering control 𝑦 = [𝜙, 𝜃, 𝜓, 𝑍]𝑇 should be selected, because in the translational movement shown in (22) the three state variables 𝑥, 𝑦, and 𝑧 are subordinated to the same control parameter 𝐹; hence only one state is controllable and the others are subjected to the controlled translational and angular motions.
Gyroscopic Torques
At the normal attitude, namely when the Euler angles are zero, the spin axes of the rotors coincide with the 𝑧𝐵 axis of the robot frame. However, while the quadrotor rolls or pitches, the direction of the angular momentum vectors of the four motors is forced to change. A gyroscopic torque will then be imposed on the airframe, which attempts to turn the spinning axis so that it aligns with the precession axis. Note that no gyroscopic torque occurs with rotation around the 𝑧𝐵 axis (yaw), because the spin and precession axes are already parallel [52].
The gyroscopic (inertial) moment is modeled in [53] as
(23)
where 𝐽𝑟 is the gyroscopic inertia, namely that of the rotating part of the rotor, and Ω𝑖 is the angular rate of rotor 𝑖 (𝑖 = 1, 2, 3, 4).
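A sketch of how such a gyroscopic term could be computed, assuming alternating rotor spin directions and a hypothetical rotor inertia J_r; the exact form of (23) in [53] may differ from this illustration:

```python
import numpy as np

def gyroscopic_moment(omega_body, rotor_speeds, J_r=6e-5):
    """Illustrative gyroscopic moment on the airframe: each rotor spinning at
    Omega_i about z_B contributes J_r * Omega_i * (omega x z_B), signed by its
    rotation direction (assumed alternating CW/CCW on a quadrotor)."""
    z_b = np.array([0.0, 0.0, 1.0])
    signs = np.array([1.0, -1.0, 1.0, -1.0])   # assumed rotor directions
    residual = np.dot(signs, rotor_speeds)      # net rotor angular rate
    return J_r * residual * np.cross(omega_body, z_b)

# rolling at 0.2 rad/s with slightly unbalanced rotor speeds (rad/s)
M = gyroscopic_moment(np.array([0.2, 0.0, 0.0]),
                      np.array([400.0, 410.0, 400.0, 410.0]))
```

Consistent with the text, a pure yaw rate (ω parallel to z_B) produces zero gyroscopic moment, since ω × z_B vanishes.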
Linearized Model
The full nonlinear model is very useful, as it provides insight into the behavior of the vehicle. However, a linear model is also widely used, owing to the abundance of well-studied tools available for control system design. As we can see, most of the controllers are based on the nonlinear model under hover conditions and are stable and effective only around reasonably small angles.
Typically, the linearization of a nonlinear state-space model 𝑥̇ = 𝑓(𝑥) + 𝑔(𝑥)𝑈 is executed at an equilibrium point of the model
(24)
Then, the linear model is derived by
(25)
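A generic way to obtain the matrices of such a linear model is numerical differentiation at the equilibrium point. The sketch below uses central finite differences and a toy dynamics function, not the quadrotor model itself:

```python
import numpy as np

def linearize(f, x_eq, u_eq, eps=1e-6):
    """Numerical Jacobians A = df/dx and B = df/du of x_dot = f(x, u),
    evaluated at an equilibrium point by central finite differences."""
    n, m = len(x_eq), len(u_eq)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        d = np.zeros(n); d[i] = eps
        A[:, i] = (f(x_eq + d, u_eq) - f(x_eq - d, u_eq)) / (2 * eps)
    for j in range(m):
        d = np.zeros(m); d[j] = eps
        B[:, j] = (f(x_eq, u_eq + d) - f(x_eq, u_eq - d)) / (2 * eps)
    return A, B

# toy dynamics x_dot = [x2, -x1 + u], a stand-in for the quadrotor model
def f(x, u):
    return np.array([x[1], -x[0] + u[0]])

A, B = linearize(f, np.zeros(2), np.zeros(1))
```

For the quadrotor, `f` would be the full nonlinear model (16) and the equilibrium the hover condition discussed next.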
As hovering is one of the most important regimes for a quadrotor, the condition of equilibrium of the quadrotor at this point, in terms of (24)-(25), is given as in [54]:
(26)
(27)
and an integral action is designed for system stabilization, despite the issues of controller switching between the two linear systems.
AERODYNAMIC EFFECTS
In most research projects, quadrotor dynamics models have often ignored known aerodynamic effects of rotorcraft vehicles, because only stability while hovering is the aim, as stated before. At slow velocities, such as while hovering, this is indeed a reasonable assumption [61]. However, in the case of demanding flight trajectories, such as fast forward and descent flight manoeuvres, as well as in the presence of the in-ground effect, these aerodynamic phenomena can significantly influence the quadrotor's dynamics [33], and the performance of the control will be diminished if aerodynamic effects are not considered [62], especially in situations where the aircraft is operating at its limits (e.g., carrying a heavy load, single-engine breakdown, etc.).
Acting as the propulsion system, the aerodynamics of the rotors plays the most important role in the movement of the quadrotor, apart from gravity and the air drag on the airframe. The kinematics and dynamics of the rotors are fairly complex, resulting from the combination of several types of motion, such as rotation, flapping, feathering, and lagging; normally the last two are negligible [3, 4]. Theoretical models based on blade element theory (BET) combined with momentum theory (MT) show many advantages, being more flexible, simpler, and more convenient than empirical models based on data typically obtained in the wind tunnel.
Note that the application of helicopter theory to a quadrotor is not straightforward, because of many important differences between conventional helicopters and quadrotors [1]. In order to address these issues, specific research aimed at the quadrotor vehicle is necessary to establish a full model of the complex dynamics subject to aerodynamic forces and moments. Many works [33, 40, 62–66] on rotor models have been done based on the results obtained for conventional helicopters [67].
Blade flapping is of significant importance in understanding the natural stability of quadrotors [4]. Since it induces forces in the x-y rotor plane of the quadrotor (the underactuated directions of the dynamics), high-gain control cannot easily be implemented against the induced forces. On the other hand, the total thrust variation owing to vertical maneuvers also imposes a nonignorable influence on the quadrotor's behavior.
Total Thrust
Among the simplifications of aerodynamic effects, the assumption that a rotor's thrust is proportional to the square of its angular velocity is the most common. However, it has been proved that this assumption is far from reality, especially in nonhovering regimes.
The helicopter literature [67–69] analyses many effects on the total thrust in more detail, among which translational lift and change of angle of attack are the two most relevant. As a rotorcraft flies translationally, the momentum of the airstream induces an increase in lift force, which is known as translational lift. The angle of attack (AOA) of the rotor with respect to the free stream also influences the lift, with an increase in AOA increasing thrust, just as in aircraft wings.
Applying blade element theory to quadrotor construction, the expression
for rotor thrust 𝑇 is given in [56]:
(28)
and a thrust coefficient 𝐶𝑇 is given in
(29)
where 𝜇 = 𝑉𝑥𝑦/𝑅Ω, 𝜆𝑐 = 𝑉𝑧/𝑅Ω, and 𝜆𝑖 = V𝑖/𝑅Ω are speed coefficients; 𝑉𝑥𝑦 and 𝑉𝑧 are the horizontal and vertical components of the total air stream velocity, respectively, V𝑖 is the induced velocity, and Ω is the rotor angular speed. The other parameters and coefficients in the formulation above are not described here; refer to [56].
Especially in the hovering regime, 𝜇 = 0 and 𝜆𝑐 = 0 (i.e., static conditions) yield
(30)
So one can obtain 𝑇 ∝ Ω2, just like the relationship between 𝑇 and Ω used in the common simplified assumption.
One can solve this problem by calculating the induced velocity coefficient 𝜆𝑖 involved in the two aerodynamic principles, momentum theory and blade element theory, in view of the fact that the macroscopic momentum equation and the microscopic blade element equation give the same rotor thrust formulation:
(31)
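As a sketch of the momentum-theory side of this balance: in axial flight the induced velocity satisfies v_i (V_c + v_i) = v_h², where v_h is its hover value, so λ_i, and with it the thrust for a given Ω, changes with climb speed. The symbols v_h and v_c here are illustrative names, not the paper's notation:

```python
import math

def induced_velocity_climb(v_c, v_h):
    """Momentum-theory induced velocity in axial flight: the positive root of
    v_i * (v_c + v_i) = v_h**2, with v_h the hover induced velocity and
    v_c the climb speed (both m/s)."""
    return -v_c / 2.0 + math.sqrt((v_c / 2.0) ** 2 + v_h ** 2)

v_h = 5.0                                       # hypothetical hover value, m/s
v_i_hover = induced_velocity_climb(0.0, v_h)    # equals v_h at hover
v_i_climb = induced_velocity_climb(4.0, v_h)    # smaller than v_h in climb
```

This is why T ∝ Ω² holds only under static conditions: away from hover, the induced-flow term shifts and the thrust at a fixed rotor speed changes.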
Blade Flapping
When a rotor translates horizontally through the air, the advancing blade of the rotor has a higher velocity relative to the free stream and will generate more lift than the retreating blade, which sees a lower effective airspeed. This causes an imbalance in lift, inducing an up-and-down oscillation of the rotor blades, which is known as blade flapping [63, 64]. When a quadrotor in steady state undergoes blade flapping, its rotor plane tilts at some angle off vertical, causing a deflection of the thrust vector, as illustrated in Figure 6.
(32)
where ℎ is the vertical distance from the rotor plane to the center of
gravity of the vehicle and 𝑇 is the thrust.
In addition, in the case of stiff rotors without hinges at the hub, there is
also a moment 𝑀𝑏, generated directly at the rotor hub from the flapping of
the blades:
(33)
where 𝑘𝛽 is the stiffness of the rotor blade in Nm/rad. The total
longitudinal moment created by blade flapping 𝑀𝑏𝑓 is the sum of these two
moments:
(34)
Although an exactly designed controller may successfully counteract small
disturbances, it is difficult to reject the large systematic disturbances that
result from aerodynamic effects such as blade flapping. To improve control
performance, it is necessary to design a feedforward compensation block to
cancel out the moments and forces resulting from blade flapping and variations
in total thrust [33].
The ground effect on rotor thrust is commonly modeled as
T/T∞ = 1 / (1 − (R/4z)²), (35)
Here 𝑅 is the radius of the rotor, 𝑧 is the vertical distance from the
ground, 𝑇 is the thrust produced by the propeller in ground effect, and 𝑇∞ is
the thrust produced at the same power outside of ground effect.
Note that for 𝑧/𝑅 = 2 the predicted ratio between 𝑇 and 𝑇∞ is just 1.016.
Therefore, this formula (35) predicts that ground effect is negligible when
the rotor is more than one diameter off the ground, that is, 𝑧/𝑅 > 2.
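The 1.016 figure above can be checked directly. The sketch below assumes (35) takes the classical Cheeseman–Bennett form T/T∞ = 1/(1 − (R/4z)²), which reproduces the quoted ratio at z/R = 2.

```python
def ground_effect_ratio(z, R):
    """Thrust ratio T / T_inf in ground effect, assuming (35) is the
    Cheeseman-Bennett formula T/T_inf = 1 / (1 - (R/(4z))**2); valid for
    a rotor well above the ground (z/R > 0.25)."""
    return 1.0 / (1.0 - (R / (4.0 * z)) ** 2)

# At z/R = 2 the ratio is about 1.016, matching the value quoted above,
# while close to the ground (z/R = 0.5) the predicted thrust gain is large.
```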
Besides the ground effect, the "ceiling effect" is another issue that needs to
be researched, because a quadrotor, unlike a conventional helicopter, can fly
indoors. The so-called ceiling effect occurs when the vehicle is close to an
overhead plane: the effect pulls the vehicle toward the ceiling, which in the
worst case can cause a crash. The effect has been demonstrated in a set of
experiments; unfortunately, no formal formulation of the ceiling effect has
been presented so far.
Parameter Identification
Using the linearized system dynamics, after treatments such as neglecting the
nonlinear coupling terms, parameter identification [85] is performed to
identify each quadrotor axis separately in closed loop. The generic scheme of
the identification process is depicted in Figure 7.
u̇ = X_u u + X_q q + X_θ θ + X_lon δ_lon (36)
where the state derivative 𝑢̇ is linearly related to the regressors, 𝑢, 𝑞,
𝜃, and 𝛿lon, by their corresponding parameters, 𝑋𝑢, 𝑋𝑞, 𝑋𝜃, and 𝑋lon. This
process of regressor pool correlation is repeated for each state derivative.
After the appropriate model structure has been determined, the next
step is to determine the value and error of each parameter by a linear least
squares method. These values form the dynamics and control matrices, 𝐴
and 𝐵. At the same time, the error values are also adjusted to account for any
remaining uncharacterized behavior, known as colored residuals.
The method herein is considered the basic algorithm in the realm of system
identification and is used to address the model identification of linear
systems. It should be noted that the method is suited to SISO (single input,
single output) systems; therefore, characteristics such as cross-coupling must
be mitigated in advance.
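The regressor-pool step described above reduces to an ordinary linear least-squares problem. A minimal sketch with synthetic data (all numbers invented for illustration; the survey estimates these derivatives from flight data):

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" stability derivatives for a synthetic longitudinal axis; the
# numbers are invented for illustration only.
X_true = np.array([-0.3, 0.05, -9.81, 1.2])   # X_u, X_q, X_theta, X_lon

N = 2000
regressors = np.column_stack([
    rng.normal(size=N),    # u: forward speed
    rng.normal(size=N),    # q: pitch rate
    rng.normal(size=N),    # theta: pitch angle
    rng.normal(size=N),    # delta_lon: longitudinal input
])
udot = regressors @ X_true + 0.01 * rng.normal(size=N)   # measured u-dot

# The regressor-pool step: linear least squares for X_u, X_q, X_theta, X_lon.
X_hat, residuals, rank, sv = np.linalg.lstsq(regressors, udot, rcond=None)
```

The residuals returned by the solver are what the text calls colored residuals when they retain uncharacterized dynamics.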
Frequency-Domain Identification
A frequency-domain system identification method is used to obtain a linear
representation of the quadrotor dynamics [87]. In contrast to time-domain
analysis, frequency-domain identification reduces the errors associated with
bias effects and process noise, resulting in a relatively robust model.
In the algorithm, the frequency response data acquired is validated by
evaluating its coherence, which is an indication of how well the output and
input data are correlated. The definition of coherence is given as
γ²_xy(f) = |G_xy(f)|² / (G_xx(f) G_yy(f)) (37)
where G_xx(f), G_yy(f), and G_xy(f) represent the autospectral density of the
input, the autospectral density of the output, and the cross-spectral density
of the input and output, respectively, and f is the frequency. A perfect
correlation between input and output would result in a coherence value of
unity, while poor coherence typically falls below a value of 0.6.
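As a quick illustration of the coherence check, the sketch below uses `scipy.signal.coherence` on a synthetic input/output pair; the signals and the use of the 0.6 threshold are illustrative, not the flight data of [87].

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 100.0                                    # sample rate, Hz (illustrative)
u = rng.normal(size=4096)                     # excitation input
y_good = np.convolve(u, [0.5, 0.3, 0.2], mode="same")   # clean linear response
y_bad = y_good + 5.0 * rng.normal(size=4096)  # output buried in noise

f, c_good = coherence(u, y_good, fs=fs, nperseg=256)
f, c_bad = coherence(u, y_bad, fs=fs, nperseg=256)
# c_good stays near unity; c_bad falls below the 0.6 threshold from the text.
```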
It might also be noted that, after the coherence of the data is validated, the
data must be decoupled such that the inputs provided by off-axis commands are
rejected from the output on the axis of interest. The multiple-input
single-output system estimate can be expressed as in (38), where Ĥ is the
system estimate:
(38)
In the system identification process, the transfer functions of each axis
will be acquired first, followed by state space representations and complete
system analysis. The single input-single output (SISO) transfer function
identification cost function can be defined as
(39)
where the parameters, such as n_ω and ω_1, are defined in [87].
As shown in [87], based on a rational experimental setup, the frequency-domain
system identification method obtains a linear representation of the quadrotor
dynamics. It might also be noted that a periodic excitation signal is chosen
to minimize leakage in the computation of the frequency spectra, which is
still an open problem in the area.
(40)
where x ∈ R^n, u ∈ R^m, and y ∈ R^p are, respectively, the state, input, and
output vectors, and w ∈ R^n and v ∈ R^p are the process and the measurement
noise, respectively, modeled as Wiener processes with incremental covariance
given by
(41)
The system matrices A, B, C, and D, of appropriate dimensions, are such that
(A, C) is observable and (A, [B, Q^1/2]) is controllable. Assume that a data
set {u(t_i), y(t_i)}, i ∈ [1, N], of sampled input/output data obtained from
(40) is available. Then, the problem is to provide a consistent estimate of
the state space matrices A, B, C, and D on the basis of the available data.
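The full PBSIDopt algorithm is beyond the scope of a short example, but the basic idea of estimating state-space matrices from sampled data can be sketched in discrete time, under the simplifying assumption (not made by the survey) that the full state is measured:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth discrete-time system (illustrative values only).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])

N = 1000
u = rng.normal(size=(N, 1))
x = np.zeros((N + 1, 2))
for k in range(N):                 # simulate with a small process noise
    x[k + 1] = A @ x[k] + B @ u[k] + 1e-3 * rng.normal(size=2)

# Stack x_{k+1} = [A B] [x_k; u_k] and solve by least squares.
Z = np.hstack([x[:-1], u])         # N x (n + m) regressor matrix
Theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_hat, B_hat = Theta[:2].T, Theta[2:].T
```

When only inputs and outputs are available, as in the survey's setting, subspace methods such as PBSIDopt reconstruct the state sequence first; the least-squares step above is only the final, simplest piece of that machinery.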
Note that both the model order and the tuning parameters of the identification
algorithm (i.e., the position of the Laguerre pole a and the parameters of the
PBSIDopt algorithm) need to be chosen at the start of the procedure; herein, a
cross-validation approach, explained in detail in [77], is used to address the
issue.
As can be observed from the experiments, in which the input signal adopted for
the identification experiments is the so-called 3211 piecewise constant
sequence, the identified models capture the essential features of the response
of the quadrotor along all the axes. The SMI method is rapid and easy to use;
however, the model derived from the algorithm is not based on any optimality
criterion, so the obtained model cannot be expected to be optimal.
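For reference, the 3211 piecewise-constant excitation mentioned above can be generated as follows (amplitude, time unit, and sampling interval are illustrative choices, not the experiments' values):

```python
import numpy as np

def seq_3211(amplitude=1.0, unit=0.5, dt=0.01):
    """Generate a 3211 piecewise-constant excitation: pulses lasting 3, 2,
    1, and 1 time units with alternating sign. Amplitude, unit length, and
    sample time are illustrative, not the values used in the experiments."""
    samples = []
    for duration, sign in zip([3, 2, 1, 1], [1, -1, 1, -1]):
        samples.extend([sign * amplitude] * int(round(duration * unit / dt)))
    return np.array(samples)

sig = seq_3211()
```

The 3-2-1-1 pulse lengths spread the input energy over a wide frequency band, which is why this sequence is a common choice for identification experiments.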
UKF Method
As we know, if the system has severe nonlinearities, the EKF can be hard to
tune and often gives unreliable estimates, because of the linearization that
the EKF relies on to propagate the mean and covariance of the states.
Therefore, the UKF is applied to the identification of a quadrotor model [88].
For the quadrotor system with continuous-time system dynamics,
(42)
Given the state vector of the quadrotor
(43)
the system state equation can be derived according to the full nonlinear
equations. Finally, based on all of the system equations, the parameters to be
estimated and identified are formulated as follows:
(44)
Based on the experiments, the error of the velocity estimate along the x-axis
is less than 0.001, while the errors along the y- and z-axes are less than
0.0015. For the estimates of the angular velocities about the x-, y-, and
z-axes, the errors are less than 0.0015. From the computed errors, it can be
concluded that the UKF output matches the measured output and that the
measurement noise is well filtered by the UKF. The filter has a good
convergence time and is relatively reliable for these estimations.
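A full UKF listing would be lengthy, but its core, the unscented transform that propagates sigma points through the nonlinearity instead of linearizing as the EKF does, can be sketched as follows (basic Julier–Uhlmann weights; a minimal sketch, not the identification filter of [88]):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate (mean, cov) through a nonlinearity f with sigma points,
    using the basic Julier-Uhlmann weights."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # scaled matrix square root
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigma])
    y_mean = w @ ys
    d = ys - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear map the transform is exact, which is a convenient sanity check; its advantage over the EKF appears when f is strongly nonlinear, as in the full quadrotor equations.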
(45)
where n_y, n_u, m, and h define the order of the system, d is the delay factor
of the system, and e(t) is white noise. A quadrotor is a nonlinear system with
four inputs and three outputs: the inputs are the input voltages of the four
rotors, that is, U(t) = [V_f(t) V_r(t) V_l(t) V_b(t)]^T, and the three outputs
are the pitch, roll, and yaw angles, respectively, that is,
Y(t) = [p(t) r(t) y(t)]^T, with w(t) = [p(t) r(t)]^T. A set of simulation
tests shows that the error of the RBF-ARX model is closest to a normal
distribution, which indicates that a good model has been obtained. In
addition, no matter how the state variable w(t) varies, the distribution of
the system poles does not leave the stable region. The RBF-ARX model is
therefore suitable for the quadrotor.
Black Models
Data-Based Model
The main purpose of data-based techniques is to take full advantage of the
information acquired from huge amounts of process measurements. Without
recourse to physical models obtained from first principles, a relatively
complete picture of system performance can be revealed via the available
measurements. Through deep insight into the process measurements, information
such as system characteristics and regularities can be extracted for optimal
modeling and decision making. The description in [91] reveals the insight and
application of data-based model identification schemes.
It is noted that several typical data-based approaches that depend only on
process measurements, namely principal component analysis (PCA), partial least
squares (PLS), and their variants, are successfully utilized in many areas
[92–96]. In the realm of modeling and control, the iterative learning control
(ILC) scheme and model-free adaptive control (MFAC), which are in essence
model-free methods, show great advantages in requiring no a priori knowledge
about the underlying processes, such as time delay and system order, despite
their potential limits for processes of high complexity.
CONFLICT OF INTERESTS
The authors declare that there is no conflict of interests regarding the
publication of this paper.
ACKNOWLEDGMENTS
This work was supported by the Fundamental Research Funds for the Central
Universities (Grant no. FRF-TP-12-005B), Program for New Century
Excellent Talents in Universities (Grant no. NCET-11-0578), Aeronautical
Science Foundation of China (Grant no. 2012ZD54013), and Specialized
Research Fund for the Doctoral Program of Higher Education (Grant no.
20130006110008).
REFERENCES
1. V. Martinez, Modeling of the flight dynamics of a quadrotor Helicopter
[M.S. thesis], Cranfield University, Cranfield, UK, 2007.
2. M. Y. Amir and V. Abbass, “Modeling of quadrotor helicopter
dynamics,” in Proceedings of the International Conference on Smart
Manufacturing Application (ICSMA ‘08), pp. 100–105, Kintex, April
2008.
3. G. Warwick, “US Army looking at three configuration concepts for
large cargo rotorcraft,” September 2013, http://www.flightglobal.com/
news/articles/us-army-looking-at-three-configuration-concepts-for-
large-cargo-rotorcraft-217956/.
4. R. Mahony, V. Kumar, and P. Corke, “Multirotor aerial vehicles:
modeling, estimation, and control of quadrotor,” IEEE Robotics and
Automation Magazine, vol. 19, no. 3, pp. 20–32, 2012.
5. R. Mahony and V. Kumar, “Aerial robotics and the quadrotor,” IEEE
Robotics and Automation Magazine, vol. 19, no. 3, p. 19, 2012.
6. I. Gaponov and A. Razinkova, “Quadcopter design and implementation
as a multidisciplinary engineering course,” in Proceedings of the 1st
IEEE International Conference on Teaching, Assessment, and Learning
for Engineering (TALE ‘12), pp. B16–B19, IEEE, Hong Kong, China,
August 2012.
7. N. Guenard, T. Hamel, and R. Mahony, “A practical visual servo control
for an unmanned aerial vehicle,” IEEE Transactions on Robotics, vol.
24, no. 2, pp. 331–340, 2008.
8. S. Bouabdallah and R. Siegwart, “Towards intelligent miniature flying
robots,” in Springer Tracts in Advanced Robotics, vol. 25, pp. 429–440,
Springer, Berlin, Germany, 2006.
9. G. Hoffmann, D. G. Rajnarayan, S. L. Waslander, D. Dostal, J. S. Jang,
and C. J. Tomlin, “The stanford testbed of autonomous rotorcraft for
multi agent control (STARMAC),” in Proceedings of the 23rd Digital
Avionics Systems Conference (DASC ‘04), vol. 2, pp. 12.E.4–121.10,
Salt Lake City, Utah, USA, October 2004.
10. L. Meier, P. Tanskanen, F. Fraundorfer, and M. Pollefeys, “PIXHAWK:
a system for autonomous flight using onboard computer vision,” in
Proceedings of the IEEE International Conference on Robotics and
Automation (ICRA ‘11), pp. 2992–2997, IEEE, Shanghai, China, May
2011.
Robots and Systems (IROS ‘07), pp. 153–158, San Diego, Calif, USA,
November 2007.
53. C. A. Herda, Implementation of a quadrotor unmanned aerial vehicle
[M.S. dissertation], California State University, Los Angeles, Calif,
USA, 2012.
54. J. M. B. Domingues, Quadrotor prototype [M. S. dissertation], Instituto
Superior Técnico, Lisbon, Portugal, 2009.
55. A. Tayebi and S. McGilvray, “Attitude stabilization of a VTOL
quadrotor aircraft,” IEEE Transactions on Control Systems Technology,
vol. 14, no. 3, pp. 562–571, 2006.
56. N. A. Chaturvedi, A. K. Sanyal, and N. H. McClamroch, “Rigid-body
attitude control,” IEEE Control Systems Magazine, vol. 31, no. 3, pp.
30–51, 2011.
57. V. Mistler, A. Benallegue, and N. K. M’Sirdi, “Exact linearization and
noninteracting control of a 4 rotors helicopter via dynamic feedback,”
in Proceedings of the 10th IEEE International Workshop on Robot and
Human Communication, pp. 586–593, Paris, France, September 2001.
58. J. P. How, B. Bethke, A. Frank, D. Dale, and J. Vian, “Real-time
indoor autonomous vehicle test environment,” IEEE Control Systems
Magazine, vol. 28, no. 2, pp. 51–64, 2008.
59. M. J. Stepaniak, A quadrotor sensor platform [Ph.D. dissertation],
Ohio University, Athens, Ohio, USA, 2008.
60. A. Nanjangud, “Simultaneous low-order control of a nonlinear
quadrotor model at four equilibria,” in Proceedings of the IEEE
Conference on Decision and Control, pp. 2914–2919, Florence, Italy,
December 2013.
61. R. K. Agarwal, Recent Advances in Aircraft Technology, InTech,
Rijeka, Croatia, 2012.
62. G. M. Hoffmann, H. M. Huang, S. L. Waslander, and C. J. Tomlin,
“Quadrotor helicopter flight dynamics and control: theory and
experiment,” in Proceedings of the AIAA Guidance, Navigation and
Control Conference and Exhibit, p. 6461, Hilton Head, SC, USA, 2007.
63. P. Pounds, R. Mahony, J. Gresham, P. Corke, and J. Roberts, “Towards
dynamically favourable quad-rotor aerial robots,” in Proceedings of
the Australasian Conference on Robotics and Automation, Canberra,
Australia, December 2004.
64. G. M. Hoffmann, S. L. Waslander, and C. J. Tomlin, “Quadrotor
13
Visual Flight Control of a Quadrotor
Using Bioinspired Motion Detector
Citation: Lei Zhang, Tianguang Zhang, Haiyan Wu, Alexander Borst, and Kolja
Kühnlenz, “Visual Flight Control of a Quadrotor Using Bioinspired Motion
Detector”, International Journal on Navigation and Observation, Volume 2012,
Article ID 627079, 9 pages, https://doi.org/10.1155/2012/627079.
Copyright: © 2012 Lei Zhang et al. This is an open access article distributed under the
Creative Commons Attribution License
ABSTRACT
Motion detection in the fly is extremely fast and has low computational
requirements. Inspired by the fly's visual system, we focus on real-time
flight control of a miniquadrotor with fast visual feedback. In this work, an
elaborated elementary motion detector (EMD) is utilized to detect local
optical flow. Combined with novel receptive-field templates, the yaw rate of
the quadrotor is estimated through a lookup table established with this
bioinspired visual sensor. A closed-loop control system with feedback of the
yaw rate estimated by the EMD is designed. With the motion of the other
degrees of freedom stabilized by a camera tracking system, the yaw rate of the
quadrotor during hovering is controlled based on EMD feedback in a real-world
scenario. The control performance of the proposed approach is compared with
that of a conventional approach. The experimental results demonstrate the
effectiveness of utilizing the EMD for quadrotor control.
INTRODUCTION
Flying insects have tiny brains and mostly possess compound eyes that capture
a panoramic view of the scene, supporting excellent flight performance.
Compared with state-of-the-art artificial visual sensors, the optics of
compound eyes provide very low spatial resolution. Nevertheless, the behavior
of flying insects is mainly dominated by visual control. They use visual
feedback to stabilize flight [1], control flight speed [2], and measure
self-motion [3].
On the other hand, highly accurate real-time stabilization and navigation
of unmanned aerial vehicles (UAVs) or microaerial vehicles (MAVs) is
becoming a major research interest, as these flying systems have significant
value in surveillance, security, search, and rescue missions. Thus, the
implementation of a bio-plausible computation for visual systems could be
an accessible method to replace the traditional image processing algorithms
in controlling flying robots such as a quadrotor.
Most early applications of insect-inspired motion detectors focus on motion
detection tasks rather than velocity estimation. In robotics and automation
applications, EMDs are mainly used for a qualitative interpretation of video
image sequences, to provide general motion information such as orientation and
obstacles ahead. In [4], a microflyer with an onboard lightweight camera is
developed, which is able to fly indoors while avoiding obstacles by detecting
certain changes in optic flow. A recent approach to navigation in a corridor
environment with an autonomous quadrotor using optical flow integration is
shown in [5].
Figure 1: The simple Reichardt motion detector (a) and the elaborated EMD
model (b). A and B are two visual signal inputs. The moving luminance signal
is observed by this pair of visual sensors (such as the ommatidia of a fruit fly).
The detector compares visual sensor inputs and generates a direction sensitive
response R corresponding to the visual motion. D: delay block (here, a
low-pass filter); HP: high-pass filter; M: multiplication.
In [12], a mathematical analysis of the original Reichardt motion detector is
given regarding the response to different images (sinusoidal gratings as well
as natural images). Without loss of generality, we first consider the response
of this modified model to a moving natural image (which possesses energy at
all spatial frequencies). Similar to the response of the simplified model
[12], for this modified model, the output is
(1)
Figure 2: Rotation templates: horizontal (a) and vertical (b). The response of
horizontal rotation is calculated by subtracting the upper part and lower part
of the template of receptive fields, while the vertical rotation is detected
by the difference between the left part and right part of the global optical
flow.
The algorithm for calculating the global rotation response can be described as
follows (image size: length × width; RH: response of local horizontal motion;
RV: response of local vertical motion):
(2)
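A minimal discrete-time version of the simple Reichardt correlator of Figure 1(a) (without the high-pass elaboration) illustrates the direction sensitivity that the templates above aggregate; the stimulus, sensor spacing, and time constant are illustrative choices, not the paper's parameters:

```python
import math

def emd_response(velocity, n_steps=2000, dt=1e-3, tau=0.02, spacing=0.3):
    """Mean response of a simple Reichardt correlator, R = D(A)*B - D(B)*A,
    where the delay D is a first-order low-pass filter and A, B sample a
    drifting sinusoidal luminance pattern at two nearby points."""
    alpha = dt / (tau + dt)           # discrete low-pass smoothing factor
    lp_a = lp_b = 0.0
    total = 0.0
    for k in range(n_steps):
        t = k * dt
        a = math.sin(2.0 * math.pi * (0.0 - velocity * t))
        b = math.sin(2.0 * math.pi * (spacing - velocity * t))
        total += lp_a * b - lp_b * a  # correlate delayed arm with direct arm
        lp_a += alpha * (a - lp_a)    # update the delay (low-pass) states
        lp_b += alpha * (b - lp_b)
    return total / n_steps
# The sign of the mean response flips with the direction of motion.
```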
Now we examine the feasibility of using this model for velocity
estimation tasks. In [12], two criteria are quantified for an accurate velocity
estimation system: (1) image motion at a fixed velocity should always have
(3)
speed between the two counterrotating rotor pairs. Increasing the rotating
speed of all four motors by the same amount causes an upward movement of the
quadrotor. Tuning the differential speed between the two motors of a single
rotor pair makes the quadrotor fly sideways.
Figure 3: Quadrotor dynamics. (a) the overhead view of a quadrotor; (b) six
degrees of freedom of the flying robot.
The work in this section is based on earlier related work in [18]. In this
work, we set up an indoor GPS system using multicamera tracking instead of the
former two-camera tracking. By tracking the four markers installed on the axes
of the quadrotor, the 3D position as well as the pose of the flying robot can
be estimated. The experimental setup of 3D tracking is further introduced in
Section 5. The frame of the quadrotor dynamics is the same as the one in
Figure 3(b). We have the following definitions:
Marker i position vector: Si = (xi, yi, zi)T (i = 1, 2, 3, 4);
Central point vector between two nonadjacent markers: Mj = (xMj, yMj, zMj)T (j = 1, 2);
Estimated central point vector of the quadrotor: Mq = (xM, yM, zM)T;
Orientation of marker i: Vi = (xVi, yVi, zVi)T (i = 1, 2, 3, 4).
The markers are counted clockwise, with the first marker on the main axis. For
3D pose control, the central point is used as the reference position of the
quadrotor. The central points of the segments between marker 1 and marker 3
and between marker 2 and marker 4 are M1 = (1/2)(S1 + S3) and
M2 = (1/2)(S2 + S4). Because of marker noise in the tracking system, these two
central points are not identical; thus, the central point of the quadrotor is
taken as Mq = (1/2)(M1 + M2). The vectors between the central point and
marker 1 (for pitch) and between the central point and marker 2 (for roll) are
Vi = Si − Mq (i = 1, 2). The values of the pitch θ, roll φ, and yaw ψ angles
can then be calculated, and thus the 3D pose of the quadrotor can be
estimated:
(4)
(5)
(6)
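The marker-based pose construction above can be sketched as follows. Since the bodies of (4)–(6) are not reproduced in this excerpt, the angle formulas below are one plausible reading, and the sign conventions are assumptions:

```python
import math

def quad_pose(s1, s2, s3, s4):
    """Central point and attitude angles from the four marker positions.
    The angle formulas are one plausible reading of (4)-(6), whose bodies
    are not reproduced here; sign conventions are assumptions."""
    m1 = [(a + b) / 2.0 for a, b in zip(s1, s3)]
    m2 = [(a + b) / 2.0 for a, b in zip(s2, s4)]
    mq = [(a + b) / 2.0 for a, b in zip(m1, m2)]
    v1 = [a - b for a, b in zip(s1, mq)]   # main-axis vector: pitch and yaw
    v2 = [a - b for a, b in zip(s2, mq)]   # lateral vector: roll
    yaw = math.atan2(v1[1], v1[0])
    pitch = math.atan2(-v1[2], math.hypot(v1[0], v1[1]))
    roll = math.atan2(v2[2], math.hypot(v2[0], v2[1]))
    return mq, (pitch, roll, yaw)
```

Averaging the two central points, as in the text, cancels part of the per-marker tracking noise before the angles are computed.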
CONTROLLER
At first the quadrotor should be regulated to hover in the air on a horizontal
plane with little shaking. That means the stable-state commands (u_p^s for
pitch, u_r^s for roll, u_y^s for yaw, and u_t^s for thrust) should be adjusted
first. Based on these parameters, the control commands can then be calculated.
Each controller has an output value (u_p^q for pitch, u_r^q for roll, u_y^q
for yaw, and u_t^q for thrust) between −1 and 1, which is then added to the
corresponding stable-state command. Since we consider only rotational movement
in this experiment, meaning the quadrotor does not always head with its main
axis toward the X direction, the pitch and roll commands should be adjusted in
the ψ direction with a rotation matrix:
(7)
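A sketch of the yaw compensation in (7), rotating the world-frame pitch/roll commands into the vehicle frame by the current yaw ψ; the sign convention here is an assumption:

```python
import math

def world_to_body_commands(u_pitch, u_roll, psi):
    """Rotate pitch/roll commands from the tracking (world) frame into the
    quadrotor frame by the current yaw psi, as in (7); the sign convention
    here is an assumption."""
    c, s = math.cos(psi), math.sin(psi)
    return c * u_pitch + s * u_roll, -s * u_pitch + c * u_roll
```

At ψ = 0 the commands pass through unchanged; at ψ = 90° a world-frame pitch command becomes a body-frame roll command, which is exactly why the compensation is needed when the heading drifts.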
We choose a proportional-integral (PI) controller for yaw rate control. The
pitch, roll, and thrust commands are controlled by proportional-integral-
derivative (PID) controllers:
(8)
In this experiment, the reference values θ0 and φ0 are set to zero. The
desired altitude z0 is 0.35 m near hover. The measured values θ and φ are
calculated from the received data of the visual multicamera tracking system
using (4) and (5), whereas ψ̇ is obtained from an empirical lookup table using
the response value calculated by the insect-inspired motion detectors. The yaw
velocity can also be obtained from (6) by time differentiation, which is used
as a reference (ground truth) for the heading stabilization. The closed-loop
results will be shown and further discussed in Section 5.
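The PI/PID structure described above can be sketched as a small discrete controller with the output clamped to [−1, 1], as in the text; all gains and the sample time are placeholders, not the paper's tuned values:

```python
class PID:
    """Discrete PID controller with output clamped to [-1, 1], matching the
    controller outputs described above; gains are placeholders, not the
    paper's tuned values. Setting kd = 0 gives the PI case used for yaw rate."""

    def __init__(self, kp, ki=0.0, kd=0.0, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, reference, measurement):
        err = reference - measurement
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-1.0, min(1.0, u))

yaw_rate_ctrl = PID(kp=0.01, ki=0.002)        # PI for yaw rate (kd = 0)
pitch_ctrl = PID(kp=0.5, ki=0.05, kd=0.1)     # PID for pitch
```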
The main loop in the whole software architecture (Figure 4) consists of two
simultaneous processes: yaw rate control using on-board camera visual
feedback, and X, Y, and Z position/pose control using the visual tracking
system. A graphical user interface (GUI) is developed based on the Qt
cross-platform application framework, which provides data visualization
(e.g., 3D trajectory of the quadrotor, battery voltage, and sensory data),
command input, and real-time online configuration of control parameters.
Experimental Setup
The whole experimental platform is shown in Figure 5. It mainly consists
of a quadrotor testbed, off-board workstation, and video camera tracking
system.
Velocity Estimation
To validate the designed templates of receptive fields for rotation detection,
the bioinspired image processing algorithm is implemented in C++ using OpenCV.
Under low velocities and within a certain altitude range, the test results
show that the response can be regarded as monotonic and nearly linear. In this
work, the yaw rate is under 100 deg/s and the altitude is set to 0.35 m. The
lookup table is shown in Figure 6(a), with a polynomial curve fit. From the
comparison in Figure 6(b), this approach provides a fairly accurate yaw rate
estimate (the mean error is 1.85 deg/s and the standard deviation of the error
is 10.22 deg/s). This lookup table can then be used in closed-loop control
under the same lighting conditions.
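The lookup-table step can be sketched with a polynomial fit, as in Figure 6(a); the calibration pairs below are invented stand-ins for the empirical data:

```python
import numpy as np

# Invented calibration pairs standing in for the empirical LUT of Figure 6(a):
# EMD response (arbitrary units) versus known yaw rate (deg/s).
yaw_rate = np.array([-100.0, -60.0, -30.0, 0.0, 30.0, 60.0, 100.0])
response = np.array([-0.42, -0.27, -0.14, 0.0, 0.14, 0.27, 0.42])

# Low-order polynomial fit of response -> yaw rate, used as the lookup.
coeffs = np.polyfit(response, yaw_rate, deg=3)

def lut_yaw_rate(emd_response):
    """Estimated yaw rate (deg/s) for a measured EMD response."""
    return float(np.polyval(coeffs, emd_response))
```

Because the EMD response also depends on contrast and illumination, such a table is only valid under the calibration conditions, which is why the text restricts its use to the same lighting.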
Figure 6: (a) Lookup table (LUT) used in this work. This empirical LUT shows
the relationship between yaw rate (at low speed) and visual response under the
experimental environment; (b) open-loop characteristics for rotational motion
near hover (without control).
Heading Stabilization
In this experiment, we compare the heading control performance using the EMD
with that using the IMU or the tracking system, respectively. First, the
stable commands (u_p^s, u_r^s, u_y^s, and u_t^s) should be determined
experimentally, so that with all payloads mounted (in this experiment, the
on-board breadboard for the tracking system in TCM8 mode, and a cable power
supply instead of a battery) and without any off-board controllers, the
quadrotor hovers nearly on a horizontal level and rotates as little as
possible. The X, Y, and Z positions are then controlled using feedback from
the tracking system, while at the first attempt the yaw position has no
controller except the on-board IMU. The IMU controller is already
integrated on the base board so that no other off-board controller is needed
for IMU controlling. A major disadvantage of using IMUs for navigation is
that they typically suffer from accumulated error (see Figure 7 blue curve).
In this case, in about 20 seconds the yaw position will deviate by 30 degrees
if only the IMU is used for heading stabilization. The second reference is the
yaw position under control using the tracking system, but without the EMD (the
red curve in Figure 7). In Figure 8, the 3D position when using the EMD for
heading stabilization is shown. Using the tracking system together with EMDs
(the green curve in Figure 7), a satisfying performance can be achieved.
Despite some deviation (maximal ±5 degrees for the tracking system and maximal
±7 degrees for the EMD), the quadrotor can hover very well with a straight
heading direction.
4 seconds and the maximal error is about ±10 degrees/s. The in-flight
performance of the 3D pose is shown in Figures 9(b) and 9(c). Since the IMU
provides only angular velocity values, the angle positions are obtained by
integration on the quadrotor's base control board and sent to the central
workstation.
Figure 9: Attitude measurements of the flying robot near hover with yaw
rotation. (a) Yaw rate control: after takeoff, the quadrotor is switched to a
desired yaw rate of 30 deg/s; (b) and (c) pitch and roll angles of the
quadrotor.
For velocity estimation, although the EMD is not a pure velocity detector,
closed-loop control of the yaw rate is achieved under the restrictions of the
structured environment and the limitation to low velocities. Including the
image transmission delay through the IEEE1394/USB cable, image processing on
the CPU costs 10–20 ms (the program spends nearly 10 ms waiting for images
from the camera, while the actual computing time is only a few milliseconds).
The EMD computation is thus extremely fast, which demonstrates the efficiency
of using biological models.
ACKNOWLEDGMENTS
This work is supported in part by the DFG excellence initiative research
cluster Cognition for Technical Systems-CoTeSys, see also www.cotesys.
org, the Bernstein Center for Computational Neuroscience Munich, see also
http://www.bccn-munich.de/, and the Institute for Advanced Study (IAS),
Technische Universität München, see also http://www.tum-ias.de/.
REFERENCES
1. M. Egelhaaf and A. Borst, “Motion computation and visual orientation
in flies,” Comparative Biochemistry and Physiology A, vol. 104, no. 4,
pp. 659–673, 1993.
2. M. V. Srinivasan and S. W. Zhang, “Visual navigation in flying insects,”
International Review of Neurobiology, vol. 44, pp. 67–92, 2000.
3. H. G. Krapp and R. Hengstenberg, “Estimation of self-motion by optic
flow processing in single visual interneurons,” Nature, vol. 384, no.
6608, pp. 463–466, 1996.
4. A. Beyeler, J.-C. Zufferey, and D. Floreano, “3D vision-based
navigation for indoor microflyers,” in Proceedings of the IEEE
International Conference on Robotics and Automation (ICRA ‘07), pp.
1336–1341, April 2007.
5. J. Conroy, G. Gremillion, B. Ranganathan, and J. S. Humbert,
“Implementation of wide-field integration of optic flow forautonomous
quadrotor navigation,” Autonomous Robots, vol. 27, no. 3, pp. 199–
200, 2009.
6. F. Ruffier and N. Franceschini, “Optic flow regulation: the key to
aircraft automatic guidance,” Robotics and Autonomous Systems, vol.
50, no. 4, pp. 177–194, 2005.
7. N. Franceschini, F. Ruffier, and J. Serres, “A bio-inspired flying robot
sheds light on insect piloting abilities,” Current Biology, vol. 17, no. 4,
pp. 329–335, 2007.
8. F. Expert, S. Viollet, and F. Ruffier, “Outdoor field performances of
insect-based visual motion sensors,” Journal of Field Robotics, vol.
28, no. 4, pp. 529–541, 2011.
9. E. Buchner, “Behavioural analysis of spatial vision in insects,” in
Photoreception and Vision in Invertebrates, M. A. Ali, Ed., pp. 561–
621, Plenum Press, New York, NY, USA, 1984.
10. M. Egelhaaf and W. Reichardt, “Dynamic response properties of
movement detectors: theoretical analysis and electrophysiological
investigation in the visual system of the fly,” Biological Cybernetics,
vol. 56, no. 2-3, pp. 69–87, 1987.
11. M. V. Srinivasan, S. W. Zhang, M. Lehrer, and T. S. Collett, “Honeybee
navigation en route to the goal: visual flight control and odometry,”
Journal of Experimental Biology, vol. 199, no. 1, pp. 237–244, 1996.
12. R. O. Dror, D. C. O’Carroll, and S. B. Laughlin, “Accuracy of velocity
14
Development of Rescue Material
Transport UAV (Unmanned Aerial
Vehicle)
ABSTRACT
Recently, the market for drones has been growing rapidly. Commercial UAVs
(Unmanned Aerial Vehicles, or drones) are increasingly being used for various
purposes, such as geographic surveys, rescue missions, inspection of
industrial facilities, traffic monitoring, and delivery of cargo and goods. In
particular, drones have great potential for life-saving operations. A missing
person, for example, can be searched for rapidly and effectively using a
drone, in comparison with conventional search operations. However, no rescue
UAV is commercially available to date. The motivation for this study is to
design an unmanned aerial vehicle capable of vertical takeoff and landing
while containing a high-power propellant apparatus in order to lift a heavy
cargo of rescue materials (such as water, food, and medicine). We used EDF
(Electric Ducted Fan) technology as opposed to the conventional motor and
propeller combination. The EDF can produce about three times more power than
the motor-prop combination. This makes it suitable for the transportation of
rescue goods and widely usable in rescue operations in natural environments.
Based on these results, the UAV for rescue material transport, capable of
heavy-load vertical takeoff and landing, is developed, including the airframe,
flight control computer, and GCS (ground control station).
Keywords: High-Power Propulsion, Vertical Take-Off and
Landing, Rescue Drone, GCS (Ground Control Station), FCC (Flight
Control Computer)
INTRODUCTION
In recent years, the use of drones, or UAVs (unmanned aerial vehicles),
has become increasingly popular around the world. Drones are used in
aerial photography for both personal hobbies and commercial purposes. In
industrial sectors, they are deployed for facility inspection, power line
inspection, and fire and flood monitoring; drones are now used in almost
every imaginable field [1] [2]. Rescue operations are no exception.
Because drones fly fast and can cover a wide area in little time, law
enforcement agencies now use them to search for missing persons. Current
rescue drones are typically equipped with thermal imaging cameras, so a
missing person can be detected even when not visually distinguishable
from the surrounding environment. However, no rescue drone yet exists
that can carry emergency materials, such as water, food, blankets, and
medicine, while conducting rescue operations [3] [4] [5]. To do this, a
drone needs a high-power propulsion technology different from the
conventional motor-and-propeller combination. Additionally, a rescue
drone should be able to take off and land vertically while carrying the
rescue materials. The motivation of this study is therefore to develop a
rescue drone capable of conducting a search and rescue operation with
item from the box in the lower part of the drone. Here, the rotation
mechanisms of an EDF and of an open propeller differ in how well they
protect fingers. An EDF drone poses no risk of finger injury unless
fingers are deliberately inserted into the turbine housing, whereas with
open propeller motors the propeller is exposed on the outside of the
drone, so fingers can be cut at any time. A rescue drone built with open
propellers could thus cause secondary injuries. For this reason we chose
a high-thrust EDF instead of the conventional motor-and-propeller
combination. An electric ducted-fan aircraft itself consists of an
engine, propeller, fixed wing, and flaps. Theoretically, ducting the flow
from the rotating body increases thrust efficiency by 41%; however, the
added weight of the ducts compared with a bare propeller must also be
considered. The aim of this study is to lift a total weight of about
10 kg using four EDFs whose combined maximum thrust is 18 kg, so that
flight is maintained at about 50% of maximum thrust, as shown in Figure 1.
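The lift budget above can be checked with a few lines (numbers taken from the text; this is an illustrative calculation, not the authors' code):

```python
# Thrust budget for the four-EDF lifter: thrust is expressed as equivalent
# lifted mass (kg), as in the text (4.5 kg maximum per EDF).
EDF_MAX_THRUST_KG = 4.5
NUM_EDFS = 4
TOTAL_WEIGHT_KG = 10.0   # target all-up weight including rescue cargo

max_thrust_kg = EDF_MAX_THRUST_KG * NUM_EDFS        # 18.0 kg in total
hover_fraction = TOTAL_WEIGHT_KG / max_thrust_kg    # fraction of max thrust needed
```

At these numbers the craft hovers at about 56% of maximum thrust, consistent with the roughly 50% figure given in the text.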
Battery Selection
The battery is a major factor governing a drone's flight time and power
output. The larger the battery capacity, the longer the flight time; and
the more cells connected in series, the higher the output voltage. The
selected EDF requires a higher voltage than a propeller-and-motor
combination, so we selected a battery with 6 - 8 cells. We used a
lithium-polymer battery for its high stability, because the liquid
electrolyte of a conventional lithium-ion battery is an inherent
stability problem. We also considered the trade-off between battery
capacity and weight: the biggest dilemma of electric vehicles is that
capacity and weight grow in proportion. When a heavy battery is used in a
Figure 1. Selected EDF: a highly safe, theoretically efficient ducted fan.
Maximum thrust per EDF 4.5 kg × 4 = 18 kg expected; excluding frame and
self-weight, estimated net lift at least 8 kg; EDF weight 900 g × 4 = 3.6
kg (left); EDF specification (right).
Frame Design
Four EDFs are designed in four directions to balance the drones. Fixtures for
installing ESC (electric speed controller), battery holder, GPS transceiver
and other fixtures are installed on the center top plate. In addition, a fixture
for installing a structural goods transport box is installed in the lower part,
and two supports for stably loading and unloading the drones are installed.
The material of the frame was made of carbon material to minimize the
weight of the drone, and some aluminum parts were also used, as shown
in Figure 3. A cylindrical EDF fixture using a 3D printer was designed as an
EDF dedicated frame for fixing each EDF. In the end, thanks to the frame
made of carbon material, it was able to create a frame that can withstand a
high-output EDF, while lightening the weight as much as possible.
Figure 2. Selected batteries: Selecting a battery that can supply the appropriate
voltage for the EDF output (400 g × 2 = 800 g).
Figure 6. Developed drone with structural box and relief items and its
specification.
T = thrust [N]
D = propeller diameter [m]
ν = velocity of air at the propeller [m/s]
Δν = velocity of air accelerated by propeller [m/s]
ρ = density of air [1.225 kg/m3]
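In the variables listed above, the standard actuator-disk (momentum-theory) thrust expression, assumed here since the paper's exact formula is not reproduced, is:

```latex
T = \rho \, \frac{\pi D^{2}}{4} \left( \nu + \frac{\Delta\nu}{2} \right) \Delta\nu
```

Here the disk area is \(\pi D^{2}/4\), the mass flow through the disk is \(\rho A (\nu + \Delta\nu/2)\), and the momentum gained per unit mass is \(\Delta\nu\), which is consistent with the text's observation that diameter dominates thrust.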
The thrust ratio was calculated from the thrust formula after excluding
the air-velocity (ν) factor. The calculation showed that a propeller
could deliver four times as much thrust, because propeller diameter has
the greatest influence on thrust. For an EDF to produce the same thrust
as a propeller four times its diameter, it needs 16 times the revolutions
per second, which drastically reduces battery life. It also requires a
high voltage of 30 V, compared with 22.2 V for a propeller; the largest
commercially available battery was a 6s pack (3.7 V × 6 = 22.2 V).
Finally, the EDFs themselves weighed 900 g each, 3.6 kg in total, about
9 times the weight of the propellers, which we consider the biggest
weight-gain factor in an airframe design where every gram counts. For an
exact thrust comparison between the propeller and the EDF, vortex
elimination and the pressure rise through the duct would also need to be
considered. To actually adopt the EDF, whose main advantage is safety,
three things are needed: 1) lighter weight, 2) better efficiency, and
3) suitable propeller size [13] [14] [15] [16]. Figures 8-12 show the
flight characteristics data, all of which indicate very stable flight.
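The 16× rotational-speed figure follows from the classic static-thrust scaling T ≈ C_T ρ n² D⁴ (a textbook relation, not taken from the paper; the diameters below are assumed for illustration):

```python
# Static-thrust scaling sketch: T = C_T * rho * n^2 * D^4 (standard propeller
# relation). A fan 1/4 the diameter of a propeller must spin 16x faster
# (sqrt(4^4) = 16) to produce the same thrust at the same thrust coefficient.
RHO = 1.225  # air density [kg/m^3]

def static_thrust(n_rps: float, d_m: float, c_t: float = 0.1) -> float:
    """Thrust [N] for rotational speed n [rev/s] and diameter D [m]."""
    return c_t * RHO * n_rps**2 * d_m**4

d_fan, d_prop = 0.09, 0.36           # assumed diameters [m]; ratio of 4
n = 100.0                            # rotational speed [rev/s]

thrust_ratio = static_thrust(n, d_prop) / static_thrust(n, d_fan)  # 4^4 = 256
n_fan_match = n * (d_prop / d_fan) ** 2  # speed the fan needs for equal thrust
```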
CONCLUSION
UAVs capable of high-power vertical takeoff and landing can compensate
for the lack of power of conventional UAVs and maximize their utilization
across industry. Through this development we have localized the core
technology for UAVs that can stably control high-power vertical flight.
As demand for UAVs grows, this will lay the foundation for developing
various types of UAVs (courier, transportation, delivery, lifeguard,
etc.) that meet customer demand, with expected effects in meeting
domestic demand, import substitution, and export. In future work we will
address the problems analyzed in this study and design a new type of
drone, ultimately developing a payload of more than 25 kg so that it can
be utilized for life-saving and transport of rescue materials.
ACKNOWLEDGEMENTS
This work was supported by the Ajou University research fund.
CONFLICTS OF INTEREST
The authors declare no conflicts of interest.
REFERENCES
1. Yu, S. and Kwon, Y. (2017) Development of VTOL Drone for Stable
Transit Flight. Journal of Computer and Communications, 5, 36-43.
https://doi.org/10.4236/jcc.2017.57004
2. Yu, S. and Kwon, Y. (2017) Development of Multi-Purpose, Variable,
Light Aircraft Simulator. Journal of Computer and Communications, 5,
44-52. https://doi.org/10.4236/jcc.2017.57005
3. Jo, D. and Kwon, Y. (2017) Analysis of VTOL UAV Propellant
Technology. Journal of Computer and Communications, 5, 76-82.
https://doi.org/10.4236/jcc.2017.57008
4. De la Torre, G.G., Ramallo, M.A. and Cervantes, E.G. (2016) Workload
Perception in Drone Flight Training Simulators. Computers in Human
Behavior, 64, 449-454.
5. Liu, Z. and Li, H. (2006) Research on Visual Objective Test Method
of High-Level Flight Simulator. Xitong Fangzhen Xuebao (Journal of
System Simulation).
6. Luan, L.-N. (2013) Augmenting Low-Fidelity Flight Simulation
Training Devices via Amplified Head Rotations. Loughborough
University.
7. Lee, S. and Lim, K. (2008) PCB Design Guide Book. Sehwa Publishing
Co., South Korea.
8. Kang, M. and Shin, K. (2011) Electronic Circuit. Hanbit Media, South
Korea.
9. Kim, D., Kim, J. and Yoon, S. (2016) Development and Validation
of Manned and Unmanned Aircraft Simulation Engine for Integrated
Operation in NAS. Journal of the Korean Society for Aeronautical &
Space Sciences, 44, 423-430.
10. Sponholz, C.-B. (2015) Conception and Development of a Universal,
Cost-Efficient and Unmanned Rescue Aerial Vehicle. Technische
Hochschule Wildau, Technical University of Applied Sciences.
11. Guerrero, M.E., Mercado, D.A. and Lozano, R. (2015) IDA-PBC
Methodology for a Quadrotor UAV Transporting a Cable-Suspended
Payload. 2015 International Conference on Unmanned Aircraft
Systems (ICUAS), Denver, CO, 9-12 June 2015, 470-476. https://doi.
org/10.1109/ICUAS.2015.7152325
12. Gomez, C. and Purdie, H. (2016) UAV-Based Photogrammetry and
Geocomputing for Hazards and Disaster Risk Monitoring—A Review.
CHAPTER
15
Railway Transport Infrastructure
Monitoring by UAVs and Satellites
ABSTRACT
Improving rail transport security requires the development and
implementation of new monitoring and control facilities under conditions
of increasing train speed and traffic intensity and a high level of
terrorist threat. Use of Earth remote sensing (ERS), which permits
obtaining information from large
Citation: Ivashov, S., Tataraidze, A., Razevig, V. and Smirnova, E. (2019), “Railway
Transport Infrastructure Monitoring by UAVs and Satellites”. Journal of Transporta-
tion Technologies, 9, 342-353. doi: 10.4236/jtts.2019.93022.
Copyright: © 2019 by authors and Scientific Research Publishing Inc. This work is li-
censed under the Creative Commons Attribution International License (CC BY). http://
creativecommons.org/licenses/by/4.0
INTRODUCTION
Railway transportation safety demands constant attention from the staff
and management of the company that operates the facility. This applies to
monitoring the condition of locomotive drivers as well as to all services
that keep the railway running smoothly: in short, the human factor. These
problems can be addressed by a set of organizational and technical
measures that improve employee discipline and working conditions.

Unfortunately, there are also technical factors, such as violations of
railway line integrity (rail rupture, destruction of a railway switch,
etc.), which require permanent monitoring. Another problem is countering
terrorist acts involving improvised explosive devices, which can lead to
even more dire consequences. Remote sensing means such as piloted
helicopters and drones could render substantial assistance in determining
the scale of a railway catastrophe and the ways of its elimination,
including assistance to possible victims [1] [2] [3].
One more task is connected with improving the safety of rail
transportation: incidents caused by natural phenomena, including mudflows
and avalanches, heavy snowfalls in the plains, and others. Remote
monitoring of such phenomena could effectively help in predicting them
and managing their consequences. To monitor the state of the snow cover
on mountain slopes in avalanche-prone areas, a UAV could be used to
reconstruct three-dimensional images of the terrain in summer (without
snow cover) and in winter after heavy snowfalls. By comparing such
images, it becomes possible to determine the height of the snow cover,
and UAVs equipped with a passive radar (radiometer) operating at
centimeter or decimeter wavelengths would provide an opportunity to
measure snowpack humidity [4]. These data could
Figure 1. Helicopter-type UAV: (a) The UAV during the flight with a camera
mounted in the gimbal; (b) The operator with the remote control of the UAV on
the background.
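The summer/winter terrain-differencing idea described above can be sketched as follows (toy elevation grids, not survey data):

```python
# Estimate snow depth by differencing two gridded terrain models of the
# same slope: a snow-free summer survey and a post-snowfall winter survey.
# Values are elevations in meters on matching grid cells (toy numbers).
summer = [[1200.0, 1201.5],
          [1203.0, 1204.2]]
winter = [[1201.1, 1202.9],
          [1203.8, 1205.6]]

# Per-cell depth; negative differences are treated as noise and clamped to 0.
depth = [[max(w - s, 0.0) for w, s in zip(wr, sr)]
         for wr, sr in zip(winter, summer)]

cells = [d for row in depth for d in row]
mean_depth = sum(cells) / len(cells)   # average snow depth over the patch [m]
```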
parameters such that a satellite in this orbit passes over any point of
the earth's surface at about the same local solar time. Consequently, the
elevation of the sun over the earth's surface is approximately the same
on every pass of the satellite over the same place. Constant lighting
conditions are very convenient for satellites that record optical images
of the earth's surface, including remote sensing and meteorological
satellites. The parameters of sun-synchronous orbits fall within the
range: height above the earth's surface 600 - 800 km, orbital period
96 - 100 min, and orbit inclination about 98˚.
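The quoted 96 - 100 min period is consistent with Kepler's third law for a circular orbit at these heights; a quick check (standard constants, with an assumed mid-range altitude of 700 km):

```python
import math

# Orbital period of a circular orbit: T = 2*pi*sqrt(a^3 / mu),
# where a is the orbit radius and mu is Earth's gravitational parameter.
MU_EARTH = 3.986e14      # [m^3/s^2]
R_EARTH = 6_371_000.0    # mean Earth radius [m]

def period_minutes(altitude_m: float) -> float:
    a = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

# A 700 km sun-synchronous altitude gives a period inside the quoted range.
t = period_minutes(700_000.0)
```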
The main advantage of optical remote sensing satellites is the ability to
obtain high-quality multispectral images, whose resolution can reach
0.3 m or better. Nevertheless, visible-spectrum satellites cannot operate
at night or in cloudy weather. While the first shortcoming can
potentially be overcome, for example by using infrared optics, the
cloudiness, which in mid-latitudes in winter can hide the earth for weeks
or even months, makes even infrared satellites unusable. The all-weather
satellites are the radar ones, even though they have the poorest
resolution. At the same time, radar provides additional opportunities,
for example obtaining images of the recorded signal in different
polarizations, or finding underground objects [7].
timely and relevant information about the event or object of interest. To
get up-to-date information, one should contact a specialized organization
and place an order for recording satellite information. This is a
chargeable service, but it allows the customer to get information
according to his own needs: time, place and conditions of photography,
image resolution, and other parameters.

The information may be provided as up-to-date, i.e. recently received,
but can also be extracted from databases accumulated in previous years.
The latter is important in environmental studies, where it is significant
to compare changes in the landscape over a long period.
Figure 2. Image of the experimental railway ring from the Google Maps
free database.
Figure 3. Large-scale satellite image of the water tower area on the
experimental railway ring according to Google Maps.
The same area near the water tower was also shot from on board the UAV,
equipped with a Canon EOS 5D Mark II camera with an EF-S 17-55 f/2.8 IS
USM lens. The camera was mounted on a gimbal controlled by an operator,
Figure 1. The result of photographing from a flight altitude of about
70 m is shown in Figure 4. The two images differ in the location of the
rail wagons on the tracks near the water tower, which is caused by the
different times of shooting: the satellite image was recorded in summer,
and the drone flights were performed in autumn. They also differ
in the size and direction of the shadows of local objects, as the images
were taken at different positions of the sun on the celestial sphere.
This fact stresses the importance of the sun-synchronous orbits mentioned
earlier, which are commonly used for remote sensing satellites. Another
advantage of sun-synchronous orbits is the ability to determine whether
an object is being built, and at what speed, judging by the length of its
shadow in images obtained from different orbital passes of the satellite.
Usually this method is used for interpreting images recorded by military
reconnaissance satellites, but it can also be used for cadastral
surveying, when new and illegally erected buildings must be picked out of
a large array of images. In that case, to detect newly erected buildings
and changes in their height, it is enough to subtract the images.
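Shadow-based height estimation as described above reduces to simple trigonometry (hypothetical measurements; a sun-synchronous orbit makes the solar elevation repeatable between passes):

```python
import math

# Height of an object from its shadow length and the solar elevation angle:
# h = L * tan(elevation). With repeatable lighting (sun-synchronous orbit),
# a change in shadow length between passes indicates a change in height.
def height_from_shadow(shadow_len_m: float, sun_elev_deg: float) -> float:
    return shadow_len_m * math.tan(math.radians(sun_elev_deg))

h1 = height_from_shadow(12.0, 40.0)   # first pass
h2 = height_from_shadow(18.0, 40.0)   # later pass: longer shadow, taller object
growth = h2 - h1                      # height added between the passes [m]
```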
Comparing the resolution of the two photos above, it is obvious that the
UAV image has much better resolution than the satellite one. This is
clear, since remote sensing satellites orbit at about 600 km, while
drones fly at a few hundred meters or several kilometers. UAVs have
another advantage over satellites in that they are less dependent on
weather conditions: in the presence of cloudiness it is impossible to
shoot in the visible spectrum from satellites, while a UAV can hover
below the lower cloud boundary and photograph the terrain.
Figure 4. Photograph of the water tower area on the ERR recorded by the UAV.
CONCLUSION
The analysis of the possibilities of using remote sensing satellites and
UAVs equipped with various sensors showed that they could be an effective
monitoring tool for ensuring traffic safety on the railways. Their
information can also be used for other tasks, for example in the design
and construction of railway infrastructure. The paper shows that the
effectiveness of Earth remote sensing can be increased by combining
information recorded by satellites and UAVs; with modern image processing
means, it becomes possible to build synthetic three-dimensional images of
railway infrastructure.
ACKNOWLEDGEMENTS
The study was performed with the financial support of the Russian
Foundation for Basic Research, Project No. 17-20-02086. The authors
express their gratitude to the staff of the Experimental Railway Ring for
assistance in conducting the UAV flights to obtain the necessary
information.
CONFLICTS OF INTEREST
The authors declare no conflicts of interest.
REFERENCES
1. Flammini, F., Pragliola, C. and Smarra, G. (2016) Railway Infrastructure
Monitoring by Drones. International Conference on Electrical
Systems for Aircraft, Railway, Ship Propulsion and Road Vehicles &
International Transportation Electrification Conference, Toulouse, 2-4
November. https://doi.org/10.1109/ESARS-ITEC.2016.7841398
2. Eyre-Walker, R.E.A. and Earp, G.K. (2008) Application of Aerial
Photography to Obtain Ideal Data for Condition Based Risk
Management of Rail Networks. The 4th IET International Conference
on Railway Condition Monitoring, Derby, 18-20 June 2008. https://
doi.org/10.1049/ic:20080353
3. Francesco, F., Naddei, R., Pragliola, C. and Smarra, G. (2016) Towards
Automated Drone Surveillance in Railways: State-of-the-Art and
Future Directions. International Conference on Advanced Concepts
for Intelligent Vision Systems, Lecce, 24-27 October 2016, 336-348.
https://doi.org/10.1007/978-3-319-48680-2_30
4. Schwank, M. and Naderpour, R. (2018) Snow Density and Ground
Permittivity Retrieved from L-Band Radiometry: Melting Effects.
Remote Sensing, 10, 354. https://doi.org/10.3390/rs10020354
5. Skofronick-Jackson, G., Kim, M., Weinman, J.A. and Chang, D.-
E. (2004) A Physical Model to Determine Snowfall Over Land by
Microwave Radiometry. IEEE Transactions on Geoscience and Remote
Sensing, 42, 1047-1058. https://doi.org/10.1109/TGRS.2004.825585
6. Besada, J.A., Bergesio, L., Campaña, I., Vaquero-Melchor, D., López-
Araquistain, J., Bernardos, A.M. and Casar, J.R. (2018) Drone Mission
Definition and Implementation for Automated Infrastructure Inspection
Using Airborne Sensors. Sensors, 18, 1170. https://doi.org/10.3390/
s18041170
7. Koyama, C.N. and Sato, M. (2015) Detection and Classification of
Subsurface Objects by Polarimetric Radar Imaging. IEEE Radar
Conference, Johannesburg, 27-30 October 2015, 440-445. https://doi.
org/10.1109/RadarConf.2015.7411924
8. Bulgakov, A., Evgenov, A. and Weller, C. (2015) Automation of 3D
Building Model Generation Using Quadrotor. Procedia Engineering,
123, 101-109. https://doi.org/10.1016/j.proeng.2015.10.065
9. Hartley, R. and Zisserman, A. (2004) Multiple View Geometry
in Computer Vision. 2nd Edition, Cambridge University Press,
Cambridge. https://doi.org/10.1017/CBO9780511811685
10. McGlone, J.C. (2013) Manual of Photogrammetry. Sixth Edition,
American Society for Photogrammetry and Remote Sensing, Bethesda,
Maryland, USA, 1318 p.
11. Lowe, D.G. (1999) Object Recognition from Local Scale-Invariant
Features. Proceedings of the International Conference on Computer
Vision, Kerkyra, 20-27 September 1999, Vol. 2, 1150-1157. https://doi.
org/10.1109/ICCV.1999.790410
12. Fischler, M. and Bolles, R. (1981) Random Sample Consensus: A
Paradigm for Model Fitting with Applications to Image Analysis and
Automated Cartography. Communications of the ACM, 24, 381-395.
https://doi.org/10.1145/358669.358692
13. Maenpaa, T. and Pietikainen, M. (2005) Texture Analysis with Local
Binary Patterns. In: Handbook of Pattern Recognition and Computer
Vision, 3rd Edition, World Scientific, Singapore, 197-216. https://doi.
org/10.1142/9789812775320_0011
14. http://www.rslab.ru/pubdir/1.wmv
CHAPTER
16
Assessment of UAV Based Vegetation
Indices for Nitrogen Concentration
Estimation in Spring Wheat
Research and Extension Center, University of Idaho, Idaho Falls, ID, USA;
BASF-Chemical Co., Caldwell, ID, USA; Take Flight UAS, LLC, Boise, ID,
USA; Seminis, Payette, ID, USA; Vallivue High School, Caldwell, ID, USA.
ABSTRACT
Unmanned Aerial Vehicles (UAVs) have become increasingly popular in
recent years for agricultural research. High spatial and temporal resolution
images obtained with UAVs are ideal for many applications in agriculture.
The objective of this study was to evaluate the performance of vegetation
indices (VIs) derived from UAV images for quantification of plant nitrogen
(N) concentration of spring wheat, a major cereal crop worldwide. This study
was conducted at three locations in Idaho, United States. A quadcopter UAV
equipped with a red edge multispectral sensor was used to collect images
during the 2016 growing season.
Flight missions were successfully carried out at Feekes 5 and Feekes
10 growth stages of spring wheat. Plant samples were collected on the
same days as UAV image data acquisition and were transferred to lab for
N concentration analysis. Different VIs including Normalized Difference
Vegetative Index (NDVI), Red Edge Normalized Difference Vegetation
Index (NDVIred edge), Enhanced Vegetation Index 2 (EVI2), Red Edge Simple
Ratio (SRred edge), Green Chlorophyll Index (CIgreen), Red Edge Chlorophyll
Index (CIred edge), Medium Resolution Imaging Spectrometer (MERIS)
Terrestrial Chlorophyll Index (MTCI) and Red Edge Triangular Vegetation
Index (core only) (RTVIcore) were calculated for each flight event.
At Feekes 5 growth stage, red edge and green based VIs showed higher
correlation with plant N concentration compared to the red based VIs.
At Feekes 10 growth stage, all calculated VIs showed high correlation
with plant N concentration. Empirical relationships between VIs and plant
N concentration were cross validated using test data sets for each growth
stage. At Feekes 5, the plant N concentration estimated based on NDVIred
edge showed one to one correlation with measured N concentration. At Feekes
10, the estimated and measured N concentration were highly correlated for
all empirical models, but the model based on CIgreen was the only model
that had a one to one correlation between estimated and measured plant N
concentration.
The observed high correlations between UAV-derived VIs and plant N
concentration suggest the significance of VIs derived from UAVs for
within-season N concentration monitoring of agricultural crops such as
spring wheat.
Keywords: Unmanned Aerial Vehicles and Systems (UAV), Vegetation
Indices (VIs), Plant Nitrogen Concentration
INTRODUCTION
Nitrogen (N) is one of the essential factors for crop production in terms of
plant growth and development and crop quality [1] [2]. Adequate supply
of N is fundamental for optimizing wheat (Triticum aestivum L.) yield and
grain quality [3] [4]. Nitrogen regulates plant growth processes and plays a
vital role in chlorophyll (CL) production―the basis for the photosynthesis
process [5]. Insufficient N supply can negatively affect photosynthesis
process and result in crop yield and quality penalties [3]. On the other
hand, excessive N application to agricultural crops has been associated with
nitrate leaching, soil denitrification, ammonia volatilization, and nitrous
oxide contamination of aquifers and aggravating the climate change [6]
[7]. Dynamic and efficient fertilization (appropriate time and rate) is very
important for optimizing crop yield and maintaining environmental quality
[8]. Accurate estimation of crop N concentration is vital for developing
effective fertilizer-N recommendations.
There is a strong correlation between N concentration and CL content
at foliar and canopy scale because most of leaf N is localized within the
CL molecules [9] [10] [11] [12]. Chlorophyll content is the main element
that governs crop reflectance in the visible (VIS) and near infrared
(NIR) regions of the spectrum [8]. Thus, vegetation reflectance in these
parts of the spectrum is closely associated with N concentration. Remote
sensing enables acquisition of crop reflectance and provides diagnostic
information on crop N concentration quickly and in a spatial context,
compared to traditional
destructive sampling techniques [13]. During the last few decades, scientists
have proposed several vegetation indices (VIs) calculated from reflectance
data to assess CL content and N concentration [8] [13] [14] [15]. These
VIs are mostly a combination of NIR and VIS spectral bands, representing
radiation scattering by canopy and radiation absorption by CL respectively
[16]. Although these VIs accurately estimate CL and N concentration early
in the growing season at lower CL values, they become less sensitive as
the red spectral band is strongly absorbed by CL. Gitelson and Merzlyak
[17] showed that red edge region is sensitive to a wide range of CL content
values, and the use of this part of spectrum in VIs calculation can reduce
the saturation effect due to lower absorption of the red edge region by CL.
Several VIs based on this spectral region have been developed and used
successfully to estimate CL and N concentration.
Gitelson and Merzlyak [17] replaced the red spectral band (675 nm)
with red edge spectral band (705 nm) in Normalized Difference Vegetation
Index (NDVI) and developed a new index called Red Edge Normalized
Difference Vegetation Index (NDVIred edge). They showed that traditional
NDVI had a tendency to become saturated at higher CL levels in senescing
maple and chestnut leaves, while NDVIred edge continued to show a strong
linear correlation with CL content, with no saturation issue. In a similar
study, Gitelson et al. [18] showed that reciprocal of red edge spectral band
is closely related to the CL content in leaves of all species. They proposed
Red Edge Chlorophyll Index (CIred edge) and showed that CIred edge is highly
correlated with CL content (coefficient of determination R2 > 0.94). In another
study, Dash and Curran [19] developed MERIS Terrestrial Chlorophyll
Index (MTCI). They used data in three red/NIR wavebands centered at
681.25, 708.75 and 753.75 nm (bands 8, 9 and 10 in the MERIS standard
band setting) to develop MTCI. They determined the relationship between
MTCI and CL content using actual CL content for sites in the New Forest,
United Kingdom (UK) and for small plots in a greenhouse experiment.
Their results showed that MTCI is strongly and positively correlated to
actual CL content. Li et al. [5] evaluated red edge based spectral indices
for estimating plant N concentration and uptake of maize (Zea mays L.).
They calculated the canopy chlorophyll content index (CCCI), NDVIred edge, CIred edge and
MTCI from hyperspectral narrow bands, simulated Crop Circle ACS-470
active crop canopy sensor bands and simulated WorldView-2 satellite
broad bands. Their results showed that there is a positive strong correlation
between red edge based VIs and N concentration in maize. Their results
also indicated that CCCI performed the best across different bandwidths for
estimating maize plant N concentration at the V6 and V7 and V10 - V12
stages. In another study, Wang et al. [20] compared broad-band and red edge
based spectral VIs to estimate N concentration in corn, wheat, rice (Oryza
sativa L.) and soybeans (Glycine max L.). They calculated various VIs from
images acquired by the Compact Airborne Spectrographic Imager (CASI)
sensor. Their result showed that NDVI performed the best compared to other
VIs, and red edge based VIs did not show potential for accurate estimation
of leaf N concentration data due to spectral resolution.
Unmanned aerial vehicles (UAVs) are remote sensing systems that can
capture crop reflectance in the VIS-NIR region of spectrum and assess CL
and N concentration. UAVs, which have recently gained traction in a
number of studies, acquire ultra-high spatial resolution images by flying at
low altitudes [21] [22]. Operational advantages such as low-cost systems,
high flexibility in terms of flight planning and acquisition scheduling, and
imaging below cloud cover make UAVs an appropriate tool to study crop
Study Area
The experimental studies were conducted at five different locations in Idaho
during 2016 growing season (Table 1 and Figure 1). The soil type, mean
annual temperature, and mean annual precipitation for each study site are
presented in Table 1.
Table 1. Latitude, longitude, soil type, mean annual precipitation, and
mean annual temperature for five locations in Idaho.
168, 252, and 336 kg N ha−1). Each treatment was replicated four times in
a randomized complete block design, resulting in a total of 20 plots at
each location. Spring planting conditions were good for crop
establishment. Soil moisture from March through April was above average,
which resulted in excellent early-season growth and development.
Early-season precipitation provided excellent growing conditions until
irrigation became available in April, when all sites were irrigated every
7 to 10 days using sprinkler irrigation systems. Timely planting dates
resulted in excellent tillering and a long growing period.
[28]. Then, the mosaicked images were radiometrically calibrated using the
Red Edge Camera Radiometric Calibration Model in Atlas software. Atlas
software uses the calibration curve associated with a Calibrated Reflectance
Panel (CRP) to perform calibration model and convert the raw pixel values
of an image into absolute spectral radiance values. The CRP was placed
adjacent to the study area during each flight mission, and an image of the
CRP was captured immediately before and immediately after each flight.
The output of the radiometric calibration model is a 5-layer, 16-bit ortho-
rectified GeoTIFF image.
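The CRP-based conversion can be sketched as a single linear gain per band. This is a simplified stand-in for the Atlas Radiometric Calibration Model (which uses a full calibration curve), and all names and numbers below are hypothetical:

```python
import numpy as np

def crp_calibration_gain(panel_pixels, panel_reflectance):
    """Linear gain mapping raw digital numbers (DNs) to reflectance,
    derived from an image region of the CRP whose reflectance is known."""
    return panel_reflectance / np.mean(panel_pixels)

def calibrate_band(raw_band, gain):
    """Convert the raw pixel values of one band to reflectance."""
    return raw_band.astype(np.float64) * gain

# Hypothetical numbers: a 50% reflectance panel imaged at a mean DN of 32000
panel = np.full((10, 10), 32000, dtype=np.uint16)
gain = crp_calibration_gain(panel, 0.50)
band = calibrate_band(np.array([[16000, 32000]], dtype=np.uint16), gain)
```

In practice one gain (or curve) is derived per band from the CRP images taken before and after each flight.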
Vegetation Indices
The UAV reflectance data were used for calculating eight VIs, many
of which have been proposed as surrogates for canopy N concentration
estimation. The VIs tested include the Normalized Difference Vegetation
Index, NDVI [30] , the Red Edge Normalized Difference Vegetation Index,
NDVIred edge [17] , the Enhanced Vegetation Index 2, EVI2 [31] , the Red Edge
Simple Ratio, SRred edge [32] , the Green and Red Edge Chlorophyll Indices,
CIgreen and CIred edge, respectively [18] , the MERIS Terrestrial Chlorophyll
Index, MTCI [19] , and the Core Red Edge Triangular Vegetation Index
(RTVIcore) [33] (Table 2). For each study plot, a region of interest (ROI) was
manually established by choosing the central two rows, and the mean of each VI
value corresponding to that plot was extracted.
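The listed VIs are standard formulations from the cited literature; a sketch of seven of them follows (consult Table 2 for the exact definitions used in the study; RTVIcore is omitted since its coefficients are not reproduced here, and the reflectance values are hypothetical):

```python
import numpy as np

def vegetation_indices(green, red, red_edge, nir):
    """Standard formulations of seven of the tested VIs, computed
    from band reflectances (scalars or arrays)."""
    g, r, re, n = (np.asarray(b, dtype=float)
                   for b in (green, red, red_edge, nir))
    return {
        "NDVI": (n - r) / (n + r),
        "NDVIred_edge": (n - re) / (n + re),
        "EVI2": 2.5 * (n - r) / (n + 2.4 * r + 1.0),
        "SRred_edge": n / re,
        "CIgreen": n / g - 1.0,
        "CIred_edge": n / re - 1.0,
        "MTCI": (n - re) / (re - r),
    }

# Hypothetical mean ROI reflectances for one plot
vis = vegetation_indices(green=0.08, red=0.05, red_edge=0.20, nir=0.45)
```

Each index would be computed per pixel over the ROI and then averaged to give the plot value.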
Statistical Analysis
The study plots were randomly divided into test and training data sets. For
the training data sets, simple regression analysis was performed to find the
best relationship fit between N concentration and each UAV based VI. The
determination coefficient (R2) and Root Mean Squared Error (RMSE) were
352 Unmanned Aerial Vehicles (UAV) and Drones
used to evaluate the predictive accuracy of each model. These parameters are
widely used to evaluate the performance of empirical models. The RMSE
is computed as shown in Equation (1):

$\mathrm{RMSE} = \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left(\hat{y}_i - y_i\right)^{2}}$  (1)

where ŷi is the predicted value of N concentration, yi is the measured N
concentration, and M is the total number of observations. In the next step, the
test data set was used to evaluate the performance of the model developed
in the previous step. Predicted values of N concentration were plotted
versus corresponding values of N concentration measured in the lab. The
performance of the regression models in estimating N for the training data set
was evaluated by calculating the R2 and RMSE.
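The per-VI simple regression and the R2/RMSE evaluation of Equation (1) can be sketched as follows, with hypothetical data values:

```python
import numpy as np

def fit_linear(x, y):
    """Least-squares fit y = a*x + b (the simple regression used per VI)."""
    a, b = np.polyfit(x, y, 1)
    return a, b

def r2_rmse(y_true, y_pred):
    """Determination coefficient R2 and the RMSE of Equation (1)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))          # Equation (1)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - np.sum(resid ** 2) / ss_tot, rmse

# Hypothetical training data: VI values vs. measured N concentration (%)
vi = np.array([0.3, 0.5, 0.6, 0.8])
n_meas = np.array([1.0, 2.0, 2.4, 3.3])
a, b = fit_linear(vi, n_meas)
r2, rmse = r2_rmse(n_meas, a * vi + b)
```

The same `r2_rmse` call, applied to the held-out test plots, gives the cross-validation figures reported in Tables 5 and 6.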
Table 2. Vegetation indices (VIs) tested in this study to estimate nitrogen (N)
content.
Means within each column followed by the same letter are not
significantly different at p < 0.01, as determined by Duncan's multiple
range test.
only VIs whose best relationship fit with N concentration was linear.
At Feekes 10 (Figure 4), the R2 of the developed models ranged from 0.82
to 0.88 and RMSE ranged from 0.06 to 0.08. At this growth stage, the largest
R2 between VIs and N concentrations again was obtained for CIgreen while
the developed model based on NDVIred edge had the lowest RMSE with N
concentration. The best relationship fit between N concentration and all VIs
(except EVI2) at Feekes 10 were quadratic (Figure 4). EVI2 was the only VI
whose best relationship fit with N concentration was linear.
All UAV based VIs used in this study showed strong positive correlations
with N concentration. In other words, all UAV based VIs used in this
study were good indicators of spring wheat plant N concentration for both
Feekes 5 and Feekes 10 growth stages. At Feekes 5 (Figure 3), red radiation
based UAV indices (NDVI and EVI2) had lower R2 and higher RMSE as
compared with other green and red edge based UAV indices. At this stage
(lower fractional vegetation cover), soil background influence on research
plots’ reflectance could be strong and could negatively affect the red based
VIs accuracy [37]. Red edge based VIs could minimize the soil reflectance
and isolate crop signal from soil reflectance as a function of canopy cover
changes [13]. This suggests that applying red edge based or green based VIs
from UAVs data can improve plant N concentration prediction compared
to the red based UAV VIs at Feekes 5 of wheat growth stage. At this stage,
CIgreen showed the highest R2 and lowest RMSE which suggests that the
green based VIs can be a better indicator of plant N concentration than the
red edge based VIs. This result is inconsistent with the result of a previous
study conducted by Li [38], who showed that red edge based VIs were more
effective for N estimation at earlier growth stages of wheat. In addition,
CIgreen showed a linear relationship with plant N concentration, which means the
sensitivity of the model did not change over the wide range of variation in
plant N concentration; it is straightforward to invert the relationship between
CIgreen and plant N concentration to obtain a synoptic measure of N concentration.
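Inverting a fitted linear CIgreen model is a one-line rearrangement; the coefficients below are purely illustrative, not the study's fitted values:

```python
def n_from_cigreen(ci_green, a, b):
    """Estimate plant N concentration from a fitted linear model N = a*CI + b."""
    return a * ci_green + b

def cigreen_from_n(n_conc, a, b):
    """Invert the same linear model to recover CIgreen from N."""
    return (n_conc - b) / a

# Hypothetical coefficients, for illustration only
a, b = 0.35, 1.1
n = n_from_cigreen(4.0, a, b)
assert abs(cigreen_from_n(n, a, b) - 4.0) < 1e-12  # exact round trip
```

A quadratic fit, by contrast, has a slope that varies with N, so its inversion is neither unique nor equally sensitive across the range.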
All the red edge based VIs showed comparable performance with similar
R2 and RMSE values at Feekes 5. At Feekes 10 growth stage (Figure 4), all
UAV based VIs used in the study performed very well with similar R2 and
RMSE values. At this stage, the performance of NDVI and EVI2 for plant N
concentration improved. All other UAV based VIs had similar performance
compared to Feekes 5. Similar results have been reported by previous studies
as well [38]. At Feekes 10, the crop canopy had fully developed, and soil
background effect on research plots’ reflectance had been reduced, so red
based and red edge based VIs showed similar performance.
Figure 3. Relationships between measured plant N content (%) vs. (a) NDVI,
(b) NDVIred edge, (c) EVI2, (d) SRred edge, (e) MTCI, (f) CIgreen, (g) CIred edge and (h)
RTVIcore at Feekes 5 growth stage.
Figure 4. Relationships between measured plant N content (%) vs. (a) NDVI,
(b) NDVIred edge, (c) EVI2, (d) SRred edge, (e) MTCI, (f) CIgreen, (g) CIred edge and
(h) RTVIcore at Feekes 10 growth stage.
The performance of the models developed in the previous step was
evaluated using the test data set for Feekes 5 and Feekes 10, separately. For
this purpose, we used those models to estimate the N
concentration. Table 5 and Table 6 show the results of comparison between
Table 5. The results of algorithm cross validation for estimating plant N con-
centration (N%) at Feekes 5 growth stage. Best fit functions, determination co-
efficients (R2) and root mean square errors (RMSE) of plant N concentration
estimation are given for eight vegetation indices.
Table 6. The results of algorithm cross validation for estimating plant N con-
centration (N%) at Feekes 10 growth stage. Best fit functions, determination
coefficients (R2) and root mean square errors (RMSE) of plant N concentration
estimation are given for eight vegetation indices.
VI          NDVI      NDVIred edge   EVI2      SRred edge   CIgreen   CIred edge   MTCI      RTVIcore
Slope       −2.9701   −2.9269        −2.7852   −3.2624      −1.1572   −2.876       −2.6159   −2.4721
Intercept    2.8194    3.1016         2.2996    3.3815       0.6554    2.809        2.5192    3.1225
not have strong effect on the reflectance of research plots. At Feekes 5, the
plant N concentration estimated based on NDVIred edge showed 1:1 correlation
with N concentration measured in the lab. At Feekes 10, the estimated and
measured N concentration were highly correlated for all developed models,
but the model based on CIgreen was the only model that had a 1:1 correlation
between estimated and measured plant N concentration. The observed high
correlation between UAV based VIs with plant N concentration indicates the
applicability of UAV for in-season data collection from agricultural fields.
CONCLUSIONS
Remotely sensed VIs have been extensively used to quantify wheat crop N status.
The UAV technology appears to provide a good complement to the current remote
sensing platforms for N monitoring in wheat by capturing low-cost, high resolution
images. These UAV technologies can bring a unique perspective to N management in
wheat by providing valuable information on wheat N status. Time, labor and money
can be saved using UAV data in crop monitoring. Results presented in this paper
show that high resolution images acquired with UAVs are a useful data source for in-
season wheat crop N concentration estimation. At Feekes 5 growth stage, red edge
and green based VIs had higher correlation with plant N concentration compared
to red based VIs because red edge based VIs can reduce the soil background effect
on crop reflectance. At Feekes 10 growth stage, all calculated VIs showed high
correlation with plant N concentration, and there were no significant differences
between red and red edge based VIs’ performance. At this stage, crop canopy
has been fully developed, and soil reflectance did not have strong effect on the
reflectance of research plots. At Feekes 5, the plant N concentration estimated based
on NDVIred edge showed 1:1 correlation with N concentration measured in the lab. At
Feekes 10, the estimated and measured N concentration were highly correlated for all
developed models, but the model based on CIgreen was the only model that had a 1:1
correlation between estimated and measured plant N concentration. The observed
high correlation between UAV based VIs with plant N concentration indicates the
applicability of UAV for in-season data collection from agricultural fields.
ACKNOWLEDGEMENTS
This study was supported in part by the Idaho Wheat Commission.
CONFLICTS OF INTEREST
The authors declare no conflicts of interest.
REFERENCES
1. Biswas, D.K. and Ma, B.L. (2016) Effect of Nitrogen Rate and
Fertilizer Nitrogen Source on Physiology, Yield, Grain Quality, and
Nitrogen Use Efficiency in Corn. Canadian Journal of Plant Science,
96, 392-403. https://doi.org/10.1139/cjps-2015-0186
2. Jain, N., Ray, S.S., Singh, J.P. and Panigrahy, S. (2007) Use of
Hyperspectral Data to Assess the Effects of Different Nitrogen
Applications on a Potato Crop. Precision Agriculture, 8, 225-239.
https://doi.org/10.1007/s11119-007-9042-0
3. Walsh, O.S., Shafian, S. and Christiaens, R.J. (2018) Evaluation of
Sensor-Based Nitrogen Rates and Sources in Wheat. International
Journal of Agronomy, 2018, Article ID: 5670479. https://doi.
org/10.1155/2018/5670479
4. Ramoelo, A., Skidmore, A.K., Schlerf, M., Heitkönig, I.M., Mathieu,
R. and Cho, M.A. (2013) Savanna Grass Nitrogen to Phosphorous
Ratio Estimation Using Field Spectroscopy and the Potential for
Estimation with Imaging Spectroscopy. International Journal of
Applied Earth Observation and Geoinformation, 23, 334-343. https://
doi.org/10.1016/j.jag.2012.10.009
5. Li, F., Mistele, B., Hu, Y., Chen, X. and Schmidhalter, U. (2014)
Reflectance Estimation of Canopy Nitrogen Content in Winter Wheat
Using Optimised Hyperspectral Spectral Indices and Partial Least
Squares Regression. European Journal of Agronomy, 52, 198-209.
https://doi.org/10.1016/j.eja.2013.09.006
6. Reay, D.S., Davidson, E.A., Smith, K.A., Smith, P., Melillo, J.M.,
Dentener, F. and Crutzen, P.J. (2012) Global Agriculture and Nitrous
Oxide Emissions. Nature Climate Change, 2, 410. https://doi.
org/10.1038/nclimate1458
7. Ravishankara, A.R., Daniel, J.S. and Portmann, R.W. (2009) Nitrous
Oxide (N2O): The Dominant Ozone-Depleting Substance Emitted in
the 21st Century. Science, 326, 123-125.
8. Bao, Y., Xu, K., Min, J. and Xu, J. (2013) Estimating Wheat Shoot
Nitrogen Content at Vegetative Stage from in Situ Hyperspectral
Measurements. Crop Science, 53, 2063-2071. https://doi.org/10.2135/
cropsci2013.01.0012
9. Schlemmer, M., Gitelson, A., Schepers, J., Ferguson, R., Peng,
Y., Shanahan, J. and Rundquist, D. (2013) Remote Estimation of
https://doi.org/10.1016/S0176-1617(11)81633-0
18. Gitelson, A.A., Gritz, Y. and Merzlyak, M.N. (2003) Relationships
between Leaf Chlorophyll Content and Spectral Reflectance and
Algorithms for Non-Destructive Chlorophyll Assessment in Higher
Plant Leaves. Journal of Plant Physiology, 160, 271-282. https://doi.
org/10.1078/0176-1617-00887
19. Dash, J. and Curran, P.J. (2004) The MERIS Terrestrial Chlorophyll
Index.
20. Wang, Y., Liao, Q., Yang, G., Feng, H., Yang, X. and Yue, J. (2016)
Comparing Broad-Band and Red Edge-Based Spectral Vegetation
Indices to Estimate Nitrogen Concentration of Crops Using Casi
Data. International Archives of the Photogrammetry, Remote Sensing
and Spatial Information Sciences, 137-143. https://doi.org/10.5194/
isprsarchives-XLI-B7-137-2016
21. Maes, W.H., Huete, A.R. and Steppe, K. (2017) Optimizing the
Processing of UAV-Based Thermal Imagery. Remote Sensing, 9, 476.
https://doi.org/10.3390/rs9050476
22. Pádua, L., Vanko, J., Hruška, J., Adão, T., Sousa, J.J., Peres, E. and
Morais, R. (2017) UAS, Sensors, and Data Processing in Agroforestry:
A Review towards Practical Applications. International Journal of
Remote Sensing, 38, 2349-2391. https://doi.org/10.1080/01431161.2
017.1297548
23. Simelli, I. and Tsagaris, A. (2015) The Use of Unmanned Aerial
Systems (UAS) in Agriculture. HAICTA, Kavala, 17-20 September
2015, 730-736.
24. Lu, J., Miao, Y., Huang, Y., Shi, W., Hu, X., Wang, X. and Wan, J.
(2015) Evaluating an Unmanned Aerial Vehicle-Based Remote Sensing
System for Estimation of Rice Nitrogen Status. 4th International
Conference on Agro-Geoinformatics, Istanbul, 20-24 July 2015, 198-
203.
25. Caturegli, L., Corniglia, M., Gaetani, M., Grossi, N., Magni, S.,
Migliazzi, M., Raffaelli, M., et al. (2016) Unmanned Aerial Vehicle
to Estimate Nitrogen Status of Turfgrasses. PLoS ONE, 11, e0158268.
https://doi.org/10.1371/journal.pone.0158268
26. Hunt, E.R. and Rondon, S.I. (2017) Detection of Potato Beetle Damage
Using Remote Sensing from Small Unmanned Aircraft Systems.
Journal of Applied Remote Sensing, 11, Article ID: 026013. https://
doi.org/10.1117/1.JRS.11.026013
27. Mission Planner Home. http://ardupilot.org/planner
28. Shi, Y., Thomasson, J.A., Murray, S.C., Pugh, N.A., Rooney, W.L.,
Shafian, S., Rana, A., et al. (2016) Unmanned Aerial Vehicles for High-
Throughput Phenotyping and Agronomic Research. PLoS ONE, 11,
e0159781. https://doi.org/10.1371/journal.pone.0159781
29. Official Methods of Analysis of AOAC International. http://www.aoac.org/aoac_prod_imis/aoac/publications/official_methods_of_analysis/aoac_member/pubs/oma/aoac_official_methods_of_analysis.aspx?hkey=5142c478-ab50-4856-8939-a7a491756f48
30. Rouse Jr., J., Haas, R.H., Schell, J.A. and Deering, D.W. (1974)
Monitoring Vegetation Systems in the Great Plains with ERTS.
31. Huete, A., Didan, K., Miura, T., Rodriguez, E.P., Gao, X. and Ferreira,
L.G. (2002) Overview of the Radiometric and Biophysical Performance
of the MODIS Vegetation Indices. Remote Sensing of Environment,
83, 195-213. https://doi.org/10.1016/S0034-4257(02)00096-2
32. Gitelson, A.A., Vina, A., Ciganda, V., Rundquist, D.C. and Arkebauer,
T.J. (2005) Remote Estimation of Canopy Chlorophyll Content
in Crops. Geophysical Research Letters, 32, L08403. https://doi.
org/10.1029/2005GL022688
33. Nicolas, T., Philippe, V. and Huang, W.J. (2010) New Index for
Crop Canopy Fresh Biomass Estimation. Spectroscopy and Spectral
Analysis, 30, 512-517.
34. Plénet, D. and Lemaire, G. (1999) Relationships between Dynamics
of Nitrogen Uptake and Dry Matter Accumulation in Maize Crops.
Determination of Critical N Concentration. Plant and Soil, 216, 65-82.
https://doi.org/10.1023/A:1004783431055
35. Muñoz-Huerta, R.F., Guevara-Gonzalez, R.G., Contreras-Medina,
L.M., TorresPacheco, I., Prado-Olivarez, J. and Ocampo-Velazquez,
R.V. (2013) A Review of Methods for Sensing the Nitrogen Status in
Plants: Advantages, Disadvantages and Recent Advances. Sensors, 13,
10823-10843. https://doi.org/10.3390/s130810823
36. Lü, X.T., Dijkstra, F.A., Kong, D.L., Wang, Z.W. and Han, X.G. (2014)
Plant Nitrogen Uptake Drives Responses of Productivity to Nitrogen
and Water Addition in a Grassland. Scientific Reports, 4, Article No.
4817. https://doi.org/10.1038/srep04817
37. Viña, A., Gitelson, A.A., Nguy-Robertson, A.L. and Peng, Y. (2011)
17
Research and Teaching Applications of
Remote Sensing Integrated with GIS:
Examples from the Field
CA, USA.
2 GeoAcuity, Los Angeles, CA, USA.
3 Institute for Creative Technologies, University of Southern California, Los Angeles, CA, USA.
ABSTRACT
Remote sensing is used in the Spatial Sciences Institute (SSI) across the
full spectrum of the organization’s teaching and research initiatives. From
INTRODUCTION
Within spatial sciences, the University of Southern California (USC)
Dornsife College of Letters, Arts and Sciences (Dornsife) Spatial Sciences
Institute (SSI) recognizes the importance of remotely sensed data as an
integral component of a Geographic Information System (GIS) and spatial
analysis to a variety of disciplines. As is evident from this special issue alone,
remotely sensed data integrated into a GIS can be applied to everything from
environmental analysis to human rights monitoring in hard-to-reach locales.
We focus this paper on two research projects contributed by faculty and
affiliate faculty that incorporate remotely sensed data, acquired both from
satellite imagery and unmanned aerial systems, into GIS and virtual reality/
augmented reality (VR/AR) to promote the well-being of the military and the
monitoring of wildlife. These vastly differing projects underscore the variety
of applications for remotely sensed data and different research that the
Spatial Sciences Institute undertakes, which also can be used in teaching and
promoting the next generation of data acquisition and analysis specialists.
Reinforcing concepts and scientific theories is best accomplished through
active learning, when the creation of new knowledge occurs through the
transformation of experience [1] [2]. We discuss these projects not only
within the context of the integration of remote sensing with GIS, but also
the broader context of integrating remote sensing into curricular advances
within the SSI at both graduate and undergraduate levels. Situated in the
heart of Los Angeles, CA, the online graduate programs in Geographic
Information Sciences & Technology (GIST) at USC are currently the only
US programs that conform to UNIGIS standards of design and delivery
for distance learning in GIS and GISci. This places USC Dornsife SSI in a
unique position at the forefront of curricular development.
Below, we briefly describe some of the work that Director of Modeling,
Simulation, & Training at the Institute for Creative Technologies (ICT),
Ryan McAlinden oversees, and work that Professors Jason Knowles and
Andrew Marx have initiated with the Catalina Island Conservancy (CIC),
a non-profit organization that privately holds and manages over 88% of the
land on Catalina Island. The USC-ICT research utilizes novel geospatial
techniques and advances in the areas of collection, processing, storage and
distribution of geospatial data, while the pilot for wildlife monitoring of
local bison and deer populations via UAS-based high resolution visible
(RGB) and thermal imaging has not previously been conducted on Catalina
Island. We also describe how projects such as these have been incorporated
into specific spatial sciences courses over the years to provide support
for research and platforms for enhanced student learning. Aspects of the
pedagogical approaches were previously presented at the GIS-Pro & Cal-GIS
2018 conference [3]; therefore, here we focus on newer research and faculty
capacity building. We also provide the workflow that we have undertaken to
increase the faculty capacity in operating UAS and working with remotely
sensed data. We acknowledge this may not work for all academic units, but
encourage academic departments to follow similar curricular advances.
The next section of this paper presents background, methodology, and
results of the wildlife monitoring pilot study on Catalina Island from 2018.
Section three describes the innovative research and application of remote
sensing integrated with GIS for AR/VR products and 3D planetary modeling
done at USC-ICT. Both research studies are presented within the context of
a Dornsife Spatial Sciences Institute graduate spatial data acquisition course
and the potential for building faculty capacity in integrating remote sensing
with GIS in this course. The fourth section focuses on novel integration
of remote sensing into the undergraduate and graduate curriculum within
spatial sciences and presents a potential workflow for other organizations to
build faculty capacity in this domain.
Background
The Dornsife Spatial Sciences Institute has long run an online spatial
data acquisition course that affords students the opportunity to experience
a weeklong field data excursion based at USC Wrigley Institute for
Environmental Studies (WIES) on Catalina Island. This course has evolved
from working solely with handheld GPS units to formally include unmanned
platforms. This evolution was bolstered when trained Remote Pilot License
(RPL) holders, Professors Jason Knowles and Andrew Marx, joined the
faculty of SSI in late 2017. Additionally, spatial sciences students and faculty
have previously worked intermittently with the Catalina Island Conservancy on
a variety of projects, mostly small-scale projects focused on areas near the
USC WIES campus or just beyond, during the week-long excursion, with
results remaining internal.
Having practical experience from previous work, both Knowles and
Marx were eager to continue their remote sensing work through SSI; the
already forged contacts at the CIC and the spatial data acquisition course
provided the opportunity to formally link their research with the student
curriculum and vice versa, to link the students with this practical application
of remote sensing. In April 2018, Knowles and Marx worked with the
CIC to develop and execute a pilot study for conducting wildlife surveys
of local bison and deer populations utilizing UAS-based high resolution
visible (RGB) and thermal imaging. The study looked not only at the feasibility
of utilizing fixed wing UAS-based imagery to identify wildlife, but also at
workflow maximization: could gains be made in the efficiency and efficacy
of airborne counting and cataloging of wildlife in comparison to more
traditional on the ground field survey methods [4]. In addition, it was also
anticipated that this methodology would be less obtrusive and invasive to
the observed species due to the collection altitude of the UAS. This is a vital
component to wildlife monitoring and an especially high priority for ethical
organizations such as the CIC, which are the sole guardians of wildlife in
an area. The pilot study was done flying a fixed wing senseFly eBee Plus
with SODA (RGB) and thermoMAP payloads (Figure 1), resulting in both
ultrahigh resolution aerial photography (at 1.12 in/2.84 cm ground sample
distance) and thermal data capture (at 9.49 in/24.11 cm ground sample
distance). The CIC Director and two field biologists joined the collection
team for the duration of the project.
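Ground sample distance follows directly from flight altitude, sensor pixel pitch, and focal length. The numbers below are illustrative only, not the actual eBee payload specifications or the flights' altitude:

```python
def ground_sample_distance_cm(altitude_m, pixel_pitch_um, focal_length_mm):
    """GSD in cm/pixel: the ground distance covered by one sensor pixel,
    GSD = altitude * pixel_pitch / focal_length (all converted to metres)."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100.0

# Illustrative numbers: 120 m altitude, 2.4 um pixel pitch, 10.6 mm focal length
gsd = ground_sample_distance_cm(120.0, 2.4, 10.6)
```

Flying lower or using a longer lens shrinks the GSD (finer detail) at the cost of ground coverage per image, which is the trade-off behind the 2.84 cm RGB versus 24.11 cm thermal figures.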
Methodology
The study area selected by the CIC was Middle Ranch Meadows on Catalina
Island (Figure 2), located near the center of the island in a valley surrounded
by rich topography, making on-the-ground field visual observations difficult.
Two days of flying were completed on April 4 and 5, with both high-resolution
RGB and thermal cameras for a total of five flights. Both pilots (Knowles and
Marx) were holders of a Civil Certificate of Waiver or Authorization (COA) for
commercial operations via Federal Aviation Administration (FAA) Code of
Federal Regulations (14CFR) Part 107. The April 4 flights were for orientation
and equipment shakedown/calibration, while April 5 flights were for wildlife
data capture. On April 5, the first flight was completed with the thermal
package, taking off 30 minutes before sunrise (FAA earliest allowed). This
early flight was conducted to maximize the temperature differential between
the cold evening ground and the wildlife. Immediately after the first, one-
hour flight, a second flight was performed with a RGB sensor over the same
study area. This, along with ground truth performed by the CIC personnel
using binoculars from a vantage point to identify wildlife, was used to
confirm the presence of wildlife and validate potential signatures identified in
the thermal imagery capture, which has been shown to be the most effective [5] [6].
Immediately following the flights, datasets were preprocessed in the field and
saved to secondary backup systems. Once back from the field, datasets were
processed overnight via commercial photogrammetry software (Pix4DMapper
v4.2) and the following datasets were produced within 48 hours of capture and
integrated within GIS (ArcMap v10.5):
• Ultra-high resolution RGB (visible) aerial photography (at 1.12
in/2.84cm ground sample distance);
• LAS Point cloud (Figure 3);
• Thermal surface model (at 9.49 in/24.11cm ground sample
distance);
• Digital surface model (DSM); and
• 3D Textured mesh (Figure 4).
After processing the thermal imagery, it was added to a GIS, for manual
analysis of the thermal imagery. Warm areas or literal “hotspots” from the
thermal imagery capture were identified and correlated to the visible imagery
and on the ground wildlife observations from the CIC field biologists.
Figure 2. Study area: flight coverage at Middle Ranch, Catalina Island, CA and
Catalina Island with California coastline.
Results
The preliminary results indicate that a fixed wing UAS appears to be a good
platform for wildlife surveys both for its ability to cover large areas and
host both RGB and thermal payloads. Initial surveys were a success with
both bison and mule deer identified in the thermal imagery captured by the
UAS survey (Figure 5 and Figure 6). Four mule deer and one bison were
ultimately counted over the 1.41 km2 study area.
The field crew and the RGB imagery flown immediately following the
thermal capture verified these signatures. The aerial collection methodology
was unquestionably more efficient in terms of being able to cover more
area (the eBee fixed wing has a flight time in excess of ~60 minutes and
depending on the collection altitude can cover vast areas in a single flight)
and visual observations of the wildlife during the flights indicated that there
was no disturbance, with the animals seemingly unaware of the UAS high
above them. In addition to the survey data, the derivative geospatial products
produced by the photogrammetric processing (Figure 4 above) were found to be
extremely useful datasets for the CIC field biologists and GIS staff that
would normally not be available to them.
Summary
While we consider this initial pilot study a success, we believe that there
can be further improvements to the workflow and collection methodology.
Collections and resultant thermal data capture would be significantly
improved by flying predawn (or at night) in order to get a larger wildlife
temperature differential signature between the environment and the wildlife.
Even at sunrise, the heat distribution was already much greater on east-
facing slopes making detection of thermal signatures more difficult. Future
studies will see the submission of an FAA Waiver to allow for predawn
(or night) flying to maximize the temperature differential. Additionally,
manual analysis of the thermal imagery, while doable, is time consuming
and cumbersome. This process would benefit from the use of a scripted
automation or semi-automated routine for entity detection [7]. A detection
algorithm identifying areas within the scenes where there are large temperature
differences would enable the user to more rapidly identify and catalog the
pertinent data from a large coverage area.
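A minimal sketch of such a detection routine is a global threshold that flags pixels well above the scene's background temperature; a production version would add connected-component grouping and size filtering. The synthetic frame here is purely illustrative:

```python
import numpy as np

def detect_hotspots(thermal, k=3.0):
    """Flag pixels much warmer than the scene background: values more than
    k standard deviations above the scene mean. Returns a boolean mask and
    the (row, col) coordinates of the flagged pixels."""
    t = np.asarray(thermal, dtype=float)
    mask = t > t.mean() + k * t.std()
    return mask, np.argwhere(mask)

# Synthetic thermal frame: cool ground (~8 C) with two warm "animal" pixels
scene = np.full((50, 50), 8.0)
scene += np.random.default_rng(0).normal(0.0, 0.2, scene.shape)
scene[10, 10] = scene[30, 40] = 25.0
mask, coords = detect_hotspots(scene)
```

The statistical threshold adapts to each scene, which matters at sunrise when east-facing slopes raise both the mean and the spread of background temperatures.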
Figure 5. Thermal imagery from ThermoMap camera of bison (left) and RGB
imagery (right) with movement of bison from time 0615 to 0715.
Figure 6. Thermal imagery from ThermoMap camera of four mule deer (cir-
cled).
Background
The USC-ICT’s Terrain efforts focus on researching and prototyping
capabilities that support a fully geo-referenced 3D planetary model for use
in the Army’s next-generation training and simulation environments. USC-
ICT research exploits new techniques and advances in the focus areas of
collection, processing, storage and distribution of geospatial data to various
runtime applications.
Generalized Methodology
USC-ICT collects aerial images with Commercial off the Shelf (COTS)
UAS using the ICT autonomous UAV path planning and imagery collection
system. The software provides a user-friendly interface that encodes
photogrammetry best practices. Unlike other commercially available UAV
remote control software, the ICT solution was designed for collecting aerial
images that cover a large area of interest with multiple flights. Parameters
that are required for data collections include a bounding box of the area
of interest, flight altitude, the desired overlap between images, and
camera orientation. An optimized flight path is then computed with these
parameters and the imaging task can be automatically accomplished. With
the acquired images, the 3D point clouds are reconstructed using commercial
photogrammetry software.
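The line-spacing geometry behind such a planner can be sketched as follows. This is not the ICT software itself, just the standard boustrophedon ("lawnmower") pattern derived from the across-track image footprint and the side overlap, with units and parameter values assumed:

```python
import math

def lawnmower_waypoints(width_m, height_m, footprint_across_m, side_overlap):
    """Boustrophedon waypoints covering a width x height box. Line spacing
    follows from the across-track footprint and the fractional side overlap
    between adjacent flight lines."""
    spacing = footprint_across_m * (1.0 - side_overlap)
    n_lines = math.ceil(width_m / spacing) + 1
    waypoints = []
    for i in range(n_lines):
        x = min(i * spacing, width_m)  # clamp the last line to the box edge
        ys = (0.0, height_m) if i % 2 == 0 else (height_m, 0.0)  # alternate
        waypoints += [(x, ys[0]), (x, ys[1])]
    return spacing, waypoints

# 100 m x 200 m box, 40 m across-track footprint, 70% side overlap
spacing, wps = lawnmower_waypoints(100.0, 200.0, 40.0, 0.7)
```

Forward overlap is handled analogously by the camera trigger interval along each line, and altitude and camera orientation fix the footprint itself.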
The photogrammetrically generated point clouds/meshes (Figure 7) from a
collection stage do not allow user-level or system-level interaction, as
they do not contain the semantic information to distinguish between objects.
The workflows in previous works require either manually labeling the points
into different parts, or re-training a new model for each new scene. USC-
ICT has designed a fully automated process that utilizes deep learning to
automatically extract relevant features and segmentation from point clouds.
To train the feature attribution model, the points are first manually labeled
with the following labels: ground, man-made objects, vegetation, etc. These
point clouds are then adapted to 3D voxel grids to produce a representation
suitable for deep neural networks. ICT designed a simple yet effective 3D
encoding and decoding network architecture based on 3D U-Net for point
cloud segmentation.
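Adapting a point cloud to a 3D voxel grid can be sketched as a simple occupancy count, a stand-in for the representation fed to the 3D U-Net; grid parameters here are hypothetical:

```python
import numpy as np

def voxelize(points, origin, voxel_size, grid_shape):
    """Occupancy grid from an (N, 3) point cloud: each point increments the
    voxel it falls in; points outside the grid are ignored."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid = np.zeros(grid_shape, dtype=np.int32)
    np.add.at(grid, tuple(idx[inside].T), 1)  # scatter-add point counts
    return grid

# Hypothetical 2 m cube at 0.5 m resolution
pts = np.array([[0.1, 0.1, 0.1], [0.9, 0.9, 0.9], [5.0, 0.0, 0.0]])
grid = voxelize(pts, origin=np.zeros(3), voxel_size=0.5, grid_shape=(4, 4, 4))
```

For segmentation training, the per-voxel label would be derived from the manually labeled points in the same way the counts are accumulated here.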
During training, a straightforward 3D data augmentation strategy was designed
to perform rotation, translation, and cropping on the input data at the same
time.
This expands the amount of data and allows for better generalization
capabilities of the model. The resulting pipeline is able to extract building,
ground, and vegetation in the raw point clouds automatically with high
accuracy and produce accurate 3D models (Figure 8).
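The combined rotate/translate/crop augmentation can be sketched as below; a z-axis rotation and an axis-aligned crop box are assumed for illustration, and the actual ICT strategy may differ:

```python
import numpy as np

def augment(points, rng, crop_size):
    """Rotate (about z), translate, and crop an (N, 3) point cloud in one
    pass, sketching the combined augmentation described above."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts = points @ rot.T + rng.uniform(-1.0, 1.0, size=3)  # rotate + shift
    center = pts.mean(axis=0)
    keep = np.all(np.abs(pts - center) <= crop_size / 2.0, axis=1)  # crop box
    return pts[keep]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))   # synthetic cloud for illustration
aug = augment(cloud, rng, crop_size=2.0)
```

Applying all three transforms in one pass yields many distinct training samples from each labeled scene, which is what improves the model's generalization.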
with other field collected data; we encourage students to work through the
processes to solve issues such as mismatched coordinate systems when
projecting UAS imagery (collected in UTM Zone 11N) together with data
from high accuracy receivers (collected in WGS 1984 and projected in Web
Mercator) under guidance from faculty. The students can then relate this
practical experience to real world situations in which they will be utilizing
these data and processes.
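As one concrete piece of such a mismatch, projecting WGS 1984 geographic coordinates to Web Mercator (EPSG:3857) uses the spherical Mercator equations shown below; in practice a library such as pyproj would handle this and the UTM Zone 11N side together. The coordinates are illustrative:

```python
import math

R = 6378137.0  # WGS 84 semi-major axis used by Web Mercator (metres)

def wgs84_to_web_mercator(lon_deg, lat_deg):
    """Project WGS 1984 lon/lat (degrees) to Web Mercator (EPSG:3857) metres."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4.0 + math.radians(lat_deg) / 2.0))
    return x, y

# Approximate location of Catalina Island (illustrative coordinates)
x, y = wgs84_to_web_mercator(-118.42, 33.40)
```

Overlaying UTM-referenced imagery on these coordinates without an explicit transform is exactly the kind of error the exercise asks students to diagnose.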
Additional curricular development culminated in the creation of a non-
major undergraduate course in spatial data collection utilizing drones, in which
students will, for the first time, work with the UAS to plan, collect, and process
imagery. This course revolves around applied and active learning experiences
in which students can develop and demonstrate a deeper knowledge and
understanding of the technological sciences behind the UAS-based collections,
processing, and visualizations through germane examples. Additionally,
this is a one-off course, meaning students across all disciplines are welcome
to enroll in the course, no prior experience with GIS is required, and this is
not limited to majors or minors. We anticipate that this course likely will draw
students from the most diverse majors and academic disciplines. Lastly, we
have developed a new graduate certificate program in Remotes Sensing for
Earth Observations (RSEO), which leverages remotely sensed data from a
multitude of sources such as Location Based Services (LBS), social media,
and Internet-of-Things (IoT) devices for a variety of applications from weather
and environmental observations to disaster management and recover efforts.
The program focuses on the acquisition, management, and integration of these
data for the purpose of advanced trend analysis, with the aim that students and
professionals develop proficiency working with these data and are able apply
their use in decision-making processes.
Figure 9. Workflow for capacity building and training of faculty for attaining
FAA RPL, Part 107.
Upon successful completion of the exam, all faculty pilots engaged in
physical training with Marx and Knowles. A pre-flight checklist, in-flight
protocol, and post-flight image processing workflow were standardized for use
with the current graduate spatial data acquisition course and are available
for additional courses, such as the aforementioned undergraduate course
that specializes in spatial data collection using drones. Test flights, under
the direction of Marx and Knowles, were conducted in open, unpopulated
parks, in accordance with FAA regulations and recommendations that limit
flights over people in public spaces. Faculty also conducted test
data collections, processing, and integration of outputs (DSM, 3D mesh)
into a GIS platform to visualize georeferenced data and hone
integration abilities.
DISCUSSION
Ideally, students will develop projects that test not only the utility of
remotely sensed imagery collected via UAS, but also the efficiency and efficacy
of their workflows. The weeklong Catalina experience can be used as a
testing ground for small-scale implementation of this technology in new
domains, prior to large-scale implementation. Our goal is to provide
students the opportunity to engage in active experimentation and improve
learning experiences and outcomes. This is accomplished through active
collection of data, detailed data analysis, and production of some
geo-visualization outcome.
Having experienced faculty who can train additional faculty, and act as a
resource throughout the development of the programs, new student projects,
and novel flight paths, is key to the success of our programs. Successful pilot
studies that apply remote sensing to wildlife tracking and monitoring, such
as the first study highlighted above, and 3D models of the natural and built
environment derived from UAS-collected imagery and an automated
computing process, such as that accomplished by USC-ICT, are exemplars of the
advantages of integrating remote sensing in a GIS. Additionally, faculty who
can effectively communicate and demonstrate the possibilities of remotely
sensed data acquired via UAS are vital to improved student experiences and
outcomes. SSI does not aim to train pilots; indeed, some students
may already have experience working with UAS and product visualization
through current jobs. Rather, courses and programs focus on the utility of
remotely sensed data within a GIS, the science of photogrammetry and
production of geo-referenced 3D models, and the variety of geospatial
analyses that can be run for the diverse applications of remotely sensed data.
Lastly, while the case studies presented here focus on remotely
sensed data collected mainly via unmanned aerial systems, our students
interact with a variety of remotely sensed data (LiDAR, multi- and
hyperspectral satellite imagery, etc.) within a GIS during these courses and
over the course of research projects that span the humanities and physical
sciences. Through this work, SSI reinforces that these educational
opportunities, focused primarily on data acquisition, are instrumental in
supporting the application of geographic information systems, science, and
technology in diverse fields ranging from human security and humanitarian
relief to sustainable urban and rural planning and public health.
CONCLUSION
We have presented two innovative research projects that integrate remotely
sensed data collected via UAS in GIS for distinct purposes, but that share
the common element of further integration of remote sensing into the
curriculum of spatial sciences courses. We successfully demonstrated the
potential for wildlife monitoring on Catalina Island utilizing UAS and
made tangible recommendations for future work in this domain. We also
presented the work of USC-ICT on the development of an automated process
to build a fully geo-referenced 3D model of the earth for training and
simulation needs, as the impetus and model for faculty development at SSI.
In order to achieve the curricular renovations referenced above, faculty must
be properly equipped to guide student data acquisition, processing, and
analysis; we presented the workflow by which the Spatial Sciences Institute
achieved this and its importance for our students' development.
ACKNOWLEDGEMENTS
We would like to thank the Catalina Island Conservancy for engaging with
the pilot study described here and permitting the public dissemination of the
results. We also thank the staff of the Wrigley Institute for Environmental
Studies on Catalina Island for their continued support of the field course.
CONFLICTS OF INTEREST
The authors declare no conflicts of interest regarding the publication of this
paper.
CHAPTER
18
Development of Drone Cargo Bay with
Real-Time Temperature Control
ABSTRACT
To deliver medical products (medicines, vaccines, blood packs, etc.)
in time to areas in need, methods of transporting goods using drones are
being studied. However, temperature-sensitive medical products may decay
due to outside temperature changes, and the transport time over a given
distance may vary considerably. As a result, the likelihood of the goods
deteriorating is very high. There is a need for a study on a cargo bay to prevent
Citation: Lee, S. and Kwon, Y. (2019), “Development of Drone Cargo Bay with Real-
Time Temperature Control”. World Journal of Engineering and Technology, 7, 612-
621. doi: 10.4236/wjet.2019.74044.
Copyright: © 2019 by authors and Scientific Research Publishing Inc. This work is li-
censed under the Creative Commons Attribution International License (CC BY). http://
creativecommons.org/licenses/by/4.0
this and to protect the medical goods. In this paper, to protect
temperature-sensitive medical goods, the cargo bay interior is equipped with
a cooling fan and electric heating elements. These elements
can be monitored and controlled at the user's discretion. By
using a web server built inside a cloud server, the temperature can be
controlled in real time from anywhere, without distance limitations.
We built the proposed device and installed it on the drone cargo bay. The
test results show that the cargo bay temperature can be controlled, and the
setting maintained, over a great distance. The user can watch the
temperature variations during transport and verify the condition of
the medical supplies with the data. It is expected that such development can
greatly enhance the utility of drone operations, especially for medical
supply transport applications.
Keywords: Real-Time Control, Cargo Bay Temperature, Cloud Server,
Medical Transport, Drones
INTRODUCTION
In many countries around the world, road accessibility varies with
geographic characteristics and weather conditions. As a result, it is
difficult to transport items such as medical supplies (medicines, vaccines,
blood packs, etc.) when needed [1]. Currently, medical products are
transported by a variety of methods: on foot, by vehicle, by helicopter, and
by airplane. However, these methods have limitations in terms of distance,
road accessibility, delivery costs, and geographic conditions [1]. To
overcome these limitations, methods for transporting medical products
using drones are being studied. Among various methods, the use of drones
has already been extended to many industries [2] [3]. In addition, the
transport of goods using drones has been steadily studied in recent years
[4]. According to papers that studied the effects of transporting blood by
drones, the use of unmanned aircraft does not affect the quality of the
products, for example through coagulation or deterioration of medicines
and biochemical samples [5]. However, the factor most affecting goods in
transport is the external environment. Depending on factors such as region,
season, and weather, medical and other items stored in a drone cargo bay
may deteriorate. Therefore, a method is needed to protect medical supplies
from outside temperature changes.
Currently, a common method for lowering the temperature inside a
drone cargo bay is to pack it with dry ice and ice packs; this is the
approach used in Rwanda's blood transport drone operations and in a few
other countries. This method offers no means to raise the internal
temperature or to control it to a desired setting. The existing method is
simple, but because the type, place, time, and storage temperature of the
stored items all differ, it is difficult to avoid deterioration under varying
external conditions. For this reason, a method capable of controlling the
cargo bay temperature is needed. To solve this problem, we developed
a technology to control the temperature inside the drone cargo bay in
real time. The remainder of this paper is organized as follows: Chapter 2
presents related works, Chapter 3 the configuration of the temperature
control system, and Chapter 4 hardware development. Chapter 5 describes
software development, and Chapter 6 describes the prototyping results.
Finally, the last chapter offers concluding remarks.
RELATED WORKS
In recent years, various approaches have been proposed for the delivery of
medical items (blood, medical supplies, defibrillators, life-rings, etc.) using
drones [5]. These include developing efficient delivery systems to overcome
obstacles such as mountains and buildings during flight, or operating and
managing drones using a modular design approach to effectively
accommodate various cargo types [6] [7]. However, the methods proposed
in these papers do not consider the condition of the transported items
during flight. In addition, the literature on preserving transported items
mainly deals with cargo bay insulation to maintain the internal temperature;
in most cases, the focus is on maintaining a low temperature, with no results
on transporting items that must be stored at high temperature. It is difficult
to find a related study that demonstrates flexible temperature control of a
drone cargo bay [8] [9]. In this regard, a method to control the cargo bay
temperature by raising or lowering it remotely can be viewed as quite
innovative [10] [11].
control module that exchanges data with the database, based mainly on a
Raspberry Pi. This controller drives the attached devices and sensors. The
software is largely divided into a cloud server and a web browser; a client
and database that can send and receive data to and from the cloud server
are configured, and the data can be transmitted to the UI (user interface)
of the web browser through the terminal. The real-time temperature control
method proposed in this study builds various servers for network
connectivity through a cloud server and controls the cargo bay module
using the deployed server, which can display the cargo bay module's data
and issue commands through the UI.
HARDWARE DESIGN
The temperature-controlled drone cargo bay proposed in this study is
730 mm wide, 250 mm long, and 200 mm high, as shown in Figure 2.
It was designed with the drone's payload restrictions in mind. To
effectively lower the internal temperature during flight, an air circulation
port was fabricated; the cooling fan draws air into the cargo bay, which
lowers the internal temperature. A thermic ray, which generates heat from
the electricity supply, was used for heating, and a barrier membrane was
designed to maintain the cargo bay temperature efficiently. The barrier
membrane also divides the cargo bay interior into an interior compartment
for storing goods and an exterior compartment for the temperature control
devices. Table 1 and Table 2 show the details.
SOFTWARE DESIGN
Network Connection
Communication using an RF sensor was first implemented as a way to check
and control the temperature inside the cargo bay in real time [12]. However,
to address the distance constraints of RF communication, a network
connection method was adopted. A network connection using a specific IP
address, with a web server built in the controller's memory, is not feasible
given the nature of the drone, because the network can be disconnected
over geographic distances. Therefore, we designed a static IP that plays an
intermediate bridge role by building a web server on a cloud server. The
web server was installed using Apache, PHP, and MySQL on the cloud
server, and data interlocks were run to receive data from the controller
and send commands. Table 3 shows the data interlocking list for network
connections. All six data linkage items are implemented in PHP through the
terminal. The system is designed to display data and issue commands on the
UI through the interlocked data.
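As an illustration of how the controller and cloud server might exchange interlocked data, the following Python sketch serializes a reading for upload and parses a command sent back. The JSON field names here are hypothetical assumptions; the actual schema is defined by the paper's six PHP interlocks:

```python
import json
import time

def encode_reading(temp_c, heater_on, fan_on):
    """Serialize one controller reading for upload to the cloud web server.

    Field names are hypothetical; the real schema lives in the PHP interlocks.
    """
    return json.dumps({
        "timestamp": int(time.time()),
        "temp_c": round(temp_c, 1),  # one decimal, matching ~0.1 C resolution
        "heater": int(heater_on),
        "fan": int(fan_on),
    })

def decode_command(payload):
    """Parse a command record sent back from the server UI to the controller."""
    cmd = json.loads(payload)
    return float(cmd["lower_c"]), float(cmd["upper_c"]), bool(cmd["power"])
```

On the Raspberry Pi, a loop would post each encoded reading to the cloud server over HTTP and apply any decoded command to the fan and thermic ray.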
The control UI is designed so that users neither receive unnecessary
information through the controller's kernel window nor have to compose
complicated controller command statements; it lets them receive the
necessary information quickly or issue commands. The first box of each
configuration in Figure 3 shows the current temperature, along with the
upper and lower temperature limits. The second contains the input fields
for the upper and lower temperature limits. The third shows the operating
status of the thermic ray and cooling fan, and the on/off control switch for
the controller power. The fourth box is a graph of the cargo bay's internal
temperature, updated continuously to show the changes over time.
The temperature control algorithm first sets the thermic ray and cooling
fan to zero (off), as shown in Figure 4. The upper and lower limit values
are then entered, and the algorithm repeats or terminates according to each
condition. For example, to store an article with a storage temperature of
25˚C to 26˚C, the upper limit is set to 26˚C and the lower limit to 25˚C.
After the limits are entered, if the internal temperature is higher than the
upper limit, the cooling fan operates; if the internal temperature is lower
than the lower limit, the thermic ray is activated; and if the internal
temperature is between the limits, the current temperature is maintained.
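One pass of this decision logic can be sketched in Python as follows; this is a reading of the algorithm in Figure 4, not the authors' exact code:

```python
def control_step(temp_c, lower_c, upper_c):
    """One pass of the temperature control loop: returns (fan_on, heater_on)."""
    if temp_c > upper_c:
        return True, False     # too warm: run the cooling fan
    if temp_c < lower_c:
        return False, True     # too cold: energize the thermic ray
    return False, False        # within band: hold the current temperature
```

For the 25˚C to 26˚C example above, a reading of 26.4˚C turns the fan on and leaves the thermic ray off.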
PROTOTYPE DESIGN
Figure 5 shows the VTOL (vertical take-off and landing) drone equipped
with a prototype temperature-controlled cargo bay. Since the payload is
about 2 kg, the weight of the medical product being transported had to be
considered, so the cargo bay prototype was built at a weight of around
1.5 kg. To check whether the temperature inside the cargo bay is controlled
and maintained, two temperature sensors were used to measure the current
temperature. The temperature inside the cargo bay was logged every 10
seconds for 15 minutes. As shown in Figure 5, the upper and lower limits
of the cargo bay temperature were set to 25˚C and 24˚C, respectively, and
the cargo bay temperature was measured to confirm that it was maintained
at the desired setting.
It took about three minutes for the cargo bay internal temperature to
reach the set point at 25.4˚C. After that, whenever the internal temperature
drifted out of the set range, the cooling fan operated to maintain 25˚C. To
check whether the temperature can also be maintained by the thermic ray,
the cargo bay internal temperature was set to 26˚C - 27˚C, as shown in
Figure 6. Compared to the cooling fan, the set temperature was reached in
about 40 seconds. If the temperature goes out of the set range, the thermic
ray operates to raise it back and maintain the internal temperature, with an
error of about 0.1˚C.
The controller effectively recognizes and controls the setting
measured by the temperature sensors. However, an error occurs between the
set temperature and the holding temperature.
The error is due to delay in the data: the system must react immediately
when the temperature changes, but precise measurement, control, and data
processing require high-performance computers, sensors, and devices. In
this study, however, due to the weight and payload limitations of the drone,
the use of high-performance precision devices was not feasible (Table 4).
Figure 6. External temperature vs. cargo bay internal temperature. (a), (b)
Graphs of internal temperature and external temperature.
CONCLUSION
This paper dealt with the development of a temperature-controlled cargo bay
to protect goods from decay due to external environmental factors. A small
controller, a Raspberry Pi, was used to drive the temperature control
devices, and a cloud server was used to overcome communication distance
limitations. Monitoring the cargo bay internal temperature for 15 minutes
confirmed that the set temperature was effectively maintained and
monitored, despite some small errors. The developed device can be used to
protect and safely transport items that easily spoil or deteriorate, such as
medical blood products that are very sensitive to outside temperature
variations. Even though the proposed device showed a small range of error
during operation, this problem can be addressed by using more accurate
devices. The method provides a new approach to remotely raising or
lowering the inside temperature of a drone cargo bay in real time. By doing
so, the temperature variations during the flight can be verified, and the end
user can be assured that the transported items have arrived safely.
ACKNOWLEDGEMENTS
This work was supported by the Hyundai-NGV Future Technology Research
Fund.
CONFLICTS OF INTEREST
The authors declare no conflicts of interest regarding the publication of this
paper.
REFERENCES
1. Laksham, K.B. (2019) Unmanned Aerial Vehicle (Drones) in Public
Health: A SWOT Analysis. Journal of Family Medicine and Primary
Care, 8, 342-349. https://doi.org/10.4103/jfmpc.jfmpc_413_18
2. Griffiths, F. and Ooi, M. (2018) The Fourth Industrial Revolution-
Industry 4.0 and IoT. IEEE Instrumentation & Measurement Magazine,
21, 29-43. https://doi.org/10.1109/MIM.2018.8573590
3. Li, B., Fei, Z. and Zhang, Y. (2019) UAV Communications for 5G and
Beyond: Recent Advances and Future Trends. IEEE Internet of Things
Journal, 6, 2241-2263. https://doi.org/10.1109/JIOT.2018.2887086
4. Klinkmueller, K.M., Wieck, A.J., Holt, J.K. and Valentine, A.W. (2019)
Airborne Delivery of Unmanned Aerial Vehicles via Joint Precision
Airdrop Systems. AIAA Scitech 2019 Forum, San Diego, CA, 7-11
January 2019, 2285. https://doi.org/10.2514/6.2019-2285
5. Amukele, T., Ness, P.M., Tobian, A.R., Boyd, J. and Street, J. (2016)
Drone Transportation of Blood Products. Transfusion, 57, 582-588.
https://doi.org/10.1111/trf.13900
6. Scott, J.E. and Scott, C.H. (2017) Drone Delivery Models for
Healthcare. Proceedings of the 50th Hawaii International Conference
on System Sciences, Hilton Waikoloa Village, HI, 4-7 January 2017,
3297-3304. https://doi.org/10.24251/HICSS.2017.399
7. Lee, J.H. (2017) Optimization of a Modular Drone Delivery System.
2017 IEEE International Systems Conference, Montreal, 24-27 April
2017, 1-8.
8. Erdos, D., Erdos, A. and Watkins, S.E. (2013) An Experimental
UAV System for Search and Rescue Challenge. IEEE Aerospace and
Electronic Systems Magazine, 28, 32-37. https://doi.org/10.1109/
MAES.2013.6516147
9. Xiang, G., Hardy, A., Rajeh, M. and Venuthurupalli, L. (2016) Design
of the Life-Ring Drone Delivery System for Rip Current Rescue. 2016
IEEE Systems and Information Engineering Design Symposium,
Charlottesville, VA, 29 April 2016, 181-186. https://doi.org/10.1109/
SIEDS.2016.7489295
10. https://www.hankookilbo.com/News/Read/201903201101062687
11. https://fortune.com/2019/01/07/delivery-drones-rwanda/
12. Lee, S.-H., Yang, S.-H. and You, Y.-M. (2017) Design and Development
of Agriculture Drone Battery Usage Monitoring System Using