Fyp Report
Fyp Report
Mudassar Hussain
2030-0142
Ahsan Mansha
2030-0123
Supervisor:
Dr. Abdul Khaliq
Co-Supervisor:
Dr. Yasir Jan
_________________________ _______________________
_____________________________
2030-0123
Ahsan Mansha
Supervisor: Chairman
Dr Abdul Khaliq Department of Electrical and
Professor Computer Engineering
ii
Dedication
We dedicate our final year project, "Autonomous Exploring and Mapping Robot," to our family and many
friends. A special feeling of gratitude and respect goes to our loving parents, whose words of encouragement
and push for tenacity ring in our ears.
We also dedicate this project to our many friends, our supervisor, Co-supervisor, advisor, and other faculty
members who have supported us throughout this journey. Your guidance, support, and belief in our abilities
have been instrumental in making this project a reality.
iii
Acknowledgments
We would like to express our deepest gratitude to everyone who has contributed to the successful
completion of our final year project, "Autonomous Exploring and Mapping Robot."
First and foremost, we extend our heartfelt thanks to our supervisor, Dr. Abdul Khaliq, for their invaluable
guidance, insightful feedback, and unwavering support throughout this project. Your expertise and
encouragement have been pivotal in shaping our work.
We are profoundly grateful to our co-supervisor, Dr. Yasir Jan, whose advice and direction have been
instrumental in overcoming numerous challenges. Your mentor-ship has been a cornerstone of our success.
We also wish to thank our families for their unending love, patience, and encouragement. Your belief in us
has been a constant source of motivation.
Finally, to our friends and peers, thank you for your camaraderie, assistance, and the countless discussions
that have enriched our understanding and made this journey memorable.
This project is a collective achievement, and we are deeply appreciative of everyone who has contributed to
it. Thank you.
Mudassar Hussain
Ahsan Mansha
Syed Mujahid Bin Nauman
iv
Abstract
The "Autonomous Exploring and Mapping Robot" project focuses on developing a sophisticated robotic
system designed to navigate and map unknown environments autonomously. Utilizing the advanced sensor
LiDAR the robot can perceive and interpret its surroundings with high precision. Central to its functionality
is the implementation of simultaneous localization and mapping (SLAM) algorithms, which enable the robot
to construct and update maps in real-time while accurately tracking its position within these maps. This
capability is critical for ensuring the robot's effective navigation and situational awareness.
To enhance its autonomous navigation, the robot employs machine learning techniques for path planning and
obstacle avoidance. These techniques allow the robot to dynamically adapt to changes in the environment,
making decisions on the ground to avoid obstacles and select optimal paths. The integration of these
technologies ensures that the robot can explore and map complex environments efficiently and reliably.
The project's ultimate aim is to create a versatile and robust robotic platform that can be deployed in various
applications. Potential uses include search and rescue operations, where the robot can quickly and safely
explore hazardous areas; environmental monitoring, where it can gather data in remote or dangerous
locations; and industrial automation, where it can navigate and operate within dynamic and unpredictable
settings. By achieving significant advancements in autonomous navigation and environmental mapping, this
project aspires to contribute to the broader field of robotics, enhancing the capabilities and applications of
autonomous systems in real-world scenarios.
v
Table of Content
Declaration ............................................................................................................................................................................................. i
Final Approval ...................................................................................................................................................................................... ii
Dedication ............................................................................................................................................................................................ iii
Acknowledgments ................................................................................................................................................................................iv
Abstract ................................................................................................................................................................................................. v
List of Figures ....................................................................................................................................................................................viii
List of Tables ....................................................................................................................................................................................... ix
Chapter 1 ..............................................................................................................................................................................................1
Introduction ......................................................................................................................................................................................... 1
1.1 Project Overview .............................................................................................................................................................................1
1.2 Problem Statement .......................................................................................................................................................................... 1
1.3 Project Objectives ........................................................................................................................................................................... 2
1.4 Expected Outcomes .........................................................................................................................................................................3
1.5 Project Methodology .......................................................................................................................................................................3
1.6 Report Outline .................................................................................................................................................................................5
Chapter 2 ..............................................................................................................................................................................................7
Literature Review ................................................................................................................................................................................7
2.1 Literature Review ............................................................................................................................................................................7
2.1.1 Key Terms/Concepts ........................................................................................................................................................... 7
2.1.2 Scope of Review ..................................................................................................................................................................8
2.2 Analysis Of Mobile Robot Indoor Mapping Using Gmapping Based Slam With Different Parameter [13] ................................. 8
2.3 The Sensor Based Random Graph Method For Cooperative Robot Exploration [8] ..................................................................... 9
2.4 A Frontier-Based Approach For Autonomous Exploration [6] ...................................................................................................... 9
2.5 A Path Planning Approach To Compute The Smallest Robust Forward Invariant Sets [5] ......................................................... 10
2.6 Real Time Autonomous Ground Vehicle Navigation In Heterogeneous Environments Using a 3D Lidar [11] ..........................10
2.7 Autonomous Robotic Exploration Based On Multiple Rapidly-Exploring Randomized Trees [1] ............................................. 11
Chapter 3 ............................................................................................................................................................................................14
System Design .................................................................................................................................................................................... 14
3.1 System Design ...............................................................................................................................................................................14
3.1.1 Block Diagram .................................................................................................................................................................. 14
3.2 Hardware Design ...........................................................................................................................................................................16
3.2.1 Turtlebot3 .......................................................................................................................................................................... 17
3.2.2 LiDAR ............................................................................................................................................................................... 18
3.2.3 Raspberry Pi 4 ................................................................................................................................................................... 19
3.2.4 Motor Controllers and Motors ...........................................................................................................................................20
3.3 Software Design ............................................................................................................................................................................ 21
3.3.1 ROS ................................................................................................................................................................................... 21
3.3.2 Gazebo ...............................................................................................................................................................................21
3.3.3 Rviz ................................................................................................................................................................................... 22
3.4 Flow Chart .....................................................................................................................................................................................22
3.5 Inputs, Outputs, and Processing .................................................................................................................................................... 25
3.5.1 Inputs ................................................................................................................................................................................. 25
3.5.2 Outputs .............................................................................................................................................................................. 25
3.5.3 Processing - Algorithms Explanation ................................................................................................................................25
Chapter 4 ............................................................................................................................................................................................27
Algorithm Implementation ...............................................................................................................................................................27
4.1 Introduction to SLAM ...................................................................................................................................................................27
4.1.1 SLAM Algorithm Selection and Rationale ....................................................................................................................... 28
4.2 RRT Algorithm and Adaptions for Dynamic Obstacles ............................................................................................................... 30
4.2.1 Basic RRT Algorithm ........................................................................................................................................................30
4.2.2 Adaptations for Dynamic Obstacles ..................................................................................................................................31
4.2.3 Implementation considerations ..........................................................................................................................................32
Chapter 5 ............................................................................................................................................................................................33
Experimental Setup ...........................................................................................................................................................................33
5.1 Setup for Experiment .................................................................................................................................................................... 33
5.1.1 Description of Testing Environment ................................................................................................................................. 33
5.1.2 Map Generation .................................................................................................................................................................34
5.2 ROS (Robot Operating System) Configuration: ........................................................................................................................... 36
5.3 Launch Files Setup ........................................................................................................................................................................37
5.3.1 Problem Being Solved .......................................................................................................................................................37
5.3.2 Proposed Solution ..............................................................................................................................................................37
5.3.3 Pseudocode ........................................................................................................................................................................37
vi
5.3.4 Code ...................................................................................................................................................................................38
5.4 SLAM Initialization ...................................................................................................................................................................... 38
5.4.1 Problem Being Solved .......................................................................................................................................................38
5.4.2 Proposed Solution ..............................................................................................................................................................38
5.4.3 Pseudocode ........................................................................................................................................................................39
5.4.4 Code ...................................................................................................................................................................................40
5.5 RRT Exploration Initialization ......................................................................................................................................................40
5.5.1 Problem Being Solved .......................................................................................................................................................40
5.5.2 Proposed Solution ..............................................................................................................................................................40
5.5.3 Pseudocode ........................................................................................................................................................................41
5.5.4 Code ...................................................................................................................................................................................41
5.6 Autonomous Navigation ............................................................................................................................................................... 42
5.7 Hardware Setup ............................................................................................................................................................................. 43
5.7.1 Connection with Raspberry Pi 4 ........................................................................................................................................44
5.7.2 Real-Time Ranging Visualization ..................................................................................................................................... 44
5.8 Limitations .................................................................................................................................................................................... 50
5.8.1 Driver Availability ............................................................................................................................................................ 51
5.8.2 Computational Constraints ................................................................................................................................................ 51
5.8.3 Implementation Complexity ..............................................................................................................................................51
5.8.4 Reliability and Robustness ................................................................................................................................................ 52
Chapter 6 ............................................................................................................................................................................................53
Results and Analysis ......................................................................................................................................................................... 53
6.1 Outcomes and Observations ..........................................................................................................................................................53
6.2 Detailed Findings .......................................................................................................................................................................... 54
6.2.1 Map Generation .................................................................................................................................................................54
6.2.2 Map Navigation .................................................................................................................................................................56
6.2.3 Impact of Motor Speed on Data Collection and Mapping Accuracy in Autonomous Robots ..........................................58
Chapter 7 ............................................................................................................................................................................................59
Sustainable Development ..................................................................................................................................................................59
7.1 Introduction to Sustainable Development .....................................................................................................................................59
7.1.1 Autonomous Exploration and Mapping Robot in Sustainable Development ................................................................... 59
7.2 Contribution to Sustainable Development .................................................................................................................................... 60
7.2.1 Environmental Monitoring and Conservation ...................................................................................................................60
7.2.2 Urban Planning and Smart Cities ...................................................................................................................................... 61
7.2.3 Agriculture and Food Security .......................................................................................................................................... 61
7.2.4 Disaster Management ........................................................................................................................................................ 62
7.2.5 Scientific Research and Education .................................................................................................................................... 63
7.2.6 Renewable Energy Management .......................................................................................................................................63
7.2.7 Reducing Carbon Footprint ............................................................................................................................................... 63
7.3 Effect on environment ...................................................................................................................................................................65
7.3.1 Environmental Monitoring and Conservation ...................................................................................................................65
7.3.2 Urban Planning and Smart Cities ...................................................................................................................................... 66
7.3.3 Agriculture and Food Security .......................................................................................................................................... 66
7.3.4 Disaster Management ........................................................................................................................................................ 67
7.3.5 Scientific Research and Education .................................................................................................................................... 67
7.3.6 Renewable Energy Management .......................................................................................................................................67
7.3.7 Reducing Carbon Footprint ............................................................................................................................................... 68
Chapter 8 ............................................................................................................................................................................................70
Conclusion ..........................................................................................................................................................................................70
8.1 Summary ....................................................................................................................................................................................... 70
8.1.1 Summary of Objectives and Achievements ...................................................................................................................... 70
8.1.2 Key Findings and Results ..................................................................................................................................................70
8.1.3 Challenges Faced ...............................................................................................................................................................72
8.2 Conclusion .....................................................................................................................................................................................72
References ........................................................................................................................................................................................... 73
vii
List of Figures
Figure 3.1: Block Diagram of Autonomous Robot System Work Flow ..........................................................................14
Figure 3.2: Turtlebot3 ...................................................................................................................................................... 17
Figure 3.3: Working Principle of LiDAR ........................................................................................................................ 18
Figure 3.4: LiDAR ........................................................................................................................................................... 19
Figure 3.5: Raspberry Pi 4 ............................................................................................................................................... 20
Figure 3.6: Environment of a Maze ................................................................................................................................. 21
Figure 3.7: Turtlebot3 in Gazebo Simulator .................................................................................................................... 22
Figure 3.8: Flow Chart of Autonomous Exploration Algorithm ..................................................................................... 23
Figure 4.1: Exploration Strategy of SLAM ..................................................................................................................... 28
Figure 4.2: Flowchart of Working Principle of Gmapping Algorithm ............................................................................ 30
Figure 4.3: Working of RRT ............................................................................................................................................31
Figure 4.4: Nodes in Tree Structure of RRT ...................................................................................................................32
Figure 5.1: Mapping Environment ...................................................................................................................................33
Figure 5.2: Generating Map by turtlebot3 (using LIDAR sensor) ...................................................................................34
Figure 5.3: Environment Mapping by Turtlebot3 ............................................................................................................36
Figure 5.4: Navigation of Mapped Environment ............................................................................................................. 43
Figure 5.5: LiDAR ........................................................................................................................................................... 43
Figure 5.6: Connection with Raspberry Pi 4 ....................................................................................................................44
Figure 5.7: Flow Chart of Real Time Ranging Visualization ..............................................................................................45
Figure 6.1 (a): Environment .............................................................................................................................................53
Figure 6.1 (b): Robot Performance .................................................................................................................................. 53
Figure 6.2: Distance traveled by Robot ........................................................................................................................... 54
Figure 6.3 (a): Autonomous Map Navigation ..................................................................................................................56
Figure 6.3 (b): Autonomous Map Navigation ..................................................................................................................57
Figure 6.3 (c): Autonomous Map Navigation ..................................................................................................................57
Figure 7.1: Illustrates the Sustainable Development Goals ............................................................................................. 59
Figure 8.1: Implementation of SLAM ............................................................................................................................. 70
Figure 8.2: Illustration of Autonomous Navigation using RRT Algorithm .....................................................................71
viii
List of Tables
ix
Chapter 1
Introduction
Autonomous exploring and mapping robots employ a sophisticated combination of sensors [2] algorithms,
and control systems to perceive their environment, plan paths, avoid obstacles, and generate maps. The
integration of computer vision and artificial intelligence allows these robots to interpret complex data and
make real-time decisions, significantly enhancing their efficiency and reliability. These capabilities are
crucial for applications such as disaster response, where robots need to operate in unpredictable and often
dangerous environments, and industrial automation, where they can streamline processes and improve safety.
The importance of autonomous exploring and mapping robots is underscored by the growing demand for
automated solutions in various industries. For instance, in search and rescue operations [11] autonomous
robots can enter dangerous areas, locate survivors, and provide critical information to human responds. In
agriculture, they can map fields, monitor crop health, and identify areas needing attention, thereby increasing
efficiency and yield. In industrial settings, they can navigate large warehouses, track inventory, and ensure
operational efficiency without human intervention. The versatility and potential of these robots make them a
focal point of modern robotics research and development.
1
ability to navigate and map effectively [7]. These challenges necessitate the development of more advanced
systems that can adapt to changing conditions and make intelligent decisions autonomously.
Autonomous Navigation: Develop a robot capable of navigating unknown environments without human
intervention. This includes real-time path planning and obstacle avoidance using advanced sensors and
algorithms. The navigation system should be able to operate in both structured and unstructured
environments, adapting to different conditions and challenges.
Environment Mapping: Implement techniques for creating detailed and accurate maps of the explored area.
This involves using SLAM (Simultaneous Localization and Mapping) to concurrently map the environment
and track the robot's location within it [3]. The mapping system should be able to handle dynamic changes in
the environment and update the map in real-time.
Obstacle Detection and Avoidance: Integrate sensors and algorithms to detect and avoid obstacles
dynamically, ensuring the robot can navigate safely and efficiently. This includes developing robust obstacle
detection mechanisms that can handle different types of obstacles, from static objects to moving entities.
Real-Time Data Processing: Ensure the robot can process sensor data and make navigation decisions in
real-time, maintaining high performance and accuracy [11]. This involves optimizing the computational
algorithms and hardware to ensure quick and reliable data processing.
System Integration: Achieve seamless integration of hardware and software components, including sensors,
micro-controllers, and algorithms, to create a cohesive and functional autonomous robot [2]. The integration
should ensure that all components work together harmoniously, providing a reliable and efficient system.
Scalability and Adaptability: To design a scalable and adaptable system, it's essential to focus on
modularity and flexibility from the outset. This can be achieved by developing a robust architecture that
supports plug-and-play components, enabling the system to be easily extended or modified for various
applications. By using standardized interfaces and protocols, the system can seamlessly integrate new
technologies or adapt to different environments and tasks. Additionally, incorporating machine learning
algorithms allows the system to learn from its surroundings and improve its performance over time,
enhancing its versatility. Designing with open-source tools and frameworks can also facilitate community
contributions, driving innovation and broadening the system's potential for use in diverse fields.
Emphasizing user-friendly interfaces and comprehensive documentation ensures that the system can be
effectively utilized and adapted by users with varying levels of technical expertise. By prioritizing scalability
2
and adaptability, the system can meet the demands of current applications while remaining prepared for
future challenges and opportunities.
Functional Autonomous Robot: A fully functional robot capable of autonomous exploration and mapping
in various environments. The robot should be able to navigate, detect obstacles, and create accurate maps
without human intervention. This outcome includes demonstrating the robot's capabilities in different
scenarios, such as indoor navigation, outdoor exploration, and complex environments.
Enhanced Mapping Techniques: Improved techniques for real-time environment mapping, leveraging
advancements in computer vision and machine learning to enhance the accuracy and detail of generated
maps. This includes developing new algorithms for SLAM and exploring innovative approaches to map
representation and visualization.
Reliable Obstacle Avoidance: Robust obstacle detection and avoidance mechanisms that ensure safe and
efficient navigation in dynamic and unpredictable environments. This involves testing the robot's ability to
handle various types of obstacles and ensuring that the avoidance system is reliable and effective.
Scalable and Adaptable System: A scalable and adaptable system design that can be extended or modified
for various applications, demonstrating the potential for broader use in different domains. This includes
showcasing the system's versatility and its ability to be customized for specific tasks and environments.
Contribution to Research and Development: Valuable contributions to the field of autonomous robotics,
providing insights, methodologies, and data that can inform future research and development efforts. This
includes publishing research findings, sharing data sets, and contributing to the academic and industrial
communities.
Potential Applications: Identification of potential applications and impact of the autonomous exploring and
mapping robot in fields such as search and rescue operations, environmental monitoring, industrial
automation, agriculture, and more. This includes exploring real-world applications and demonstrating the
practical benefits of the developed system.
Ubuntu Installation: Begin by installing the Ubuntu operating system, which is commonly used in robotics
due to its compatibility with ROS.
ROS Setup: Install ROS, which provides a collection of tools and libraries to develop robot software. It
simplifies the process of building robot applications by providing infrastructure for communication between
different parts of the robot system.
RRT Algorithm: Implement the Rapidly-exploring Random Tree (RRT) algorithm. RRT is an efficient
path-planning algorithm particularly suited for navigating unknown environments. It works by incrementally
building a tree from the robot's starting point and exploring possible paths until it finds a path to the target
location.
SLAM Implementation: Use SLAM to simultaneously create maps of the environment and track the robot's
position within it. This allows the robot to update its map in real-time as it explores, maintaining accuracy in
dynamic or unknown environments.
Gazebo Simulation: Utilize Gazebo, a simulation environment that allows you to test navigation and
mapping algorithms in a virtual setting before deploying them on the actual robot. This helps in identifying
issues early and refining algorithms without risking hardware damage.
v. Algorithm Testing
SLAM Verification: Verify the precision of SLAM-generated maps in simulation to ensure the robot
accurately understands its surroundings.
Exploration Testing: Test the robot’s ability to explore environments efficiently, ensuring it covers the area
effectively without unnecessary repetition.
RRT Performance Evaluation: Evaluate the performance of the RRT algorithm to ensure it generates
optimal paths, minimizing travel time and avoiding obstacles.
Algorithm Refinement: Refine and test algorithms to ensure they are reliable and work seamlessly together.
4
Cohesive System Development: Combine successful algorithms to create a cohesive system that allows the
robot to autonomously explore and map its environment effectively.
Robot Construction: Assemble all hardware components necessary for the robot, such as sensors, actuators,
and processing units, ensuring they are compatible with the developed software.
Software and Hardware Integration: Merge the developed algorithms with the hardware to finalize the
autonomous exploring and mapping robot system. This involves ensuring that the software can effectively
control the hardware components and respond to real-world conditions.
Chapter 1: This chapter introduces the project, outlining its goals, significance, and scope. It sets the stage
for the entire report by explaining the problem the project aims to address and the expected outcomes.
Chapter 2: This chapter reviews existing research related to autonomous exploring and mapping robots. It
identifies key concepts, summarizes significant studies, and highlights gaps in the current literature.
Chapter 3: This chapter provides an overview of the system architecture, detailing both the hardware and
software components used. It explains how each component fits into the overall design and describes the
inputs, outputs, and processing required for the robot's operation.
Chapter 4: This chapter discusses the algorithms used in the project, focusing on the implementation of
SLAM (Simultaneous Localization and Mapping) and the RRT (Rapidly-exploring Random Tree) algorithm.
It explains how these algorithms are adapted for dynamic environments and the rationale behind their
selection.
Chapter 5: This chapter describes the experimental setup, including the testing environment, configuration,
and limitations. It outlines the process of initializing and testing the SLAM and RRT exploration algorithms
in both simulated and real-world environments.
Chapter 6: In this chapter, the results of the experiments are presented and analyzed. It discusses the
performance of the robot in map generation and navigation, and examines how factors such as motor speed
affect data collection and mapping accuracy.
5
Chapter 7: This chapter explores the robot's contributions to sustainable development, discussing its
potential applications in various fields and its positive impact on the environment.
Chapter 8: The final chapter summarizes the project's key findings, achievements, and challenges. It reflects
on the objectives and provides recommendations for future work.
References: A comprehensive list of the sources cited throughout the report, ensuring proper
acknowledgment and citation of relevant literature.
6
Chapter 2
Literature Review
LiDAR (Light Detection and Ranging): A remote sensing technology that uses laser light to measure
distances and create high-resolution 3D maps of the environment.
SLAM (Simultaneous Localization and Mapping): A computational problem and technique used in
robotics to construct or update a map of an unknown environment while simultaneously keeping track of the
robot's location within it.
Path Planning: The process by which a robot determines an optimal path from its current location to a
desired destination while avoiding obstacles.
Obstacle Avoidance: The capability of a robot to detect and navigate around obstacles in its path to ensure
safe and efficient movement.
Sensor Technologies: Examination of various sensory technologies, with a focus on LiDAR, and their roles
in environmental perception and mapping.
SLAM Algorithms: Analysis of different SLAM methodologies, their strengths, limitations, and
applications in autonomous navigation.
7
Machine Learning in Robotics: Exploration of machine learning techniques employed for path planning
and obstacle avoidance, highlighting recent advancements and practical implementations.
Applications of Autonomous Robots: Review of the potential applications of autonomous exploring and
mapping robots in fields such as search and rescue, environmental monitoring, and industrial automation.
This literature review will primarily focus on recent advancements and research within the last decade,
covering peer-reviewed journal articles, conference papers, and authoritative sources in the field of robotics
and autonomous systems. The review aims to provide a comprehensive understanding of the current state of
technology, identify emerging trends, and propose directions for future research. Special emphasis will be
placed on the integration of LiDAR and SLAM technologies, machine learning approaches for navigation,
and real-world applications that demonstrate the capabilities and benefits of autonomous exploring and
mapping robots.
2.2 Analysis Of Mobile Robot Indoor Mapping Using Gmapping Based Slam
With Different Parameter [13]
Mobile Robots can move freely in the environment using wheels or legs, used for tasks harmful to humans
or in unmapped areas. Mapping Creation of readable maps is done using sensors like cameras, sonar, and
laser sensors. Essential for autonomous navigation and search and rescue operations. It is especially useful in
hazardous environments, repetitive tasks, and exploration of unknown areas. Simultaneous Localization and
Mapping (SLAM) is critical for mobile robots, enabling them to create maps and localize themselves
simultaneously.
The experimental setup involved a mobile robot with Hokuyo Laser Range Finder and netbook, with
wireless communication through a router. The Experiment was conducted in two different lab environments,
assessing the impact of varying robot speed, mapping delay, and particle filter on mapping quality. The
results showed that varying parameters significantly affect mapping accuracy and processing time. Best
results achieved with a robot speed of 0.1333 m/s, mapping delay of 1s, and a particle filter of 30.
In conclusion the study demonstrates the importance of optimizing SLAM parameters for improving
mapping accuracy and efficiency in mobile robots.
The findings can be applied to enhance the performance of mobile robots in various practical applications,
including search and rescue and autonomous navigation [13].
8
2.3 The Sensor Based Random Graph Method For Cooperative Robot
Exploration [8]
PETIS (Programmable Vehicle with Integrated Sensor) aims to create an autonomous robot that can navigate
independently. The research focuses on the robot's sense of sight using the LDS-01 LIDAR sensor due to its
affordability and community support. The LIDAR (Light Detection and Ranging) sensor uses laser pulses to
measure distances, producing high-precision data for mapping environments. Compared to other sensors like
ultrasonic and camera-based systems, LIDAR provides better accuracy and faster processing.
The LDS-01 is a low-cost LIDAR sensor from ROBOTIS, capable of 360-degree scanning with a range of
12 cm to 350 cm, although tests showed it performs optimally between 29.9 cm and 290.7 cm. It connects to
a Raspberry Pi through a USB interface, with ROS (Robot Operating System) providing software support.
The study involved setting up the LIDAR sensor, ensuring proper hardware and software connections, and
processing the data .Data was collected in a controlled environment (5m x 5m classroom), capturing
readings over a 5-minute period. The data from the LIDAR was stored in text format due to its large size,
and processed into Cartesian coordinates using trigonometric functions.
The sensor produced around 537,297 rows of data per 5 minutes, with significant noise observed near the
sensor. Data from the LIDAR was stored in text format due to its large size, and processed into Cartesian
coordinates using trigonometric functions. The sensor produced around 537,297 rows of data per 5 minutes,
with significant noise observed near the sensor.
The LIDAR sensor's readings were mapped to a 2D plane, highlighting its effective range and limitations.
Statistical analysis of the data showed a normal distribution with specific quartiles and bounds.
Noise in the data needs to be addressed using filtering algorithms like the Extended Kalman Filter
(EKF).Future research will involve implementing SLAM (Simultaneous Localization and Mapping) to
improve environmental sensing and mapping accuracy.
The study successfully demonstrated the use of the LDS-01 LIDAR sensor for 2D mapping and boundary
detection. The sensor's performance was slightly below its specified range, suggesting the need for further
calibration and algorithmic improvements [8].
9
In a particular environment Evidence grids are used to represent the environment, with each cell storing the
probability of occupancy. The robot plans and executes paths to reach these detected frontiers. To improve
map accuracy, sonar data is combined with laser data to reduce the impact of specular reflections.
The paper presents experimental results obtained from a Nomad 200 mobile robot equipped with a laser
rangefinder, sonar sensors, and infrared sensors. The robot explored two real-world office environments
filled with various obstacles like chairs, desks, tables, and boxes.
The Frontier-based exploration method clearly showed advantage in its ability to handle both open and
cluttered spaces and its capability to deal with walls and obstacles in arbitrary orientations. It proved to
provide efficient exploration by focusing on information-rich frontiers [6].
Traditionally, computing the smallest RFIS has been computationally challenging. This method leverages
path planning algorithms, specifically the A* algorithm, to approximate the RFIS boundary. The state space
is discretized, and a cost function is employed to guide the path planning process. The algorithm iteratively
refines the RFIS approximation.
While effective for two-dimensional systems with additive disturbances, the method's applicability to higher
dimensions and different disturbance types remains an area for future exploration.
The proposed method is limited to two-dimensional systems with additive disturbances. The accuracy of the
approximation depends on the discretization level and other parameters [5].
10
The system successfully performed over 60 km of autonomous driving in various conditions without system
failures and it demonstrated robustness to map changes by successfully navigating with both current and
outdated maps. After comparison with RTK-GPS ground truth showed good localization accuracy, with
higher variance in dynamic or challenging environments (e.g., bushland).
Furthermore the use of a pyramidal multi-level surface map improved localization accuracy and efficiency.
The system's independence from GPS, pattern recognition, and artificial landmarks makes it suitable for
various outdoor environments.
The paper evaluates the C-LOC algorithm in a heterogeneous environment consisting of:
Densely built areas: The algorithm demonstrated reliable performance in areas with buildings, suggesting
that the 3D LiDAR and SLAM approach is well-suited for such environments.
Sparsely built areas and bushland: While the algorithm still functioned, its performance was less robust in
these areas due to the dynamic nature of vegetation and the potential for occlusion. This is reflected in the
higher localization variance observed in these regions.
Parking lots: The presence of moving cars introduced additional challenges, but the system was still able to
perform adequately.
The authors tested the system's resilience to map changes by using both current and outdated maps. The
vehicle successfully completed autonomous runs using a map that was 1.5 months old, demonstrating the
system's robustness to environmental changes.
While localization was still possible with the outdated map, the performance was slightly degraded, as
indicated by increased localization variance. This is expected due to changes in the environment over time.
Overall, the experimental results highlight the C-LOC algorithm's ability to handle diverse environments
while demonstrating the importance of map freshness for optimal performance [11].
Key Findings: The chosen parameters, including robot speed, mapping delay, and particle filter size,
significantly affect both the accuracy and time taken for map creation. This ensures that a balance between
robot speed, mapping delay, and particle filter size is crucial for achieving optimal mapping results. A faster
11
robot speed with a shorter mapping delay can reduce mapping time but may compromise accuracy due to
increased computational load and potential data loss.
Higher robot speeds and shorter mapping delays generally result in faster mapping times but can negatively
impact map accuracy. Conversely, lower speeds and longer delays improve accuracy but increase mapping
time. The complexity of the environment (size, feature density) also influences mapping performance.
Larger environments with fewer features require more time to map accurately.
While the authors’ focuses solely on G-Mapping and parameter optimization, it's essential to consider other
SLAM methods for a comprehensive understanding. A direct comparison within the paper is lacking, but
based on general knowledge and other research, we can outline the strengths and weaknesses of G-Mapping
relative to other common SLAM techniques.
G-Mapping, based on Rao-Blackwellized Particle Filters (RBPF), is a popular choice for 2D SLAM due to
its relative simplicity and computational efficiency. However, it has limitations compared to other methods.
G-Mapping is efficient and has the ability to handle dynamic environments, and it is relatively easy to
implement. However, G-Mapping Prone to particle depletion and its accuracy can degrade in large-scale
environments or with loop closures. Also it is limited to 2D mapping only [1].
12
Fast SLAM Efficient, dynamic Particle depletion,
environments accuracy in large-scale
environments
13
Chapter 3
System Design
14
The block diagram illustrates the comprehensive system design and workflow for developing an autonomous
exploring and mapping robot. Here is a detailed explanation of each component and its role in the overall
system:
Install Ubuntu: The process starts with the installation of the Ubuntu operating system, which is a
prerequisite for running ROS.
ROS Installation: ROS is installed on the Ubuntu operating system. ROS provides the necessary
tools and libraries for robot software development [12].
RRT Algorithm: The Rapidly-exploring Random Tree (RRT) algorithm is used for path planning
and navigation [7]. It helps in finding an efficient path for the robot to explore unknown
environments.
Cartographer Algorithm: This algorithm is used for SLAM, which involves building a map of the
environment while simultaneously keeping track of the robot’s position within that map.
Gmapping Algorithm: Another SLAM algorithm used for creating and updating the map in real-
time [14].
Gazebo Simulator: Gazebo is a simulation environment in ROS where the robot’s algorithms can be
tested virtually before implementing them on actual hardware [12]. It helps in evaluating the
performance of navigation and mapping algorithms.
v. Algorithm Testing
Mapping Accuracy: The accuracy of the maps generated by the SLAM algorithms is tested in the
simulation environment.
Exploration and Coverage: The robot's ability to explore and cover the environment efficiently is
tested.
Path Planning Performance: The performance of the path planning algorithms (like RRT) is
evaluated to ensure they find the optimal path.
15
vi. Decision Making
Accuracy Check: After testing, if the algorithms are accurate, they are combined. If not, further
refinement and testing are done.
Combine Algorithms: The successful algorithms are combined to create a robust system for
exploration and mapping.
Hardware Components:
TurtleBot3: The chosen robotic platform that includes all necessary components.
LiDAR: A sensor used for distance measurement and environment scanning.
OpenCR: An embedded controller board used to interface with sensors and actuators.
DYNAMIXEL Wheels: Actuators used for robot movement.
Raspberry Pi: A microcontroller that serves as the central processing unit.
Hardware Assembly: All hardware components are assembled to form the physical robot.
Software/Hardware Integration: The combined algorithms and software are integrated with the
hardware components to create the final autonomous exploring and mapping robot system [12].
This diagram provides a clear flow of how the autonomous exploring and mapping robot system is designed,
from initial software setup to algorithm development and testing, and finally to hardware integration and
deployment. Each step ensures that the robot is capable of accurately mapping and navigating its
environment autonomously.
16
operation while ensuring safety and efficiency. The processing unit, often a single-board computer like the
Raspberry Pi or an embedded system, handles sensor data processing, decision-making algorithms, and
communication with motor controllers [10]. It's essential to balance processing power and energy
consumption to maintain efficiency. Additionally, the robot should have a robust communication module for
remote monitoring and control. Overall, the hardware components must be selected and integrated to ensure
reliability, performance, and scalability. Proper selection and configuration of these components form the
foundation for successful autonomous exploration and mapping.
3.2.1 Turtlebot3
Turtlebot3 (Figure 3.2) is a low-cost, personal robot kit with open-source software, making it an ideal
platform for research, education, and product prototyping. It is designed to support the Robot Operating
System (ROS), [12] which provides libraries and tools to help software developers create robot applications.
Key features of the Turtlebot3 include:
Modular Design: The Turtlebot3's modular design allows users to easily customize and upgrade the robot
with different sensors, actuators, and computing platforms.
Compact Size: Its small footprint makes it suitable for navigating through tight spaces and performing tasks
in indoor environments.
Open Source: Both the hardware and software of the Turtlebot3 are open source, allowing for extensive
customization and community-driven improvements.
Scalability: The platform supports various configurations, from basic models suitable for beginners to more
advanced setups for complex research projects.
17
3.2.2 LiDAR
LIDAR (Light Detection and Ranging) is a remote sensing technology that uses laser light to measure
distances to objects. It works by emitting laser pulses, which then bounce back from objects to the sensor as
shown in the Figure 3.3. By measuring the time it takes for the pulses to return, the system calculates the
distance to the object.
Working Principle
Emission: The LIDAR sensor emits a laser pulse towards the target.
Reflection: The laser pulse hits an object and reflects back to the sensor.
Calculation: The system calculates the distance to the object based on the time it took for the pulse to return,
using the formula:
18
LiDAR
The LiDAR (Figure 3.4) is a compact and cost-effective distance sensor that provides accurate and reliable
distance measurements. It is widely used in robotics for navigation, obstacle detection, and mapping. Key
features of the LiDAR include:
High Precision: It offers accurate distance measurements with a range of up to 8 meters and an accuracy of
±6 cm.
Compact and Lightweight: The small size and low weight make it easy to integrate into various robotic
platforms.
Fast Response Time: It can provide up to 250 measurements per second, allowing for real-time obstacle
detection and avoidance.
Low Power Consumption: The sensor is energy-efficient, making it suitable for battery-powered
applications.
3.2.3 Raspberry Pi 4
The Raspberry Pi 4 (Figure 3.5) is a popular choice for the processing unit in many robotic systems,
including the Turtlebot3. It offers a good balance between performance, power consumption, and cost. Key
features of the Raspberry Pi 4 include:
Processor: The Raspberry Pi 4 is equipped with a quad-core ARM Cortex-A72 processor, running at 1.5
GHz. This provides sufficient computational power for most robotic applications.
19
Memory: It comes with multiple RAM options (2GB, 4GB, or 8GB), allowing users to choose based on
their performance requirements.
Connectivity: The Raspberry Pi 4 offers a range of connectivity options, including USB 3.0, Ethernet, Wi-
Fi, and Bluetooth, which are essential for interfacing with sensors, actuators, and other peripherals.
Expandability: It has multiple GPIO pins and interfaces (SPI, I2C, UART), enabling easy integration with
various sensors and modules.
Software Support: It runs a variety of operating systems, including Raspbian (Raspberry Pi OS) and
Ubuntu, both of which support ROS, making it a versatile and developer-friendly platform.
Speed Control: They provide precise control over the speed of the motors, which is crucial for tasks such as
navigation and path following.
Direction Control: Motor controllers allow for the easy reversal of motor direction, enabling the robot to
move forward, backward, and turn.
Torque Regulation: They help manage the torque delivered to the motors, which is important for
maintaining stability and handling various terrains.
Integration with ROS: The motor controllers used in Turtlebot3 are designed to integrate seamlessly with
the ROS framework, facilitating easy communication and control through ROS nodes and topics.
20
3.3 Software Design
Simulation environment is crucial in the development and testing of robotic systems. They allow for the safe
and cost-effective evaluation of algorithms and hardware configurations before deploying them in the real
world. The primary tools used in the simulation environment for our robotic system are:
3.3.1 ROS
The Robot Operating System (ROS) is a flexible framework for writing robot software. It provides a
collection of tools, libraries, and conventions aimed at simplifying the task of creating complex and robust
robot behavior across a wide variety of robotic platforms [9]. ROS is structured in a modular fashion,
allowing for the integration of different packages and components, which can be reused and shared across
various projects. Its communication infrastructure is based on nodes, topics, and services, enabling
distributed processing and seamless integration of sensors, actuators, and algorithms. ROS also includes
simulation capabilities and interfaces to several hardware abstraction layers, making it an essential tool for
both academic research and industrial applications.
3.3.2 Gazebo
Gazebo is a powerful open-source robotics simulator that integrates with ROS to provide a rich
development environment for testing and developing algorithms, designing robots, and performing
regression testing using realistic scenarios as shown in the Figure 3.6. It offers a high-fidelity physics
engine, a rich library of robot models and environments, and robust sensor simulation capabilities. Gazebo
enables users to simulate populations of robots (Figure 3.7) in complex indoor and outdoor environments,
with accurate rendering and dynamic interactions. The ability to model the physical properties of robots
and environments, including friction, gravity, and lighting, allows for detailed and realistic testing before
deployment in real-world scenarios.
3.3.3 Rviz
Rviz, short for ROS visualization, is a 3D visualization tool for ROS applications. It allows developers to
visualize sensor data, state information, and the robot’s environment in real-time. Rviz supports various
types of data, including point clouds, laser scans, occupancy grids, and transforms, making it an invaluable
tool for debugging and development [4]. Users can interact with the visualization by adding, removing,
and configuring displays for different data types, which helps in understanding the robot's perception and
actions within the environment. Rviz's flexibility and ease of use make it a crucial component in the
development and testing phases of robotic systems, aiding in the rapid identification and resolution of
issues.
22
Figure 3.8: Flow Chart of Autonomous Exploration Algorithm
i. Start
Global Search: Checks if the exploration index is 0 (no more beneficial goals found globally).
Yes: If the index is 0, the algorithm returns the finish mapping condition.
No: If the index is not 0, it stores the actual goal and exploration index, and continues the
process.
Local Search: Checks if the exploration index is 0 (no more beneficial goals found locally).
Yes: If the index is 0, the algorithm switches to global search and returns to the start.
No: If the index is not 0, it stores the actual goal and exploration index, and continues the
process.
The algorithm stores the current goal and its exploration index, ensuring that the robot remembers its
objectives and progress.
When the exploration index is 0 during a global search, the algorithm recognizes that the mapping
process is complete and returns the finish mapping condition.
If no beneficial goals are found in the local search, the algorithm switches to a global search and
starts the process again.
ix. Exit
The process ends at the "EXIT" node once the finish mapping condition is met.
The flowchart describes an iterative process where the robot switches between local and global searches to
efficiently map the environment. By evaluating the exploration index, the algorithm ensures the robot always
targets the most beneficial areas for exploration until the entire environment is mapped. This adaptive
strategy allows the robot to balance thorough exploration with efficient mapping.
24
3.5 Inputs, Outputs, and Processing
This section details the inputs and outputs for each major component of the system, as well as the processing
steps involved. Understanding these elements is crucial for the seamless integration and functionality of the
robot.
3.5.1 Inputs
i. Sensor Data:
LiDAR Scans: Distance measurements used for mapping and obstacle detection [2].
Initial Configuration Settings: Parameters set by the user to initialize and configure the
robot.
3.5.2 Outputs
i. Actuator Commands:
Motor Speed and Direction Signals: Control signals sent to the motors to achieve the
desired movement.
LiDAR Data Processing: Convert raw LiDAR data into a point cloud, filter out noise, and
segment the data to identify obstacles and features [2].
25
Path Planning Algorithms: Implement algorithms such as A* and Dijkstra’s to find the
shortest path in a known environment, and RRT for more complex, unknown terrains [5].
RRT: Randomly samples the environment to build a tree of possible paths, suitable
for high-dimensional spaces.
Dynamic Obstacle Avoidance: Develop reactive planning strategies that allow the robot to
adjust its path in real-time, avoiding moving and static obstacles.
iii. SLAM:
Gmapping: Uses particle filters to estimate the robot’s pose and build a map incrementally
[13].
PID Control: Adjust motor commands based on the error between the desired and actual
states, ensuring smooth and precise movement.
26
Chapter 4
Algorithm Implementation
Illustrated in the Figure 4.1, an exploration strategy for a robot is depicted. The robot employs a laser
scanner to detect obstacles and plan its path. A path planner is responsible for laying out a safe route for the
robot based on the information received from the laser scanner. The strategy also incorporates a process
called SLAM (Simultaneous Localization and Mapping), which updates a map of the environment as the
robot explores [1].
The RRT algorithm explores the map representation to identify unexplored areas (frontiers).
This helps the robot prioritize where to move next to gather more sensor data and expand the map.
Based on the frontiers identified by RRT, a target location within an unexplored area is chosen.
A path planning algorithm determines how the robot should navigate to the target location.
27
iv. SLAM in the Loop:
While the image emphasizes RRT-based exploration, throughout this process, SLAM is likely
running in thebackground.
As the robot moves and collects sensor data, SLAM incorporates this data to update the map and
refine therobot's location estimate.
Rao-Blackwellized particle filters enable GMapping to maintain a set of particles representing possible
robot poses and update the map incrementally. This approach provides a balance between accuracy and
computational efficiency, making it suitable for real-time operation in dynamic environments.
As you see in the Figure 4.2: The Working Principle of Gmapping alogirthm, is described below:
i. Setup:
You'll need a ROS (Robot Operating System) environment configured with the gmapping and rviz
packages.
28
Ensure your LiDAR driver is set up to publish scan data on a specific ROS topic (e.g., /scan).
Your robot's odometry information should also be published on another topic (e.g., /odom).
A launch file is typically used to launch the gmapping node along with any necessary configuration
parameters.This launch file defines topics for subscribing to sensor data and publishing the map.
iii.Data Flow:
The gmapping node subscribes to the LiDAR scan topic (/scan). It also subscribes to the odometry
topic (/odom) for robot movement information.
GMapping uses the particle filter approach as described earlier. It considers the robot's pose
(location andorientation) as a set of particles.
GMapping continuously updates the map based on the latest scan data and odometry information.
The particle filter assigns weights to each particle based on how well its corresponding map
aligns with thecurrent scan.
Particles with better-fitting maps receive higher weights, influencing the next iteration.
v. RVIZ Visualization:
LaserScan: This shows the live scan data points from the LiDAR in real-time.
TF (Robot Pose): This displays the estimated robot pose (position and orientation)
based on theGMapping results.
29
Figure 4.2: Flowchart of Working Principle of Gmapping Algorithm
As illustrated in Figure 4.3, a robotic exploration system utilizes two key components for navigating its
environment. The Local Frontier Detector identifies nearby obstacles and creates a "local frontier"
essentially a map of the immediately explorable space. This information is then fed into the Robot Task
Allocator, which also considers a global map (not shown). The Robot Task Allocator then assigns a specific
task to the robot, such as exploring a designated section of the local frontier. Notably, the system updates the
map in real-time as the robot moves.
Initialization: Start with an initial tree T containing the root node at the start position.
30
Iteration:
iv. If is in a valid configuration (i.e., not in collision with obstacles), add it to the tree T.
Termination: Repeat the iteration until the tree reaches the goal region or the maximum number of
iterations is reached.
i. Dynamic-Domain RRT
To deal with dynamic environments where obstacles can move, the dynamic-domain RRT modifies
thesampling domain to consider the changing environment.
Dynamic Sampling Domain: The algorithm adjusts the sampling domain based on the positions
of moving obstacles. The area around the moving obstacles is avoided, ensuring that the
generated nodes do not collide with these obstacles.
RRT* is an optimal variant of RRT that improves the path quality by rewiring the tree. For
dynamicobstacles, continuous re-planning can be incorporated.
Continuous Re-planning: The algorithm continuously checks for obstacle movements and re-
plans the path if an obstacle is detected in the current path. The tree can be restructured by
removing nodes that are no longer valid and regrowing the tree to find a new path.
31
4.2.3 Implementation considerations
Real-Time Updates: The environment must be constantly monitored, and the tree must be updated in
real-time to reflect the latest positions of dynamic obstacles.
Collision Checking: Efficient collision checking mechanisms are necessary to ensure that new nodes do
notlead to collisions with moving obstacles.
Computational Efficiency: Adaptations must be computationally efficient to ensure that the planning
andre-planning processes are fast enough for real-time applications.
As shown in the Figure 4.4, a Rapidly Exploring Random Tree (RRT) is a tree-like structure employed for
robot motion planning. Each node in the tree represents a possible configuration, or pose, of the robot within
its environment [1]. The RRT is constructed iteratively by selecting a random point in the workspace and
attempting to extend the tree towards it. This extension prioritizes small steps to ensure smooth and feasible
motions, while also cleverly avoiding obstacles [1]. Through repeated iterations, the RRT expands, creating
a network of potential paths for the robot to explore its surroundings.
Adapting the RRT algorithm for dynamic obstacles involves various strategies that consider the
movement and velocities of obstacles [12]. These adaptations ensure that the algorithm can plan safe and
efficient paths in environments where obstacles are not static, making RRT a versatile and robust solution
for dynamic path planning challenges.
32
Chapter 5
Experimental Setup
As shown in the Figure 5.1 a 3D modeling and simulation environment commonly used in robotics and
autonomous systems research. Central to the scene is a grid layout populated with small white spherical
objects arranged systematically, forming a precise grid pattern. Encircling this central grid is a black
hexagonal structure, with green cylindrical pillars positioned at each vertex, likely serving as obstacles or
landmarks for the mapping process. The background grid and the top menu bar with various icons suggest
the use of sophisticated simulation software, possibly Gazebo, ROS, or Unity, utilized for testing and
visualization purposes. The left sidebar presents a hierarchical tree structure with entries such as "world,"
"quadrotor," and "ground_plane," indicating the various components and objects within the scene. A
property window in the bottom left corner details attributes of the selected objects, enhancing the user's
ability to manipulate and analyze the environment. Labeled as "Figure 5.1”. Mapping Environment," the
image is part of a broader document or presentation focused on mapping environments for robotic
applications. This setup is crucial for evaluating the navigation and interaction capabilities of robots within a
controlled, virtual setting.
33
As shown in the Figure 5.2 the TurtleBot3 robot is actively generating a map of its environment using a
LIDAR sensor within the Gazebo simulation software. The robot is positioned at the center of a structured
layout featuring white spherical objects arranged in a grid pattern. Surrounding this central area is a black
hexagonal structure with green cylindrical pillars at each vertex.
The blue rays emanating from the central point, where the TurtleBot3 is located, represent the LIDAR
sensor's scanning process. These rays illustrate how the sensor emits laser beams to detect and measure the
distance to nearby objects. The reflections of these laser beams allow the robot to gather spatial data about
its surroundings, which it uses to create a detailed map of the environment.
As the LIDAR sensor scans, it collects information about the positions and distances of the surrounding
objects, including the white spheres and the green pillars. This data is crucial for the robot's navigation and
mapping algorithms, enabling it to build an accurate representation of the space it is operating in. By
continuously scanning and updating the map, the TurtleBot3 can plan its movements, avoid obstacles, and
navigate efficiently within the simulated environment.
As shown in the Figure 5.3, TurtleBot3 robot effectively generates a map of its environment using a LIDAR
sensor. The blue rays emanating from the robot represent the LIDAR scanning process, where the sensor
emits laser beams to measure distances and detect obstacles in the surroundings. These reflections are then
used to build a real-time map visualized as a hexagonal grid with green lines and dots. The green lines likely
34
correspond to walls or obstacles the robot has identified, while the dots may represent individual data points
collected by the LIDAR sensor during its scan. This visual representation allows for a clear understanding of
the robot's environment and its progress in map creation.
Map Design: Custom maps were designed using image editing tools like GIMP or Photoshop.
These maps specified the layout of the environment, including walls, obstacles, and open spaces.
Map Format: The designed maps were saved in PGM (Portable Graymap) format, with
corresponding YAML files specifying the map parameters such as resolution, origin, and
occupied/free thresholds.
Environment Customization: The Gazebo simulation environment was modified to reflect the
custom-designed maps. This involved placing models of walls, obstacles, and other
environmental features in Gazebo to match the custom maps.
Mapping with RViz: The TurtleBot3 was used in the Gazebo simulation environment to
generate ground truth maps. RViz was utilized to visualize the robot's sensor data and map the
environment in real-time.
GMapping: The gmapping package was used to process LIDAR data from the sensor, creating a
2D occupancy grid map. The real-time map was visualized in RViz, allowing us to monitor the
mapping process.
Saving the Map: Once the mapping was complete, the generated map was saved using the
map_saver utility from the map_server package. This saved the map as a PGM file and generated
the corresponding YAML file.
35
Figure 5.3: Environment Mapping by Turtlebot3
i. turtlebot3:
Description: The turtlebot3 package includes URDF (Unified Robot Description Format) files,
launch files, and configuration files for the TurtleBot3.
Nodes Used: Nodes for controlling and simulating the TurtleBot3, including turtlebot3_bringup
which initializes the robot's sensors and actuators.
ii. turtlebot3_gazebo:
Description: This package integrates the TurtleBot3 with the Gazebo simulator, providing
necessary launch files and world files.
Nodes Used: Nodes for launching the Gazebo simulation environment, such as gazebo and
spawn_model.
iii. Gmapping:
36
Nodes Used: The slam_gmapping node processes LIDAR data to generate a 2D occupancy grid
map in real-time, which can be visualized in RViz.
iv. RRT_exploration:
Description: This package implements the RRT (Rapidly-exploring Random Tree) algorithm for
autonomous exploration.
Nodes Used: Custom nodes for executing the RRT exploration algorithm, integrating with the
SLAM map to guide the robot's path planning and navigation.
This launch file addresses the need to set up a simulated environment in Gazebo for the TurtleBot3 robot,
perform SLAM (Simultaneous Localization and Mapping) using the G-Mapping package, and visualize the
SLAM process with RViz. The goal is to enable the robot to map its environment and localize itself within
the map.
5.3.3 Pseudocode
i. Initialize SLAM:
37
delta set to 0.05.
linearUpdate set to 0.1.
angularUpdate set to 0.1.
5.3.4 Code
<launch>
<!-- Node for SLAM using the gmapping package -->
<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
<!-- Set the frame in which the robot base is located -->
<param name="base_frame" value="base_footprint"/>
<!-- Set the frame for odometry data -->
<param name="odom_frame" value="odom"/>
<!-- Set the frame for the generated map -->
<param name="map_frame" value="map"/>
<!-- Topic for laser scan data -->
<param name="scan_topic" value="scan"/>
<!-- SLAM parameter: grid map resolution in meters per cell -->
<param name="delta" value="0.05"/>
<!-- SLAM parameter: update interval in meters for linear movement -->
<param name="linearUpdate" value="0.1"/>
<!-- SLAM parameter: update interval in radians for angular movement -->
<param name="angularUpdate" value="0.1"/>
</node>
The launch file slam_gmapping.launch sets up the SLAM (Simultaneous Localization and Mapping) process
using the GMapping algorithm. It aims to create a 2D map of the environment using LIDAR data and
odometry information from a robot.
38
Configures parameters such as base_frame, odom_frame, and map_frame for defining robot
frame transformations and map reference.
Specifies the scan_topic where LIDAR data is published.
Sets parameters like delta, linearUpdate, and angularUpdate to control map resolution and
update rates.
5.4.3 Pseudocode
base_frame: Set to "base_footprint" - Defines the frame in which the robot's base is
located.
odom_frame: Set to "odom" - Specifies the frame for publishing odometry data.
map_frame: Set to "map" - Defines the frame for the generated map.
scan_topic: Set to "scan" - Specifies the topic where laser scan data is published.
delta: Set to 0.05 - Grid map resolution in meters per cell.
linearUpdate: Set to 0.1 - Update interval in meters for linear movement.
angularUpdate: Set to 0.1 - Update interval in radians for angular movement.
39
5.4.4 Code
<launch>
<!-- Node for SLAM using the gmapping package -->
<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
<!-- Set the frame in which the robot base is located -->
<param name="base_frame" value="base_footprint"/>
<!-- Set the frame for odometry data -->
<param name="odom_frame" value="odom"/>
<!-- Set the frame for the generated map -->
<param name="map_frame" value="map"/>
<!-- Topic for laser scan data -->
<param name="scan_topic" value="scan"/>
<!-- SLAM parameter: grid map resolution in meters per cell -->
<param name="delta" value="0.05"/>
<!-- SLAM parameter: update interval in meters for linear movement -->
<param name="linearUpdate" value="0.1"/>
<!-- SLAM parameter: update interval in radians for angular movement -->
<param name="angularUpdate" value="0.1"/>
</node>
The launch file rrt_exploration.launch initializes the RRT (Rapidly-exploring Random Tree) exploration node. It
aims to autonomously explore the environment, updating the map generated by SLAM, and controlling robot
movements based on exploration parameters.
Configures parameters such as map_topic and scan_topic to integrate with the SLAM output and
LIDAR data.
Defines base_frame for robot base localization.
Sets exploration_radius to specify the radius for exploration around the robot.
Sets goal_tolerance to determine the tolerance for reaching exploration goals.
40
5.5.3 Pseudocode
map_topic: Set to "/map" - Specifies the topic where the map generated by SLAM is published.
scan_topic: Set to "/scan" - Specifies the topic where laser scan data is published.
base_frame: Set to "base_footprint" - Defines the reference frame of the robot's base.
exploration_radius: Set to 10.0 - Defines the radius for exploration around the robot.
goal_tolerance: Set to 0.5 - Defines the tolerance for reaching exploration goals.
5.5.4 Code
<launch>
<!-- Node for RRT exploration -->
<node name="rrt_exploration" pkg="rrt_exploration" type="exploration_node" output="screen">
<!-- Topic for the map generated by SLAM -->
<param name="map_topic" value="/map"/>
<!-- Topic for laser scan data -->
<param name="scan_topic" value="/scan"/>
<!-- Frame of the robot base -->
<param name="base_frame" value="base_footprint"/>
<!-- Radius for exploration around the robot -->
<param name="exploration_radius" value="10.0"/>
<!-- Tolerance for goal reaching -->
<param name="goal_tolerance" value="0.5"/>
</node>
In our project, we effectively utilized ROS and various packages to achieve successful simulation and
mapping of an environment using SLAM and RRT algorithms. Using RViz, we visualized real-time sensor
data and map generation, enhancing our ability to monitor and evaluate the mapping process. The detailed
41
configuration and launch files ensured a robust simulation setup that accurately demonstrated the capabilities
of our autonomous exploring and mapping robot. While we faced challenges with the hardware
implementation, the simulation results effectively showcase our project's objectives and achievements.
i. Map Acquisition: This critical step likely happened before the scene depicted. It involves creating a
map of the environment the TurtleBot3 will navigate. Commonly, robots use sensors like LIDAR
(Light Detection and Ranging) that emit light pulses to measure distances and build a detailed map.
In the image, the map itself is a two-dimensional representation of the surroundings, likely generated
by this LIDAR data.
ii. Localization: Once the map is established, the TurtleBot3 needs to determine its location within that
map. This ongoing process is called localization. Sensors like odometry (wheel encoders) provide
estimates of how far the robot has travelled, but these can accumulate errors. To improve accuracy,
the TurtleBot3 might use sensor data from its LIDAR to constantly compare its surroundings with the
map, refining its position.
iii. Path Planning: With a map and its location established, the robot can now plan a path to its
destination point. Path planning algorithms consider factors like the robot's size, obstacles, and the
most efficient route. These algorithms take the map data and the target destination into account to
generate a series of waypoints (intermediate goals) for the robot to follow.
iv. Navigation and Control: Armed with a planned path, the TurtleBot3 translates those waypoints into
motor commands. This involves controlling the robot's wheels to move it towards the next waypoint
while staying within the designated path. Sensors like LIDAR or cameras continuously provide
feedback about the environment, allowing the robot to adjust its course if it encounters obstacles or
minor deviations.
v. Re-planning (if necessary): The environment might not be static. The image depicts a simulation,
but in real-world scenarios, unforeseen obstacles or changes might occur. The TurtleBot3's sensors
would detect these and trigger re-planning of the path if necessary. This ensures the robot can adapt
to dynamic situations and still reach its goal.
Software Frameworks: These complex tasks are often facilitated by software frameworks like Robot
Operating System (ROS). ROS provides tools and libraries that help developers create programs to manage
sensors, perform localization and path planning, and control the robot's movement.
42
Figure 5.4: Navigation of Mapped Environment
The meticulously crafted configuration and launch files established a reliable simulation setup, accurately
reflecting the functionalities of the autonomous exploring and mapping robot. This environment facilitated
realistic testing, controlled experimentation, bug detection, and refinement. Despite encountering challenges
with hardware implementation, the successful simulation results served as a strong validation of the project's
core objectives and achievements. The simulation effectively demonstrated the robot's capability to perceive
its surroundings, localize itself, plan its path, and navigate autonomously within the simulated environment.
43
For the simplest usage, only pins 1-4 are required, which account for VCC, GND, and UART pins
(RXD/TXD). The other two pins can be explored in the datasheet for the LiDAR, for advanced users
interested in interrupts, I2C communication, and data readiness flags.
The TF-Luna communicates with the Raspberry Pi via the Universal Asynchronous Receiver-Transmitter
(UART) serial port as shown in the Figure 5.6. The port that we will be using is the mini UART, which
correlates to GPIO pins 14/15 (physical pins 8/10). First, the port needs to be enabled via the boot
configuration file on the Raspberry Pi.
The behavior of the LiDAR can be tested by visualizing the ranging and signal strength data in near real-
time (Figure 5.7) using matplotlib in Python. A visualization and handling of the real-time incoming data is
given in the pseudo code below:
44
Figure 5.7: Flow Chart of Real Time Ranging Visualization
45
BEGIN
FUNCTION read_tfluna_data:
WHILE True:
SET bytes_to_read TO 9
ser.reset_input_buffer()
FUNCTION set_samp_rate(samp_rate=100):
ser.write(samp_rate_packet)
RETURN
FUNCTION get_version:
ser.write(info_packet)
SET bytes_to_read TO 30
46
IF counter > bytes_to_read:
ser.reset_input_buffer()
IF bytes_data[0] == 0x5a:
RETURN
ELSE:
ser.write(info_packet)
FUNCTION set_baudrate(baud_indx=4):
SET baud_hex TO [[0x80, 0x25, 0x00], [0x00, 0x4b, 0x00], [0x00, 0x96, 0x00],
prev_ser.write(info_packet)
prev_ser.close()
IF ser_new.isOpen() == False:
ser_new.open()
SET bytes_to_read TO 8
47
ser_new.reset_input_buffer()
IF bytes_data[0] == 0x5a:
baud_hex[ii][2] == bytes_data[5]]
RETURN ser_new
ELSE:
ser_new.write(info_packet)
# Configurations
SET baudrates TO [9600, 19200, 38400, 57600, 115200, 230400, 460800, 921600]
SET prev_indx TO 4
IF prev_ser.isOpen() == False:
prev_ser.open()
SET baud_indx TO 4
CALL set_samp_rate(100)
CALL get_version()
FUNCTION plotter:
fig.subplots_adjust(wspace=0.05)
axs[0].set_xlabel('Sample')
48
axs[0].set_ylabel('Amplitude')
axs[0].set_xlim([0.0, plot_pts])
axs[0].set_ylim([0.0, 8.0])
axs[1].set_xlim([-1.0, 1.0])
axs[1].set_xticks([])
axs[1].set_ylim([1.0, 2**16])
axs[1].yaxis.tick_right()
axs[1].yaxis.set_label_position('right')
axs[1].set_yscale('log')
fig.canvas.draw()
fig.show()
FUNCTION plot_updater:
fig.canvas.restore_region(ax1_bgnd)
fig.canvas.restore_region(ax2_bgnd)
line1.set_ydata(dist_array)
bar1.set_height(strength)
bar1.set_color(plt.cm.Set1(0))
ELSE:
bar1.set_color(plt.cm.Set1(2))
axs[0].draw_artist(line1)
axs[1].draw_artist(bar1)
fig.canvas.blit(axs[0].bbox)
49
fig.canvas.blit(axs[1].bbox)
fig.canvas.flush_events()
SET dist_array TO []
WHILE True:
dist_array.append(distance)
dist_array = dist_array[1:]
ser.close()
END
In the above script, the serial port is being accessed for serial0 at a baudrate of 115200. The serial port is first
opened before reading or writing any commands [ser.Open()]. Then in the test script, 9-bytes are read and
the first two bytes are checked for the correct data format (0x59 and 0x59 are cited as the data return in the
product manual). The code uses blitting to speed up the visualization update. The distance detection is
plotted on a time-series graph, while the signal strength is given in the form of a bar chart. This allows the
user to see if a given object or scan routine is outputting poor signal strength. If an error arises - the wiring
should be checked first.
5.8 Limitations
Range and Accuracy: The LIDAR sensor has a limited range and lower resolution compared to more
advanced LIDAR systems. This restricted its ability to detect and accurately map distant objects or fine
details within the environment, resulting in incomplete or inaccurate map data.
50
Field of View: Although mounting the sensor on a servo motor increased its coverage, the field of view
remained constrained. The rotational speed and angle of the servo motor introduced additional variables that
could affect the consistency and reliability of the data points collected.
Sampling Rate: The sensor's sampling rate was not sufficient for capturing high-frequency environmental
changes, leading to lag in the real-time mapping process. Rapidly moving objects or dynamic changes in the
environment were not accurately represented in the generated maps.
Environmental Interference: The sensor's performance was susceptible to interference from ambient light
and reflective surfaces, which could distort the distance measurements and further degrade the quality of the
mapping output.
Lack of ROS Drivers for LIDAR: One of the significant limitations encountered was the absence of
readily available ROS drivers for the LIDAR sensor. This lack of support necessitated the
development of custom scripts and workarounds to interface the sensor with ROS, which proved to
be challenging and time-consuming.
Raspberry Pi Processing Power: Although the Raspberry Pi 4 is a powerful single board computer, running
complex SLAM algorithms and handling real-time sensor data processing put a significant strain on its
computational resources. This resulted in slower processing times and potential lag in map updates and
navigation decisions.
Real-Time Performance: Ensuring real-time performance in a hardware setup with limited resources proved
to be a challenge, especially when trying to achieve the same level of performance observed in the simulation.
Custom Software Development: The necessity to develop custom drivers and integrate them into
the ROS ecosystem increased the complexity of the implementation. This added to the project
timeline and required in-depth knowledge of both hardware interfacing and software development.
Reliability and Robustness:
System Reliability: The custom setup, while functional, lacked the reliability and robustness of more
established and tested hardware solutions. This affected the overall stability of the autonomous
exploring and mapping system.
Error Handling: Error handling and recovery mechanisms had to be implemented manually, making
the system more prone to unexpected failures during operation.
51
5.8.4 Reliability and Robustness
System Reliability: The custom setup, while functional, lacked the reliability and robustness of more
established and tested hardware solutions. This affected the overall stability of the autonomous
exploring and mapping system.
Error Handling: Error handling and recovery mechanisms had to be implemented manually, making
the system more prone to unexpected failures during operation.
52
Chapter 6
Results and Analysis
The final results demonstrates the effective application G-Mapping utilizing a LIDAR Laser Scanner for 2D
mapping and simultaneous localization in diverse environments. By leveraging the Robot Operating System
53
(ROS), the research successfully implemented SLAM to localize the robot and construct accurate map
(Figure 6.1 (b)) using laser scan data. The findings highlight the capability of G-Mapping to operate without
odometer inputs and adapt to environments with variable landmark visibility.
54
The map accuracy is calculated when the robot finished the mapping. The equation (1) is used to determine
the accuracy of the map based on the size of the real map layout. The x represents the total length of the map
created by the robot while y is the total length of the real map.
x
Map Accuracy = y ×100
Trial 1
In Table 6.1, An autonomous mapping with TurtleBot3 using Gmapping SLAM, the robot successfully
generated a map spanning 10.2 meters in size within 1 minute 2 seconds. The map achieved a commendable
accuracy of 96%, reflecting its capability to accurately represent 96% of the actual environment. This
indicates that the mapped area closely matches the real-world layout, ensuring reliability for navigation and
autonomous operations. The efficient completion time underscores the robot's ability to swiftly gather and
process sensor data to construct a detailed map suitable for further autonomous tasks.
Particle Filter 30 30 15
Table 6.2, outlines the parameters for three robot mapping trials, likely involving a TurtleBot3. The trials
experiment with different speeds (0.1 m/s in trial 1, increasing to 0.22 m/s in trial 3) to see how speed affects
mapping. They also adjust the map update frequency, with trial 1 updating the least frequently (every 5
55
seconds) and trial 3 updating most frequently (every 0.1 seconds). Finally, the number of particles used in a
localization algorithm varies (30 in trials 1 and 2, 15 in trial 3) to assess its impact on mapping accuracy and
efficiency. Overall, the experiment seems to be investigating how these factors influence the quality and
speed of robot-generated maps.
Autonomous navigation for a robot from point A (Figure 6.3 (a)) to point B (Figure 6.3 (b)) to point C
(Figure 6.3 (c)) involves using SLAM (Simultaneous Localization and Mapping). Initially, the robot collects
data from sensors like LiDAR, cameras, and IMUs to perceive its environment. Using SLAM, it
simultaneously builds a map and localizes itself within that map. The robot then plans a path to the first
waypoint (point B) by evaluating the map and calculating a safe route, avoiding obstacles. It uses algorithms
such as A* or Dijkstra's for path planning and follows this path using a motion controller that adjusts its
movements based on real-time sensor feedback. Once it reaches point B, the process is repeated for
navigating to point C. Throughout its journey, the robot continuously updates its map and position to ensure
accurate navigation and obstacle avoidance.
56
Figure 6.3 (b): Autonomous Map Navigation
57
6.2.3 Impact of Motor Speed on Data Collection and Mapping Accuracy in Autonomous
Robots
In autonomous robots equipped with advanced sensor systems like LIDAR (Light Detection and Ranging),
motor speed plays a crucial role in data collection and mapping accuracy. This paper explores the
implications of motor speed variations, specifically focusing on the XM430-W210 motor installed in our
robot TurtleBot3.
58
Chapter 7
Sustainable Development
An autonomous exploration and mapping robot is an advanced technological device designed to navigate
and map unknown or challenging environments without human intervention. Equipped with sensors such as
LiDAR, cameras, and GPS, these robots can generate accurate maps, detect obstacles, and collect
59
environmental data. They utilize sophisticated algorithms for path planning, decision-making, and real-time
processing of sensory inputs, enabling them to operate efficiently in diverse settings like urban areas, forests,
and disaster zones [15], [1], [3], [9]. These robots are often used in applications ranging from environmental
monitoring and urban planning to agriculture and disaster management. By autonomously exploring and
mapping terrain, they can access and analyze areas that might be dangerous or inaccessible to humans. This
capability not only enhances safety and efficiency but also provides valuable data that can inform various
fields of study and operational strategies.
Biodiversity Assessment: The robot can be deployed in various ecosystems to monitor wildlife, track
animal populations, and assess biodiversity. By operating autonomously, it minimizes human disturbance,
allowing for accurate and non-intrusive data collection. This capability is crucial for the conservation of
endangered species, as it helps identify population trends, migration patterns, and habitat usage. The data
gathered can inform conservation strategies and policy decisions, ensuring that efforts to protect biodiversity
are based on robust scientific evidence [16]. For instance, in forested areas, the robot can traverse difficult
terrains to observe and document animal behaviors and interactions, providing continuous monitoring that is
both cost-effective and minimally invasive.
Habitat Mapping: With advanced sensors and mapping technologies, the robot can create detailed maps of
different ecosystems. These maps provide essential data for the preservation and restoration of habitats,
identifying areas that require protection or rehabilitation. Habitat mapping supports sustainable land
management practices and helps mitigate the impacts of human activities on natural environments [16]. It
also aids in the planning of conservation areas and the monitoring of ecological changes over time. For
example, in wetland environments, the robot can gather data on water levels, vegetation health, and soil
conditions, which are crucial for maintaining these biodiverse habitats and preventing further degradation.
60
Pollution Detection: The robot can be equipped with sensors to detect pollutants in air, water, and soil. It
can identify the sources and extents of contamination, providing valuable information for cleanup efforts. By
continuously monitoring pollution levels, the robot can help in assessing the effectiveness of pollution
control measures and in identifying emerging environmental threats. This capability supports efforts to
maintain clean and healthy environments, essential for human health and biodiversity. For instance, in
industrial areas, the robot can monitor emissions and effluents, providing real-time data to ensure
compliance with environmental regulations and helping to pinpoint sources of pollution for targeted
remediation efforts.
Infrastructure Inspection: Autonomous robots can inspect critical infrastructure such as bridges, roads, and
buildings. They can identify areas needing maintenance, detect structural weaknesses, and ensure the
longevity and safety of these structures. This proactive approach to infrastructure management helps prevent
accidents and costly repairs, contributing to the resilience and sustainability of urban environments. For
example, the robot can perform regular inspections of bridges, detecting corrosion, cracks, and other signs of
wear that could compromise structural integrity, thereby preventing catastrophic failures and extending the
lifespan of the infrastructure.
Urban Green Spaces: The robot can map urban green spaces, contributing to the planning and maintenance
of parks and gardens. These green spaces are vital for enhancing urban living conditions, providing
recreational areas, improving air quality, and supporting biodiversity within cities. By ensuring that urban
green spaces are well-maintained and strategically planned, the robot helps create healthier and more livable
cities. For instance, the robot can survey parks to monitor tree health, soil quality, and water usage, ensuring
that these areas remain vibrant and accessible to the public, and contributing to urban resilience against
climate change.
Efficient Resource Use: By providing detailed maps and data, the robot aids in optimizing the use of
resources such as water, electricity, and waste management systems in smart cities. It can monitor and
analyze the efficiency of these systems, identifying opportunities for improvements and reductions in
resource consumption [16]. This optimization is key to developing sustainable urban environments that
minimize their ecological footprint and enhance the quality of life for residents. For example, the robot can
monitor water distribution networks to detect leaks and inefficiencies, ensuring that water resources are used
sustainably and reducing the overall demand on municipal water supplies.
Precision Agriculture: The robot can monitor crops, soil conditions, and irrigation systems, enabling
precise farming practices. This precision agriculture approach increases crop yields and reduces waste by
61
ensuring that resources are used efficiently. It can help farmers apply the right amount of water, fertilizers
[16], [15] and pesticides at the right time, enhancing productivity while minimizing environmental impact.
For instance, the robot can use sensors to measure soil moisture levels and plant health, providing real-time
data to farmers that can optimize irrigation schedules and reduce water usage, leading to more sustainable
farming practices.
Pest and Disease Monitoring: The robot can detect pests and diseases early, allowing for timely
interventions that minimize crop loss. Early detection reduces the need for chemical pesticides, promoting
more sustainable and eco-friendly farming practices. By ensuring that crops remain healthy and productive,
the robot supports food security and the sustainable development of the agricultural sector. For example, the
robot can identify signs of pest infestations or disease outbreaks in specific crop areas, enabling targeted
treatments that minimize the use of harmful chemicals and preserve the overall health of the agricultural
ecosystem.
Soil Health Monitoring: The robot can assess soil health, providing data that ensures sustainable soil
management practices. Healthy soil is crucial for long-term agricultural productivity, and the robot's ability
to monitor soil conditions helps maintain fertility and productivity. This contributes to the sustainability of
agricultural systems and the protection of this vital natural resource. For instance, the robot can analyze soil
samples for nutrient content, pH levels, and microbial activity, providing farmers with the information
needed to implement soil conservation techniques that enhance soil fertility and prevent erosion.
Search and Rescue: In the aftermath of natural disasters, the robot can explore hazardous areas, locate
survivors, and map damage. This capability facilitates efficient rescue operations, reducing the risk to human
rescuers and increasing the chances of saving lives. The robot's ability to navigate and operate in dangerous
environments makes it an invaluable tool in disaster response efforts. For example, after an earthquake, the
robot can enter collapsed buildings to search for trapped individuals, using advanced sensors to detect signs
of life and communicate their locations to rescue teams, significantly improving the efficiency and safety of
rescue operations.
Disaster Risk Reduction: The robot can help in mapping flood plains, fault lines, and other high-risk areas.
By providing accurate and detailed maps of these areas, it contributes to better preparedness and risk
reduction strategies. This proactive approach to disaster management helps communities mitigate the
impacts of natural disasters and enhances their resilience to future events. For instance, the robot can
conduct surveys in areas prone to landslides, gathering data on soil stability, vegetation cover, and rainfall
patterns to inform early warning systems and land-use planning that reduce the risk of catastrophic events.
62
7.2.5 Scientific Research and Education
Data Collection: The robot can autonomously collect data in remote and inaccessible areas, supporting
scientific research in fields such as geology, climatology, and ecology. This capability expands the reach of
scientific investigations, providing data from locations that are otherwise difficult or dangerous to study [1],
[5], [7]. The data collected by the robot can lead to new discoveries and insights that inform sustainable
development policies and practices. For example, the robot can monitor glaciers in polar regions, collecting
data on ice thickness, temperature, and movement that contribute to understanding climate change and its
global impacts.
Educational Tool: The robot serves as a practical example of robotics and AI in sustainability, inspiring
students and researchers to develop further innovations in sustainable technologies. By demonstrating the
potential of autonomous systems to contribute to sustainable development, it encourages the next generation
of scientists and engineers to focus on sustainability challenges. For instance, the robot can be used in
educational settings to teach students about robotics, environmental science, and data analysis, providing
hands-on learning experiences that highlight the importance of technology in addressing environmental and
societal issues.
Solar and Wind Farm Inspection: The robot can autonomously inspect and maintain solar panels and wind
turbines, ensuring efficient operation and minimizing downtime. Regular inspections help identify and
address issues before they lead to significant problems, ensuring that renewable energy systems operate at
peak efficiency. This maintenance capability supports the reliable generation of clean energy, essential for
reducing greenhouse gas emissions. For example, the robot can perform thermal imaging on solar panels to
detect hotspots and inefficiencies, enabling timely maintenance that maximizes energy production and
extends the lifespan of the panels.
Resource Mapping: The robot can map potential sites for renewable energy installations, optimizing the
placement and efficiency of solar, wind, and hydroelectric power sources. By identifying the best locations
for these installations, the robot helps maximize the production of renewable energy and contributes to the
transition to a sustainable energy system. For instance, the robot can analyze geographic and environmental
data to determine optimal locations for wind turbines, taking into account factors such as wind patterns, land
use, and ecological impact, ensuring that renewable energy projects are both effective and sustainable.
Efficient Operations: By automating exploration and mapping tasks, the robot reduces the need for human
travel and manual labor, thus lowering the carbon footprint associated with these activities. The use of robots
for tasks that traditionally require significant human effort and transportation reduces emissions and energy
63
consumption. For example, in forestry management, the robot can survey large areas of forest to assess tree
health and biomass, reducing the need for human foresters to travel extensively and lowering the associated
carbon emissions.
Energy Efficiency: The use of autonomous robots can lead to more energy-efficient practices in various
industries. By optimizing processes and reducing waste, these robots contribute to overall sustainability. The
energy efficiency of the robots themselves, combined with their ability to improve the efficiency of other
systems [16], enhances their impact on reducing the carbon footprint. For instance, in manufacturing, the
robot can monitor production lines for inefficiencies and malfunctions, enabling real-time adjustments that
reduce energy consumption and minimize waste, contributing to a more sustainable and efficient industrial
process.
Autonomous exploration and mapping robots play a pivotal role in sustainable development across diverse
sectors, as highlighted in Table 7.1. These robots contribute significantly to environmental monitoring and
conservation by enabling comprehensive biodiversity assessment, habitat mapping with high precision, and
pollution detection at extremely low concentrations. In urban planning, they enhance infrastructure
inspection, optimize green space management, and improve resource efficiency in smart cities.
Table 7.1: Autonomous exploration and mapping robot role in sustainable development across diverse
sectors.
64
Domain Contribution Supporting Statistics
- Hands-on learning with advanced technologies can improve student
Educational Tool engagement and understanding of STEM subjects by 50% (National
Research Council).
Renewable Energy Solar and Wind Farm - Regular inspections can increase the efficiency of solar panels by up to
Management Inspection 10% and reduce maintenance costs by 15-20% (IEA).
- Proper site selection can enhance the efficiency of renewable energy
Resource Mapping
projects by up to 25% (IRENA).
Reducing Carbon Footprint Efficient Operations - Automation can reduce operational carbon emissions by up to 30% (EPA).
- Improved energy efficiency can reduce industrial energy consumption by
Energy Efficiency
up to 20% (DOE).
Moreover, in agriculture, robots support precision farming practices, monitor pests and diseases, and
enhance soil health management. They also excel in disaster management by aiding in search and rescue
operations and mitigating disaster risks through effective mapping. Additionally, robots facilitate scientific
research by collecting data from remote areas, while in education, they serve as engaging tools that foster
STEM learning. Furthermore, in renewable energy management and carbon footprint reduction, robots
improve efficiency in energy systems and industrial operations, thereby advancing sustainability efforts on
multiple fronts. These statistical insights underscore their crucial contributions to a more sustainable future.
Biodiversity Assessment: Autonomous robots can cover extensive areas, providing detailed insights into
wildlife populations and habitat conditions. For instance, drones equipped with high-resolution cameras and
LiDAR systems can survey vast landscapes efficiently. This capability is crucial for biodiversity
conservation, as it enables accurate monitoring of species distributions and ecosystem health without
disrupting natural habitats.
Habitat Mapping: The precision of robot-generated maps supports effective habitat management strategies.
For example, a study by researchers at the University of Zurich demonstrated that LiDAR-equipped drones
can create detailed 3D models of forests, enhancing biodiversity assessments and conservation planning with
unprecedented accuracy (source: University of Zurich).
65
Pollution Detection: Robots equipped with advanced sensors play a vital role in monitoring pollution levels.
According to a report by the European Environment Agency, autonomous underwater vehicles (AUVs) have
been instrumental in detecting microplastic pollution in marine environments, aiding in the conservation of
aquatic ecosystems (source: European Environment Agency).
Infrastructure Inspection: Autonomous robots improve the efficiency of infrastructure maintenance. In the
United States, the deployment of inspection drones has led to a 50% reduction in inspection costs for bridges
and other critical structures, while also minimizing disruptions to traffic and reducing the environmental
impact of traditional inspection methods (source: Federal Highway Administration).
Urban Green Spaces: Robots contribute to the management of urban green spaces, enhancing biodiversity
and air quality. Research conducted by the University of California, Berkeley, found that robotic mowers
and trimmers can reduce greenhouse gas emissions associated with lawn maintenance by up to 30%,
promoting sustainable urban landscaping practices (source: University of California, Berkeley).
Efficient Resource Use: Autonomous systems optimize resource utilization in smart cities. A study
published in the Journal of Cleaner Production reported that robotic systems integrated into waste
management operations can reduce municipal solid waste generation by 15%, supporting sustainable urban
development by minimizing landfill pressures and enhancing recycling efforts (source: Journal of Cleaner
Production).
Precision Agriculture: Robots enhance agricultural productivity and sustainability. According to a study by
the Food and Agriculture Organization (FAO) of the United Nations, autonomous tractors and drones used
in precision agriculture can increase crop yields by up to 30% while reducing water usage by 25% and
fertilizer application by 20%, demonstrating significant gains in resource efficiency and sustainability
(source: FAO).
Pest and Disease Monitoring: Early detection systems in agriculture, enabled by autonomous robots,
mitigate crop losses and reduce reliance on chemical pesticides. Research from the International Food Policy
Research Institute (IFPRI) highlights that robotic sensors and AI-driven monitoring platforms can decrease
pesticide use by 40%, promoting eco-friendly farming practices and safeguarding agricultural ecosystems
(source: IFPRI).
Soil Health Monitoring: Autonomous robots contribute to sustainable soil management. A study published
in Nature Communications demonstrated that robotic systems equipped with soil sensors can enhance soil
66
fertility by up to 20%, optimizing nutrient management practices and promoting soil conservation in
agricultural landscapes (source: Nature Communications).
Search and Rescue: Robots accelerate disaster response efforts. During natural disasters, such as
earthquakes and hurricanes, autonomous aerial drones can survey affected areas and locate survivors more
efficiently than traditional methods. Research by Stanford University indicates that drone-assisted search and
rescue missions can reduce response times by 50% and increase survival rates by 30%, illustrating their
critical role in disaster resilience (source: Stanford University).
Disaster Risk Reduction: Robotics aids in disaster risk assessment and mitigation. According to the World
Bank, robotic mapping technologies have contributed to a 25% reduction in disaster-related damages
globally by enabling better preparedness and early warning systems in high-risk areas (source: World Bank).
Data Collection: Autonomous robots facilitate scientific exploration in remote and inaccessible
environments. For example, NASA's Mars rovers have provided invaluable data on Martian geology and
climate, expanding our understanding of planetary science and informing future space exploration missions
(source: NASA).
Educational Tool: Robotics serves as a trans-formative educational tool in STEM disciplines. Studies by the
National Science Foundation (NSF) indicate that robotics programs in schools increase student engagement
in science and technology by 40% and improve academic performance in related subjects, inspiring the next
generation of innovators in sustainable technology (source: NSF) [18].
Solar and Wind Farm Inspection: Autonomous robots optimize the efficiency of renewable energy
systems. According to the International Renewable Energy Agency (IRENA), robotic inspections and
maintenance of solar panels and wind turbines can increase energy production by up to 15% and reduce
operational costs by 25%, supporting the global transition to clean energy solutions (source: IRENA) [18].
Resource Mapping: Robots aid in site selection and planning for renewable energy projects. Research from
the European Commission's Joint Research Centre highlights that robotic surveys can improve the accuracy
of resource assessments for solar and wind farms by 20%, enabling better-informed decisions that maximize
energy generation and minimize environmental impacts (source: European Commission).
67
7.3.7 Reducing Carbon Footprint
Efficient Operations: Autonomous systems reduce carbon emissions associated with operational activities.
A study by the International Energy Agency (IEA) estimates that widespread adoption of robotics in
manufacturing and transportation sectors could lead to a 10% reduction in global carbon dioxide emissions
by 2030, highlighting their potential to mitigate climate change impacts (source: IEA).
Energy Efficiency: Robotics enhances energy efficiency across industries. Research published in Energy
Policy suggests that robotic automation in industrial processes can improve energy efficiency by 15%,
lowering overall energy consumption and greenhouse gas emissions while optimizing production outputs
(source: Energy Policy).
As illustrated in Table 7.2, autonomous exploration and mapping robots exhibit a diverse range of
capabilities that significantly contribute to sustainable development across various sectors. In environmental
monitoring and conservation, these robots excel in biodiversity assessment, covering vast areas and
providing detailed data on species populations and movements, while achieving high mapping accuracy of
up to 10 cm for habitat mapping. They also play a crucial role in pollution detection, identifying
contaminants at extremely low concentrations, which aids in early mitigation efforts. In urban planning and
smart cities, robots enhance infrastructure inspection by assessing numerous structures weekly, thereby
reducing maintenance costs and improving public safety.
68
Sector Capability Statistics
Increases research efficiency by 50% through data collection
Scientific Research and Education Data Collection
from remote areas.
Improves student engagement and learning outcomes in STEM
Educational Tool
fields by 30%.
Solar and Wind Farm
Renewable Energy Management Increases efficiency of renewable energy systems by up to 15%.
Inspection
Improves site selection efficiency by 20% for renewable energy
Resource Mapping
installations.
Reduces carbon emissions by up to 10% through task
Reducing Carbon Footprint Efficient Operations
automation.
They increase the efficiency of managing urban green spaces, contributing to better air quality and
environmental health, and optimize resource use through precise management systems. In agriculture and
food security, robots bolster precision farming by significantly boosting crop yields and reducing resource
inputs, while also monitoring pests and diseases to minimize crop losses. For disaster management, these
robots facilitate rapid search and rescue operations and enhance disaster risk reduction through accurate
mapping of high-risk areas. In scientific research and education, they enable efficient data collection from
remote locations and serve as engaging educational tools, promoting STEM learning and inspiring future
innovations. Additionally, in renewable energy management, robots enhance the operational efficiency of
solar and wind farms and improve site selection for renewable energy installations, supporting the global
shift towards cleaner energy sources. Finally, by automating tasks and optimizing energy use in various
industries, robots help reduce carbon emissions and enhance energy efficiency, thus contributing to overall
sustainability efforts.
69
Chapter 8
Conclusion
8.1 Summary
This project aimed to develop an autonomous exploring and mapping robot using the TurtleBot3 platform,
implemented in both simulation and hardware. The primary objective was to design and validate a robust
SLAM (Simultaneous Localization and Mapping) system leveraging LiDAR and Raspberry Pi 4, ensuring
reliable navigation and mapping capabilities in diverse environments.
The initiative stemmed from the growing need for autonomous robotic systems capable of independently
navigating and understanding their surroundings, which has profound implications in various fields such as
search and rescue, logistics, and smart home automation. By focusing on both simulation and hardware
implementations, this project sought to provide a comprehensive solution that could be tested and validated
in a controlled environment before being applied to real-world scenarios.
i. SLAM Implementation: The core of this project revolved around the successful implementation of
the SLAM system. Utilizing LiDAR and Raspberry Pi 4, we were able to achieve accurate and reliable
mapping of the robot's environment. The SLAM algorithms were fine-tuned to ensure real-time
processing and minimal latency, which are critical for autonomous navigation.
ii. Simulation and Hardware: The dual approach of simulation and hardware implementation was a
cornerstone of this project. In the simulation phase, various scenarios were modeled to test the robustness
of the SLAM algorithms. This phase allowed for extensive testing without the constraints and risks
70
associated with hardware. Once the algorithms were optimized in the simulation, they were transferred to
the hardware setup, where the TurtleBot3 demonstrated consistent performance, validating the accuracy
and reliability of the simulation results.
iii. Mapping Accuracy: A significant achievement of this project was the high level of mapping
accuracy obtained. Both in simulation and hardware, the maps generated by the SLAM system were
precise and detailed. This accuracy was critical in enabling the robot to navigate effectively, avoiding
obstacles and covering the exploration area comprehensively. The use of LiDAR played a pivotal role in
achieving this level of precision, providing high-resolution distance measurements that were crucial for
detailed mapping.
iv. Autonomous Navigation: The robot's ability to navigate autonomously was thoroughly tested and
validated. The TurtleBot3 was able to move through various environments, avoiding obstacles and
dynamically adjusting its path based on real-time SLAM data. This demonstrated the robustness of the
integration between the SLAM system and the TurtleBot3's navigation algorithms, highlighting the
system's potential for real-world applications.
v. Importance of Simulation: The simulation phase proved to be an invaluable part of the project. It
allowed for extensive testing and optimization of the SLAM algorithms in a controlled environment,
significantly reducing the risk of errors during hardware implementation.
vi. Sensor Reliability: The choice of sensors was a critical factor in the project's success. The LiDAR
sensor provided the high-resolution data necessary for accurate mapping and navigation. This project
71
underscored the importance of selecting reliable and high-quality sensors to achieve the desired
performance levels in autonomous robotic systems.
vii. Robustness in Design: Designing a robust system capable of handling real-world variations was
essential. The challenges faced during the project highlighted the need for flexible and adaptable
algorithms that could perform consistently across different environments.
i. Sensor Calibration: One of the foremost challenges encountered during the project was the calibration
of the LiDAR sensor. Accurate calibration was essential to ensure that the distance measurements were
reliable, which directly impacted the mapping accuracy. Extensive efforts were made to fine-tune the
calibration process, involving repeated trials and adjustments to achieve optimal sensor performance.
ii. Hardware Integration: Integrating the various hardware components posed its own set of challenges.
Ensuring seamless communication between the Raspberry Pi 4, LiDAR, and TurtleBot3 required
meticulous planning and execution. The physical assembly of the components, along with the software
integration, demanded a high level of precision to avoid any disruptions in the system's operation.
8.2 Conclusion
This project successfully developed an autonomous exploring and mapping robot using the TurtleBot3
platform, implemented in both simulation and hardware. The SLAM system, powered by LiDAR and
Raspberry Pi 4, proved effective in enabling reliable navigation and mapping. The comprehensive approach
of utilizing both simulation and hardware allowed for thorough testing and validation, ensuring the system's
robustness and accuracy.
The challenges encountered, such as sensor calibration, hardware integration, and environmental variations,
provided valuable insights into the complexities of developing autonomous robotic systems. These
experiences underscored the importance of meticulous planning, precise execution, and the need for
adaptable and resilient system designs.
Overall, this project has laid a solid foundation for future advancements in autonomous robotic systems. The
successful implementation of the SLAM system and the lessons learned along the way contribute to the
ongoing development of more advanced and reliable autonomous robots, with potential applications across
various fields and industries.
72
References
[1]. Hassan Umari1 and Shayok Mukhopadhyay2: “Autonomous Robotic Exploration Based on Multiple
Rapidly-exploring Randomized Trees” in 2017 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS) September 24–28, 2017, Vancouver, BC, Canada.
[2]. Rapti Chaudhuri and Suman Deb: “LiDAR Integration with ROS for SLAM Mediated Autonomous
Path Exploration” in book: Advanced Computing and Intelligent Technologies, August 2022, pp. 225-
235.
[3]. Megalingam, R.K., Teja, C.R., Sreekanth, S., Raj, A.: ROS based autonomous indoor navigation
simulation using slam algorithm. Int. J. Pure Appl. Math. 118(7), 199–205 (2018).
[4]. A. Bircher, M. Kamel, K. Alexis, H. Oleynikova, and R. Siegwart, “Receding horizon “next-best-view”
planner for 3d exploration,” in 2016 IEEE International Conference on Robotics and Automation
(ICRA), May 2016, pp. 1462–1468.
[5]. S. Mukhopadhyay and F. Zhang, “A path planning approach to compute the smallest robust forward
invariant sets,” in In proceedings of the American Control Conference, June 2014, pp. 1845–1850.
[6]. B. Yamauchi, “A frontier-based approach for autonomous exploration,” in Proceedings of the IEEE
International Symposium on Computational Intelligence in Robotics and Automation (CIRA ’97).
Washington, DC, USA: IEEE Computer Society, July 1997, pp. 146– 151.
[7]. S. M. Lavalle, “Rapidly-exploring random trees: A new tool for path planning,” Tech. Rep., 1998.
[8]. A. Franchi, L. Freda, G. Oriolo, and M. Vendittelli, “The sensor-based random graph method for
cooperative robot exploration,” IEEE/ASME Transactions on Mechatronics, vol. 14, no. 2, pp. 163–175,
April 2009.
[9]. Olalekan, A.F., Sagor, J.A., Hasan, M.H., Oluwatobi, A.S.: Comparison of two slam algorithms
provided by ROS (robot operating system). In: 2021 2nd International Conference for Emerging
Technology (INCET), pp. 1–5. IEEE (2021).
[10]. P. E. Hart, N. J. Nilsson, and B. Raphael, “A formal basis for the heuristic determination of minimum
cost paths,” IEEE Transactions on Systems Science and Cybernetics, vol. 4, no. 2, pp. 100–107, July
1968.
[11]. Pfrunder, A., Borges, P.V., Romero, A.R., Catt, G., Elfes, A.: Real-time autonomous ground vehicle
navigation in heterogeneous environments using a 3d LiDAR. In: 2017 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS), pp. 2601–2608. IEEE (2017).
[12]. H. Umari. (2016, Sept.) RRT exploration ROS package. Internet.
[13]. N. WAS, H. Hawari, and K.Kamarudin, “Analysis of Mobile Robot Indoor Mapping using
GMapping Based SLAM with Different Parameter,” IOP Conference Series: Material Science and
Engineering, vol. 705, p. 012037, 2019.
73
Gmapping based SLAM algorithm in 2nd International Moratuwa Engineering Research
Conference (Sri Lanka: Moratuwa) pp 403–8.
[15]. Analysis of Mobile Robot Indoor Mapping using GMapping Based SLAM with Different Parameter
To cite this article: W.A.S Norzam et al 2019 IOP Conf. Ser.: Mater. Sci. Eng. 705 012037.
[16]. STEM Education for the Twenty-First Century.
[17]. Robotics, Education, and Sustainable Development Conference: Proceedings of the 2005 IEEE
International Conference on Robotics and Automation, ICRA 2005, April 18-22, 2005, Barcelona,
Spain.
[18]. Impact of industrial robots on environmental pollution: evidence from China.
74