FYP Report (AEMR)
Mudassar Hussain
EE-2030-0142
Ahsan Mansha
EE-2030-0123
Syed Mujahid Bin Nauman
EE-2030-0068
Supervisor:
Dr. Abdul Khaliq
Co-Supervisor:
Dr. Yasir
Final Approval
Fa-2020/B.Sc-EE/2030-0123
Ahsan Mansha
Supervisor: Dr. Abdul Khaliq, Professor
Head, Department of Electrical and Computer Engineering
Dedication
We dedicate our final year project, "Autonomous Exploring and Mapping Robot," to our family and many
friends. A special feeling of gratitude and respect goes to our loving parents, whose words of encouragement
and push for tenacity ring in our ears.
We also dedicate this project to our many friends, our supervisor, Co-supervisor, advisor, and other faculty
members who have supported us throughout this journey. Your guidance, support, and belief in our abilities
have been instrumental in making this project a reality.
Acknowledgments
We would like to express our deepest gratitude to everyone who has contributed to the successful
completion of our final year project, "Autonomous Exploring and Mapping Robot."
First and foremost, we extend our heartfelt thanks to our supervisor, Dr. Abdul Khaliq, for his invaluable
guidance, insightful feedback, and unwavering support throughout this project. Your expertise and
encouragement have been pivotal in shaping our work.
We are profoundly grateful to our co-supervisor, Dr. Yasir, whose advice and direction have been
instrumental in overcoming numerous challenges. Your mentorship has been a cornerstone of our success.
We also wish to thank our families for their unending love, patience, and encouragement. Your belief in us
has been a constant source of motivation.
Finally, to our friends and peers, thank you for your camaraderie, assistance, and the countless discussions
that have enriched our understanding and made this journey memorable.
This project is a collective achievement, and we are deeply appreciative of everyone who has contributed to
it. Thank you.
Mudassar Hussain
Ahsan Mansha
Syed Mujahid Bin Nauman
Abstract
The "Autonomous Exploring and Mapping Robot" project focuses on developing a sophisticated robotic
system designed to navigate and map unknown environments autonomously. Utilizing an advanced LiDAR sensor, the robot can perceive and interpret its surroundings with high precision. Central to its functionality
is the implementation of simultaneous localization and mapping (SLAM) algorithms, which enable the robot
to construct and update maps in real-time while accurately tracking its position within these maps. This
capability is critical for ensuring the robot's effective navigation and situational awareness.
To enhance its autonomous navigation, the robot employs machine learning techniques for path planning and
obstacle avoidance. These techniques allow the robot to dynamically adapt to changes in the environment,
making decisions in real time to avoid obstacles and select optimal paths. The integration of these
technologies ensures that the robot can explore and map complex environments efficiently and reliably.
The project's ultimate aim is to create a versatile and robust robotic platform that can be deployed in various
applications. Potential uses include search and rescue operations, where the robot can quickly and safely
explore hazardous areas; environmental monitoring, where it can gather data in remote or dangerous
locations; and industrial automation, where it can navigate and operate within dynamic and unpredictable
settings. By achieving significant advancements in autonomous navigation and environmental mapping, this
project aspires to contribute to the broader field of robotics, enhancing the capabilities and applications of
autonomous systems in real-world scenarios.
Mudassar Hussain
Ahsan Mansha
Syed Mujahid Bin Nauman
Table of Contents
Declaration .............................................................................................................................................................................................I
Final Approval ......................................................................................................................................................................................II
Dedication ........................................................................................................................................................................................... III
Acknowledgments ...............................................................................................................................................................................IV
Abstract ................................................................................................................................................................................................ V
List of Figures .................................................................................................................................................................................. VIII
List of Tables ...................................................................................................................................................................................... IX
Chapter 1 ............................................................................................................................................................................................... 1
1.1 Project Overview .............................................................................................................................................................................1
1.2 Problem Statement .......................................................................................................................................................................... 1
1.3 Project Objectives ........................................................................................................................................................................... 2
1.4 Expected Outcomes .........................................................................................................................................................................3
1.5 Project Methodology ....................................................................................................................................................................... 4
1.6 Report Outline ................................................................................................................................................................................. 5
Chapter 2 ............................................................................................................................................................................................... 6
2.1 Literature Review ............................................................................................................................................................................6
2.1.1 Problem Statement .............................................................................................................................................................. 6
2.1.2 Key Terms/Concepts ........................................................................................................................................................... 6
2.1.3 Scope of Review ..................................................................................................................................................................7
2.2 Analysis Of Mobile Robot Indoor Mapping Using Gmapping Based SLAM With Different Parameters ......................................7
2.3 The Sensor Based Random Graph Method For Cooperative Robot Exploration ........................................................................... 8
2.4 A Frontier-Based Approach For Autonomous Exploration ............................................................................................................ 8
2.5 A Path Planning Approach To Compute The Smallest Robust Forward Invariant Sets .................................................................9
2.6 Real Time Autonomous Ground Vehicle Navigation In Heterogeneous Environments Using a 3D Lidar ................................... 9
2.7 Autonomous Robotic Exploration Based On Multiple Rapidly-Exploring Randomized Trees ................................................... 10
Chapter 3 ............................................................................................................................................................................................. 12
3.1 System Design ...............................................................................................................................................................................12
3.1.1 Block Diagram .................................................................................................................................................................. 12
3.2 Hardware Design ...........................................................................................................................................................................14
3.2.1 Turtlebot3 .......................................................................................................................................................................... 14
3.2.2 LiDAR ............................................................................................................................................................................... 15
3.2.3 Raspberry Pi 4 ................................................................................................................................................................... 17
3.2.4 Motor Controllers and Motors ...........................................................................................................................................18
3.3 Software Design ............................................................................................................................................................................ 18
3.3.1 ROS ................................................................................................................................................................................... 18
3.3.2 Gazebo ...............................................................................................................................................................................18
3.3.3 Rviz ................................................................................................................................................................................... 19
3.4 Flow Chart .....................................................................................................................................................................................20
3.5 Inputs, Outputs, and Processing .................................................................................................................................................... 22
3.5.1 Inputs ................................................................................................................................................................................. 22
3.5.2 Outputs .............................................................................................................................................................................. 22
3.5.3 Processing - Algorithms Explanation ................................................................................................................................22
3.6 Gantt Chart .................................................................................................................................................................................... 23
Chapter 4 ............................................................................................................................................................................................. 24
4.1 Algorithm Implementation ............................................................................................................................................................24
4.1.1 Introduction to SLAM ....................................................................................................................................................... 24
4.1.2 SLAM Algorithm Selection and Rationale ....................................................................................................................... 25
4.2 RRT Algorithm and Adaptations for Dynamic Obstacles ............................................................................................... 27
4.2.1 Basic RRT Algorithm ........................................................................................................................................................27
4.2.2 Adaptations for Dynamic Obstacles ..................................................................................................................................28
4.2.3 Implementation considerations ..........................................................................................................................................28
Chapter 5 ............................................................................................................................................................................................. 30
5.1 Experimental Setup ....................................................................................................................................................................... 30
5.1.1 Description of Testing Environment ................................................................................................................................. 30
5.1.2 Map Generation .................................................................................................................................................................31
5.2 ROS (Robot Operating System) Configuration: ........................................................................................................................... 32
5.3 Launch Files Setup ........................................................................................................................................................................33
5.3.1 Problem Being Solved .......................................................................................................................................................33
5.3.2 Proposed Solution ..............................................................................................................................................................33
5.3.3 Pseudocode ........................................................................................................................................................................34
5.3.4 Code ...................................................................................................................................................................................34
5.4 SLAM Initialization ...................................................................................................................................................................... 34
5.4.1 Problem Being Solved .......................................................................................................................................................34
5.4.2 Proposed Solution ..............................................................................................................................................................35
5.4.3 Pseudocode ........................................................................................................................................................................35
5.4.4 Code ...................................................................................................................................................................................35
5.5 RRT Exploration Initialization ......................................................................................................................................................36
5.5.1 Problem Being Solved .......................................................................................................................................................36
5.5.2 Proposed Solution ..............................................................................................................................................................36
5.5.3 Pseudocode ........................................................................................................................................................................36
5.5.4 Code ...................................................................................................................................................................................37
5.6 Autonomous Navigation ............................................................................................................................................................... 38
5.7 Hardware Setup ............................................................................................................................................................................. 39
5.7.1 Connection with Raspberry Pi 4 ........................................................................................................................................40
5.7.2 Real-Time Ranging Visualization ..................................................................................................................................... 40
5.8 Limitations .................................................................................................................................................................................... 45
5.8.1 Driver Availability ............................................................................................................................................................ 45
5.8.2 Computational Constraints ................................................................................................................................................ 46
5.8.3 Implementation Complexity ..............................................................................................................................................46
5.8.4 Reliability and Robustness ................................................................................................................................................ 46
Chapter 6 ............................................................................................................................................................................................. 47
6.1 Results and Analysis ..................................................................................................................................................................... 47
6.2 Detailed Findings .......................................................................................................................................................................... 48
6.2.1 Map Generation .................................................................................................................................................................48
6.2.2 Map Navigation .................................................................................................................................................................50
6.2.3 Impact of Motor Speed on Data Collection and Mapping Accuracy in Autonomous Robots ..........................................51
Chapter 7 ............................................................................................................................................................................................. 52
7.1 Sustainable Development ..............................................................................................................................................................52
7.1.1 Autonomous Exploration and Mapping Robot in Sustainable Development ................................................................... 52
7.2 Contribution to Sustainable Development .................................................................................................................................... 53
7.2.1 Environmental Monitoring and Conservation ...................................................................................................................53
7.2.2 Urban Planning and Smart Cities ...................................................................................................................................... 53
7.2.3 Agriculture and Food Security .......................................................................................................................................... 54
7.2.4 Disaster Management ........................................................................................................................................................ 55
7.2.5 Scientific Research and Education .................................................................................................................................... 55
7.2.6 Renewable Energy Management .......................................................................................................................................55
7.2.7 Reducing Carbon Footprint ............................................................................................................................................... 56
7.3 Effect on environment ...................................................................................................................................................................57
7.3.1 Environmental Monitoring and Conservation ...................................................................................................................57
7.3.2 Urban Planning and Smart Cities ...................................................................................................................................... 58
7.3.3 Agriculture and Food Security .......................................................................................................................................... 58
7.3.4 Disaster Management ........................................................................................................................................................ 59
7.3.5 Scientific Research and Education .................................................................................................................................... 59
7.3.6 Renewable Energy Management .......................................................................................................................................59
7.3.7 Reducing Carbon Footprint ............................................................................................................................................... 59
Chapter 8 ............................................................................................................................................................................................. 62
8.1 Summary ....................................................................................................................................................................................... 62
8.1.1 Summary of Objectives and Achievements ...................................................................................................................... 62
8.1.2 Key Findings and Results ..................................................................................................................................................62
8.1.3 Challenges Faced ...............................................................................................................................................................64
8.2 Conclusion .....................................................................................................................................................................................64
References ........................................................................................................................................................................................... 65
List of Figures
Figure 3.1: Block Diagram ...............................................................................................................................................12
Figure 3.2: Turtlebot3 ...................................................................................................................................................... 15
Figure 3.3: Working Principle of LiDAR ........................................................................................................................ 16
Figure 3.4: LiDAR ........................................................................................................................................................... 17
Figure 3.5: Raspberry Pi 4 ............................................................................................................................................... 17
Figure 3.6: Environment .................................................................................................................................................. 19
Figure 3.7: Turtlebot3 in Gazebo Simulator .................................................................................................................... 19
Figure 3.8: Flow Chart ..................................................................................................................................................... 20
Figure 3.9: Gantt Chart .....................................................................................................................................................23
Figure 4.1: Exploration Strategy ......................................................................................................................................25
Figure 4.2: Flowchart of Working Principle of Gmapping Algorithm ............................................................................ 27
Figure 4.3: Working of RRT ............................................................................................................................................28
Figure 4.4: Nodes in Tree Structure of RRT ...................................................................................................................29
Figure 5.1: Mapping Environment ...................................................................................................................................30
Figure 5.2: Generating Map by turtlebot3 (using LIDAR sensor) ...................................................................................31
Figure 5.3: Outcome ........................................................................................................................................................ 32
Figure 5.4: Navigation of Mapped Environment ............................................................................................................. 38
Figure 5.5: LiDAR ........................................................................................................................................................... 39
Figure 5.6: Connection with Raspberry Pi 4 ....................................................................................................................40
Figure 6.1 (a): Environment .............................................................................................................................................47
Figure 6.1 (b): Robot Performance .................................................................................................................................. 47
Figure 6.2: Distance traveled by Robot ........................................................................................................................... 48
Figure 6.3 (a): Map Navigation ........................................................................................................................................50
Figure 6.3 (b): Map Navigation ....................................................................................................................................... 50
Figure 6.3 (c): Map Navigation ........................................................................................................................................51
Figure 7.1: Illustrates the Sustainable Development Goals ............................................................................................. 52
Figure 8.1: Implementation of SLAM ............................................................................................................................. 62
Figure 8.2: Illustration of Autonomous Navigation using RRT Algorithm .....................................................................63
List of Tables
Chapter 1
1.1 Project Overview
Autonomous exploring and mapping robots employ a sophisticated combination of sensors [2], algorithms,
and control systems to perceive their environment, plan paths, avoid obstacles, and generate maps. The
integration of computer vision and artificial intelligence allows these robots to interpret complex data and
make real-time decisions, significantly enhancing their efficiency and reliability. These capabilities are
crucial for applications such as disaster response, where robots need to operate in unpredictable and often
dangerous environments, and industrial automation, where they can streamline processes and improve safety.
The importance of autonomous exploring and mapping robots is underscored by the growing demand for
automated solutions in various industries. For instance, in search and rescue operations [11], autonomous robots can enter dangerous areas, locate survivors, and provide critical information to human responders. In
agriculture, they can map fields, monitor crop health, and identify areas needing attention, thereby increasing
efficiency and yield. In industrial settings, they can navigate large warehouses, track inventory, and ensure
operational efficiency without human intervention. The versatility and potential of these robots make them a
focal point of modern robotics research and development.
1.2 Problem Statement
The specific problem addressed by this project is the development of a robot that can autonomously explore
and map an unknown environment with a high degree of accuracy and reliability [2]. This includes
overcoming several key challenges:
Sensor Accuracy and Integration: Ensuring that the robot's sensors can accurately perceive the
environment, detect obstacles, and gather data for mapping. This involves integrating various types of
sensors, such as LiDAR, cameras, and ultrasonic sensors, to provide a comprehensive understanding of the
surroundings.
Real-Time Data Processing: Developing algorithms that can process sensor data in real-time, allowing the
robot to make immediate decisions about navigation and obstacle avoidance [11]. This requires optimizing
computational efficiency to ensure that the robot can operate effectively in dynamic environments.
Path Planning and Navigation: Creating robust path planning algorithms that enable the robot to navigate
efficiently while avoiding obstacles [1], [5], [7]. This involves designing strategies that can adapt to
changing conditions and unforeseen obstacles in the environment.
Simultaneous Localization and Mapping (SLAM): Implementing SLAM techniques that allow the robot
to build a map of its environment while simultaneously tracking its location within that map [2]. This dual
capability is essential for autonomous exploration and requires advanced algorithmic solutions.
Environmental Adaptability: Ensuring that the robot can adapt to a variety of environments, from
structured indoor settings to unstructured outdoor terrains. This includes developing versatile algorithms and
robust hardware capable of handling different scenarios.
The challenges associated with autonomous exploration and mapping are compounded by the need for
seamless integration of hardware and software components. This project aims to address these challenges by
developing a comprehensive system that leverages cutting-edge technologies in computer vision, machine
learning, and robotics.
1.3 Project Objectives
Autonomous Navigation: Develop a robot capable of navigating unknown environments without human
intervention. This includes real-time path planning and obstacle avoidance using advanced sensors and
algorithms. The navigation system should be able to operate in both structured and unstructured
environments, adapting to different conditions and challenges.
Environment Mapping: Implement techniques for creating detailed and accurate maps of the explored area.
This involves using SLAM (Simultaneous Localization and Mapping) to concurrently map the environment
and track the robot's location within it [3]. The mapping system should be able to handle dynamic changes in
the environment and update the map in real-time.
Obstacle Detection and Avoidance: Integrate sensors and algorithms to detect and avoid obstacles
dynamically, ensuring the robot can navigate safely and efficiently. This includes developing robust obstacle
detection mechanisms that can handle different types of obstacles, from static objects to moving entities.
Real-Time Data Processing: Ensure the robot can process sensor data and make navigation decisions in
real-time, maintaining high performance and accuracy [11]. This involves optimizing the computational
algorithms and hardware to ensure quick and reliable data processing.
System Integration: Achieve seamless integration of hardware and software components, including sensors,
micro-controllers, and algorithms, to create a cohesive and functional autonomous robot [2]. The integration
should ensure that all components work together harmoniously, providing a reliable and efficient system.
Scalability and Adaptability: Design a scalable and adaptable system that can be extended or modified for
various applications. This includes ensuring that the system can be easily adapted to different environments
and tasks, demonstrating its versatility and potential for broader use.
1.4 Expected Outcomes
Functional Autonomous Robot: A fully functional robot capable of autonomous exploration and mapping
in various environments. The robot should be able to navigate, detect obstacles, and create accurate maps
without human intervention. This outcome includes demonstrating the robot's capabilities in different
scenarios, such as indoor navigation, outdoor exploration, and complex environments.
Enhanced Mapping Techniques: Improved techniques for real-time environment mapping, leveraging
advancements in computer vision and machine learning to enhance the accuracy and detail of generated
maps. This includes developing new algorithms for SLAM and exploring innovative approaches to map
representation and visualization.
Reliable Obstacle Avoidance: Robust obstacle detection and avoidance mechanisms that ensure safe and
efficient navigation in dynamic and unpredictable environments. This involves testing the robot's ability to
handle various types of obstacles and ensuring that the avoidance system is reliable and effective.
Scalable and Adaptable System: A scalable and adaptable system design that can be extended or modified
for various applications, demonstrating the potential for broader use in different domains. This includes
showcasing the system's versatility and its ability to be customized for specific tasks and environments.
Contribution to Research and Development: Valuable contributions to the field of autonomous robotics,
providing insights, methodologies, and data that can inform future research and development efforts. This
includes publishing research findings, sharing data sets, and contributing to the academic and industrial
communities.
Potential Applications: Identification of potential applications and impact of the autonomous exploring and
mapping robot in fields such as search and rescue operations, environmental monitoring, industrial
automation, agriculture, and more. This includes exploring real-world applications and demonstrating the
practical benefits of the developed system.
1.5 Project Methodology
This project involves developing an autonomous robot capable of exploring and mapping environments. The
process includes software setup, algorithm development, testing, and hardware integration.
1. Software Setup
Begin by installing the Ubuntu operating system to support ROS. Set up ROS to provide the tools and libraries necessary for developing robot software.
2. Path Planning Algorithm
Implement the Rapidly-exploring Random Tree (RRT) algorithm for efficient path planning in unknown environments.
3. SLAM Algorithms
Use Gmapping and Cartographer for creating maps and tracking the robot's position simultaneously, and apply them for real-time map updates and accuracy.
4. Simulation Environment
Utilize Gazebo to test navigation and mapping algorithms virtually before hardware deployment.
5. Algorithm Testing
Refine and test the algorithms to ensure reliability.
6. Decision Making
Combine the successful algorithms to create a cohesive system for exploration and mapping.
7. Hardware Integration
Assemble the hardware platform (TurtleBot3 with LiDAR, OpenCR, and Raspberry Pi) and interface it with the developed software.
8. Final Integration
Merge the developed algorithms with the hardware to finalize the autonomous exploring and mapping robot system.
1.6 Report Outline
The report is structured into eight chapters, aligned with the objectives and outline of the study.
The chapters are organized as follows:
Chapter 1: This chapter introduces the project, outlining its goals, significance, and scope. It sets the stage
for the entire report by explaining the problem the project aims to address and the expected outcomes.
Chapter 2: This chapter reviews existing research related to autonomous exploring and mapping robots. It
identifies key concepts, summarizes significant studies, and highlights gaps in the current literature.
Chapter 3: This chapter provides an overview of the system architecture, detailing both the hardware and
software components used. It explains how each component fits into the overall design and describes the
inputs, outputs, and processing required for the robot's operation.
Chapter 4: This chapter discusses the algorithms used in the project, focusing on the implementation of
SLAM (Simultaneous Localization and Mapping) and the RRT (Rapidly-exploring Random Tree) algorithm.
It explains how these algorithms are adapted for dynamic environments and the rationale behind their
selection.
Chapter 5: This chapter describes the experimental setup, including the testing environment, configuration,
and limitations. It outlines the process of initializing and testing the SLAM and RRT exploration algorithms
in both simulated and real-world environments.
Chapter 6: In this chapter, the results of the experiments are presented and analyzed. It discusses the
performance of the robot in map generation and navigation, and examines how factors such as motor speed
affect data collection and mapping accuracy.
Chapter 7: This chapter explores the robot's contributions to sustainable development, discussing its
potential applications in various fields and its positive impact on the environment.
Chapter 8: The final chapter summarizes the project's key findings, achievements, and challenges. It reflects
on the objectives and provides recommendations for future work.
References: A comprehensive list of the sources cited throughout the report, ensuring proper
acknowledgment and citation of relevant literature.
Chapter 2
2.1 Literature Review
2.1.2 Key Terms/Concepts
SLAM (Simultaneous Localization and Mapping): A computational problem and technique used in
robotics to construct or update a map of an unknown environment while simultaneously keeping track of the
robot's location within it.
Path Planning: The process by which a robot determines an optimal path from its current location to a
desired destination while avoiding obstacles.
Obstacle Avoidance: The capability of a robot to detect and navigate around obstacles in its path to ensure
safe and efficient movement.
2.1.3 Scope of Review
Sensor Technologies: Examination of various sensor technologies, with a focus on LiDAR, and their roles in environmental perception and mapping.
SLAM Algorithms: Analysis of different SLAM methodologies, their strengths, limitations, and
applications in autonomous navigation.
Machine Learning in Robotics: Exploration of machine learning techniques employed for path planning
and obstacle avoidance, highlighting recent advancements and practical implementations.
Applications of Autonomous Robots: Review of the potential applications of autonomous exploring and
mapping robots in fields such as search and rescue, environmental monitoring, and industrial automation.
2.2 Analysis Of Mobile Robot Indoor Mapping Using Gmapping Based SLAM With Different Parameters
The experimental setup involved a mobile robot with a Hokuyo laser range finder and a netbook, with wireless communication through a router. The experiment was conducted in two different lab environments, assessing the impact of varying robot speed, mapping delay, and particle filter size on mapping quality. The results showed that varying these parameters significantly affects mapping accuracy and processing time. The best results were achieved with a robot speed of 0.1333 m/s, a mapping delay of 1 s, and a particle filter of 30 particles.
In conclusion, the study demonstrates the importance of optimizing SLAM parameters for improving mapping accuracy and efficiency in mobile robots. The findings can be applied to enhance the performance of mobile robots in various practical applications, including search and rescue and autonomous navigation (W. A. S. Norzam, H. F. Hawari, and K. Kamarudin, 2019) [13].
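For context, the parameters discussed above correspond directly to options exposed by the standard ROS slam_gmapping package. The following minimal Python sketch shows how such values could be set before the SLAM node starts; the parameter names are those of slam_gmapping, while the specific values simply mirror the ones reported as best in the reviewed study and are assumptions in any other setup.

#!/usr/bin/env python
# Minimal sketch: configuring gmapping SLAM parameters from Python before the
# slam_gmapping node is launched. Parameter names follow the standard ROS
# slam_gmapping package; the values mirror the reviewed study and are
# assumptions for any other robot or environment.
import rospy

rospy.init_node('gmapping_param_setter')

# Number of particles in the Rao-Blackwellized particle filter.
rospy.set_param('/slam_gmapping/particles', 30)

# Seconds between map updates (the "mapping delay" in the study).
rospy.set_param('/slam_gmapping/map_update_interval', 1.0)

# Robot speed (0.1333 m/s in the study) is not a gmapping parameter; it is
# enforced by whichever node publishes velocity commands and is noted here
# only as a reference value.
MAX_LINEAR_SPEED = 0.1333  # m/s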
2.3 The Sensor Based Random Graph Method For
Cooperative Robot Exploration
PETIS (Programmable Vehicle with Integrated Sensor) aims to create an autonomous robot that can navigate
independently. The research focuses on the robot's sense of sight using the LDS-01 LIDAR sensor due to its
affordability and community support. The LIDAR (Light Detection and Ranging) sensor uses laser pulses to
measure distances, producing high-precision data for mapping environments. Compared to other sensors like
ultrasonic and camera-based systems, LIDAR provides better accuracy and faster processing.
The LDS-01 is a low-cost LIDAR sensor from ROBOTIS, capable of 360-degree scanning with a range of
12 cm to 350 cm, although tests showed it performs optimally between 29.9 cm and 290.7 cm. It connects to
a Raspberry Pi through a USB interface, with ROS (Robot Operating System) providing software support.
The study involved setting up the LIDAR sensor, ensuring proper hardware and software connections, and
processing the data .Data was collected in a controlled environment (5m x 5m classroom), capturing
readings over a 5-minute period. The data from the LIDAR was stored in text format due to its large size,
and processed into Cartesian coordinates using trigonometric functions.
The sensor produced around 537,297 rows of data per 5 minutes, with significant noise observed near the
sensor.
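The polar-to-Cartesian conversion mentioned above is a direct application of trigonometry: x = r·cos(θ), y = r·sin(θ). The short Python sketch below illustrates the idea; the file name lidar_readings.txt and the angle/distance line format are assumptions made only for this example.

import math

def polar_to_cartesian(angle_deg, distance_mm):
    # Convert one LiDAR reading (angle in degrees, range in mm) into x, y.
    angle_rad = math.radians(angle_deg)
    return (distance_mm * math.cos(angle_rad),
            distance_mm * math.sin(angle_rad))

# Convert every reading in a hypothetical text dump of the sensor, where each
# line holds "angle distance" separated by whitespace.
points = []
with open('lidar_readings.txt') as f:
    for line in f:
        angle, dist = map(float, line.split())
        points.append(polar_to_cartesian(angle, dist))
print(len(points), "points converted")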
The LIDAR sensor's readings were mapped to a 2D plane, highlighting its effective range and limitations.
Statistical analysis of the data showed a normal distribution with specific quartiles and bounds.
Noise in the data needs to be addressed using filtering algorithms like the Extended Kalman Filter
(EKF). Future research will involve implementing SLAM (Simultaneous Localization and Mapping) to
improve environmental sensing and mapping accuracy.
The study successfully demonstrated the use of the LDS-01 LIDAR sensor for 2D mapping and boundary
detection. The sensor's performance was slightly below its specified range, suggesting the need for further
calibration and algorithmic improvements (Franchi, L. Freda, G. Oriolo, and M. Vendittelli) [8].
2.4 A Frontier-Based Approach For Autonomous Exploration
In this approach, evidence grids are used to represent the environment, with each cell storing the probability of occupancy. Frontiers, the boundaries between explored free space and unexplored territory, are detected in the grid, and the robot plans and executes paths to reach them. To improve map accuracy, sonar data is combined with laser data to reduce the impact of specular reflections.
The paper presents experimental results obtained from a Nomad 200 mobile robot equipped with a laser
rangefinder, sonar sensors, and infrared sensors. The robot explored two real-world office environments
filled with various obstacles like chairs, desks, tables, and boxes.
The frontier-based exploration method showed a clear advantage in its ability to handle both open and cluttered spaces, and in its capability to deal with walls and obstacles in arbitrary orientations. It provided efficient exploration by focusing on information-rich frontiers (Brian Yamauchi, 2020) [6].
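To make the idea of a frontier concrete, the sketch below marks free occupancy-grid cells that border unknown cells, using the common ROS convention of 0 = free, 100 = occupied, and -1 = unknown. This is a minimal illustration of the concept, not the exact detection procedure of the cited paper.

import numpy as np

FREE, UNKNOWN = 0, -1  # ROS occupancy-grid convention (occupied cells are > 0)

def find_frontier_cells(grid):
    # Return (row, col) indices of free cells with at least one unknown neighbour.
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN
                   for nr, nc in neighbours):
                frontiers.append((r, c))
    return frontiers

# Toy 4x4 map: the left half has been explored and is free, the right half is unknown.
grid = np.array([[0, 0, -1, -1],
                 [0, 0, -1, -1],
                 [0, 0, -1, -1],
                 [0, 0, -1, -1]])
print(find_frontier_cells(grid))  # the second column of cells borders the unknown area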
2.5 A Path Planning Approach To Compute The Smallest Robust Forward Invariant Sets
Traditionally, computing the smallest robust forward invariant set (RFIS) has been computationally challenging. This method leverages path planning algorithms, specifically the A* algorithm, to approximate the RFIS boundary. The state space
is discretized, and a cost function is employed to guide the path planning process. The algorithm iteratively
refines the RFIS approximation.
While effective for two-dimensional systems with additive disturbances, the method's applicability to higher
dimensions and different disturbance types remains an area for future exploration.
The proposed method is limited to two-dimensional systems with additive disturbances. The accuracy of the
approximation depends on the discretization level and other parameters (Shayok Mukhopadhyay and Fumin
Zhang, 2014) [5].
2.6 Real Time Autonomous Ground Vehicle Navigation In Heterogeneous Environments Using a 3D LiDAR
The system successfully performed over 60 km of autonomous driving in various conditions without system failures, and it demonstrated robustness to map changes by successfully navigating with both current and outdated maps. Comparison with RTK-GPS ground truth showed good localization accuracy, with higher variance in dynamic or challenging environments (e.g., bushland).
Furthermore, the use of a pyramidal multi-level surface map improved localization accuracy and efficiency.
The system's independence from GPS, pattern recognition, and artificial landmarks makes it suitable for
various outdoor environments.
The paper evaluates the C-LOC algorithm in a heterogeneous environment consisting of:
Densely built areas: The algorithm demonstrated reliable performance in areas with buildings, suggesting
that the 3D LiDAR and SLAM approach is well-suited for such environments.
Sparsely built areas and bushland: While the algorithm still functioned, its performance was less robust in
these areas due to the dynamic nature of vegetation and the potential for occlusion. This is reflected in the
higher localization variance observed in these regions.
Parking lots: The presence of moving cars introduced additional challenges, but the system was still able to
perform adequately.
The authors tested the system's resilience to map changes by using both current and outdated maps. The
vehicle successfully completed autonomous runs using a map that was 1.5 months old, demonstrating the
system's robustness to environmental changes.
While localization was still possible with the outdated map, the performance was slightly degraded, as
indicated by increased localization variance. This is expected due to changes in the environment over time.
Overall, the experimental results highlight the C-LOC algorithm's ability to handle diverse environments
while demonstrating the importance of map freshness for optimal performance (Pfrunder, Paulo V K Borges1 ,
Adrian R Romero1 , Gavin Catt1 , Alberto Elfes1, 2017) [11].
Key Findings:
The chosen parameters, including robot speed, mapping delay, and particle filter size, significantly affect
both the accuracy and time taken for map creation. This indicates that a balance between robot speed,
mapping delay, and particle filter size is crucial for achieving optimal mapping results. A faster robot speed
with a shorter mapping delay can reduce mapping time but may compromise accuracy due to increased
computational load and potential data loss.
Higher robot speeds and shorter mapping delays generally result in faster mapping times but can negatively
impact map accuracy. Conversely, lower speeds and longer delays improve accuracy but increase mapping
time. The complexity of the environment (size, feature density) also influences mapping performance.
Larger environments with fewer features require more time to map accurately.
While the authors focus solely on G-Mapping and parameter optimization, it is essential to consider other
SLAM methods for a comprehensive understanding. A direct comparison within the paper is lacking, but
based on general knowledge and other research, we can outline the strengths and weaknesses of G-Mapping
relative to other common SLAM techniques.
G-Mapping, based on Rao-Blackwellized Particle Filters (RBPF), is a popular choice for 2D SLAM due to
its relative simplicity and computational efficiency. However, it has limitations compared to other methods.
G-Mapping is efficient, can handle dynamic environments, and is relatively easy to implement. However, it is prone to particle depletion, and its accuracy can degrade in large-scale environments or with loop closures. It is also limited to 2D mapping (Hassan Umari and Shayok Mukhopadhyay, 2017) [1].
Chapter 3
3.1 System Design
3.1.1 Block Diagram
The block diagram illustrates the comprehensive system design and workflow for developing an autonomous
exploring and mapping robot. Here is a detailed explanation of each component and its role in the overall
system:
1. ROS (Robot Operating System)
Install Ubuntu: The process starts with the installation of the Ubuntu operating system, which is a
prerequisite for running ROS.
ROS Installation: ROS is installed on the Ubuntu operating system. ROS provides the necessary
tools and libraries for robot software development [12].
2. Path Planning Algorithm
RRT Algorithm: The Rapidly-exploring Random Tree (RRT) algorithm is used for path planning and navigation [7]. It helps in finding an efficient path for the robot to explore unknown environments (a minimal illustrative sketch of the RRT loop is given at the end of this section).
3. SLAM Algorithms
Cartographer Algorithm: This algorithm is used for SLAM, which involves building a map of the environment while simultaneously keeping track of the robot’s position within that map.
Gmapping Algorithm: Another SLAM algorithm used for creating and updating the map in real-time [14].
4. Simulation Environment
Gazebo Simulator: Gazebo is a simulation environment in ROS where the robot’s algorithms can be
tested virtually before implementing them on actual hardware [12]. It helps in evaluating the
performance of navigation and mapping algorithms.
5. Algorithm Testing
Mapping Accuracy: The accuracy of the maps generated by the SLAM algorithms is tested in the
simulation environment.
Exploration and Coverage: The robot's ability to explore and cover the environment efficiently is
tested.
Path Planning Performance: The performance of the path planning algorithms (like RRT) is
evaluated to ensure they find the optimal path.
6. Decision Making
Accuracy Check: After testing, if the algorithms are accurate, they are combined. If not, further
refinement and testing are done.
Combine Algorithms: The successful algorithms are combined to create a robust system for
exploration and mapping.
7. Hardware Integration
Hardware Components:
TurtleBot3: The chosen robotic platform that includes all necessary components.
LiDAR: A sensor used for distance measurement and environment scanning.
OpenCR: An embedded controller board used to interface with sensors and actuators.
DYNAMIXEL Wheels: Actuators used for robot movement.
Raspberry Pi: A single-board computer that serves as the main processing unit.
Hardware Assembly: All hardware components are assembled to form the physical robot.
8. Final Integration
Software/Hardware Integration: The combined algorithms and software are integrated with the
hardware components to create the final autonomous exploring and mapping robot system [12].
This diagram provides a clear flow of how the autonomous exploring and mapping robot system is designed,
from initial software setup to algorithm development and testing, and finally to hardware integration and
deployment. Each step ensures that the robot is capable of accurately mapping and navigating its
environment autonomously.
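To complement the description of item 2 above, the sketch below shows the core RRT loop in two dimensions: sample a random point, find the nearest tree node, extend toward the sample by a bounded step, and keep the new node if it is collision-free. The step size, workspace bounds, and circular obstacle list are illustrative assumptions; the exploration-specific RRT variant used in this project is described in Chapter 4.

import math
import random

STEP = 0.2                        # extension step size (assumed)
OBSTACLES = [((1.0, 1.0), 0.3)]   # (centre, radius) circles (assumed)

def collision_free(p):
    # Trivial collision check against circular obstacles.
    return all(math.dist(p, centre) > radius for centre, radius in OBSTACLES)

def nearest(tree, q_rand):
    # Node in the tree closest to the random sample.
    return min(tree, key=lambda q: math.dist(q, q_rand))

def steer(q_near, q_rand):
    # Move from q_near toward q_rand by at most STEP.
    d = math.dist(q_near, q_rand)
    if d <= STEP:
        return q_rand
    t = STEP / d
    return (q_near[0] + t * (q_rand[0] - q_near[0]),
            q_near[1] + t * (q_rand[1] - q_near[1]))

def build_rrt(q_start, iterations=500, bounds=(0.0, 3.0)):
    tree = [q_start]
    parent = {}                   # child -> parent, to recover a path later
    for _ in range(iterations):
        q_rand = (random.uniform(*bounds), random.uniform(*bounds))
        q_near = nearest(tree, q_rand)
        q_new = steer(q_near, q_rand)
        if collision_free(q_new):
            tree.append(q_new)
            parent[q_new] = q_near
    return tree, parent

tree, parent = build_rrt((0.0, 0.0))
print(len(tree), "nodes grown")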
3.2 Hardware Design
3.2.1 Turtlebot3
Turtlebot3 (Figure 3.2) is a low-cost, personal robot kit with open-source software, making it an ideal
platform for research, education, and product prototyping. It is designed to support the Robot Operating
System (ROS) [12], which provides libraries and tools to help software developers create robot applications.
Key features of the Turtlebot3 include:
Modular Design: The Turtlebot3's modular design allows users to easily customize and upgrade the robot
with different sensors, actuators, and computing platforms.
Compact Size: Its small footprint makes it suitable for navigating through tight spaces and performing tasks
in indoor environments.
Open Source: Both the hardware and software of the Turtlebot3 are open source, allowing for extensive
customization and community-driven improvements.
Scalability: The platform supports various configurations, from basic models suitable for beginners to more
advanced setups for complex research projects.
3.2.2 LiDAR
What is a LiDAR Sensor?
LIDAR (Light Detection and Ranging) is a remote sensing technology that uses laser light to measure
distances to objects. It works by emitting laser pulses, which then bounce back from objects to the sensor as
shown in Figure 3.3. By measuring the time it takes for the pulses to return, the system calculates the
distance to the object.
Working Principle
Emission: The LIDAR sensor emits a laser pulse towards the target.
Reflection: The laser pulse hits an object and reflects back to the sensor.
Detection: The sensor detects the reflected pulse.
Calculation: The system calculates the distance to the object based on the time it took for the pulse to return,
using the formula:
Distance = (Speed of Light × Time of Flight) / 2
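As a quick numerical check of this formula, a short Python sketch suffices (the 20 ns round-trip time is an invented example value):

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_time_s):
    # Distance to the target from a round-trip time-of-flight measurement.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A pulse returning after 20 nanoseconds corresponds to roughly 3 metres.
print(tof_distance(20e-9))  # ~2.998 m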
LiDAR
The LiDAR (Figure 3.4) is a compact and cost-effective distance sensor that provides accurate and reliable
distance measurements. It is widely used in robotics for navigation, obstacle detection, and mapping. Key
features of the LiDAR include:
High Precision: It offers accurate distance measurements with a range of up to 8 meters and an accuracy of
±6 cm.
Compact and Lightweight: The small size and low weight make it easy to integrate into various robotic
platforms.
Fast Response Time: It can provide up to 250 measurements per second, allowing for real-time obstacle
detection and avoidance.
Low Power Consumption: The sensor is energy-efficient, making it suitable for battery-powered
applications.
Figure 3.4: LiDAR
3.2.3 Raspberry Pi 4
The Raspberry Pi 4 (Figure 3.5) is a popular choice for the processing unit in many robotic systems,
including the Turtlebot3. It offers a good balance between performance, power consumption, and cost. Key
features of the Raspberry Pi 4 include:
Processor: The Raspberry Pi 4 is equipped with a quad-core ARM Cortex-A72 processor, running at 1.5
GHz. This provides sufficient computational power for most robotic applications.
Memory: It comes with multiple RAM options (2GB, 4GB, or 8GB), allowing users to choose based on
their performance requirements.
Connectivity: The Raspberry Pi 4 offers a range of connectivity options, including USB 3.0, Ethernet, Wi-
Fi, and Bluetooth, which are essential for interfacing with sensors, actuators, and other peripherals.
Expandability: It has multiple GPIO pins and interfaces (SPI, I2C, UART), enabling easy integration with
various sensors and modules.
Software Support: It runs a variety of operating systems, including Raspbian (Raspberry Pi OS) and
Ubuntu, both of which support ROS, making it a versatile and developer-friendly platform.
3.2.4 Motor Controllers and Motors
Motor controllers are essential components that regulate the speed, direction, and torque of the motors used
in robotic systems. In the context of the Turtlebot3, motor controllers play a critical role in ensuring smooth
and precise movement. Key features of the motor controllers include:
Speed Control: They provide precise control over the speed of the motors, which is crucial for tasks such as
navigation and path following.
Direction Control: Motor controllers allow for the easy reversal of motor direction, enabling the robot to
move forward, backward, and turn.
Torque Regulation: They help manage the torque delivered to the motors, which is important for
maintaining stability and handling various terrains.
Integration with ROS: The motor controllers used in Turtlebot3 are designed to integrate seamlessly with
the ROS framework, facilitating easy communication and control through ROS nodes and topics.
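As an illustration of this topic-based control, the following is a minimal sketch, not taken from the project code; /cmd_vel is the usual TurtleBot3 topic name and the speed values are placeholders. A ROS node commands the base by publishing geometry_msgs/Twist messages, which the motor controllers translate into wheel speed and direction commands.

#!/usr/bin/env python
# Minimal sketch (not from the project code) of commanding the TurtleBot3 base
# through ROS: the node publishes geometry_msgs/Twist messages on /cmd_vel.
import rospy
from geometry_msgs.msg import Twist

def drive_forward_and_turn():
    rospy.init_node('simple_motor_commander')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)                 # publish commands at 10 Hz

    cmd = Twist()
    cmd.linear.x = 0.1                    # forward speed in m/s (placeholder value)
    cmd.angular.z = 0.3                   # turn rate in rad/s (placeholder value)

    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    try:
        drive_forward_and_turn()
    except rospy.ROSInterruptException:
        pass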
3.3.1 ROS
The Robot Operating System (ROS) is a flexible framework for writing robot software. It provides a
collection of tools, libraries, and conventions aimed at simplifying the task of creating complex and robust
robot behavior across a wide variety of robotic platforms [9]. ROS is structured in a modular fashion,
allowing for the integration of different packages and components, which can be reused and shared across
various projects. Its communication infrastructure is based on nodes, topics, and services, enabling
distributed processing and seamless integration of sensors, actuators, and algorithms. ROS also includes
simulation capabilities and interfaces to several hardware abstraction layers, making it an essential tool for
both academic research and industrial applications.
3.3.2 Gazebo
Gazebo is a powerful open-source robotics simulator that integrates with ROS to provide a rich
development environment for testing and developing algorithms, designing robots, and performing
regression testing using realistic scenarios as shown in Figure 3.6. It offers a high-fidelity physics engine,
a rich library of robot models and environments, and robust sensor simulation capabilities. Gazebo enables
users to simulate populations of robots (Figure 3.7) in complex indoor and outdoor environments, with
accurate rendering and dynamic interactions. The ability to model the physical properties of robots and
environments, including friction, gravity, and lighting, allows for detailed and realistic testing before
deployment in real-world scenarios.
Figure 3.6: Environment
3.3.3 Rviz
Rviz, short for ROS visualization, is a 3D visualization tool for ROS applications. It allows developers to
visualize sensor data, state information, and the robot’s environment in real-time. Rviz supports various
types of data, including point clouds, laser scans, occupancy grids, and transforms, making it an invaluable
tool for debugging and development [4]. Users can interact with the visualization by adding, removing,
and configuring displays for different data types, which helps in understanding the robot's perception and
actions within the environment. Rviz's flexibility and ease of use make it a crucial component in the
development and testing phases of robotic systems, aiding in the rapid identification and resolution of
issues.
3.4 Flow Chart
This flowchart (Figure 3.8) represents the feature-based Autonomous Exploration Algorithm (AEA) for a
mapping robot. The process involves both local and global search strategies to determine the robot’s next
exploration goal and manage the exploration process efficiently [1].
1. Start
3. Obtain Features
The robot finds the best goal and scores the exploration index from the obtained features. The
exploration index indicates how beneficial exploring a particular goal is.
Global Search: Checks if the exploration index is 0 (no more beneficial goals found globally).
Yes: If the index is 0, the algorithm returns the finish mapping condition.
No: If the index is not 0, it stores the actual goal and exploration index, and continues the
process.
Local Search: Checks if the exploration index is 0 (no more beneficial goals found locally).
Yes: If the index is 0, the algorithm switches to global search and returns to the start.
No: If the index is not 0, it stores the actual goal and exploration index, and continues the
process.
The algorithm stores the current goal and its exploration index, ensuring that the robot remembers its
objectives and progress.
When the exploration index is 0 during a global search, the algorithm recognizes that the mapping
process is complete and returns the finish mapping condition.
If no beneficial goals are found in the local search, the algorithm switches to a global search and
starts the process again.
9. Exit
The process ends at the "EXIT" node once the finish mapping condition is met.
The flowchart describes an iterative process where the robot switches between local and global searches to
efficiently map the environment. By evaluating the exploration index, the algorithm ensures the robot always
targets the most beneficial areas for exploration until the entire environment is mapped. This adaptive
strategy allows the robot to balance thorough exploration with efficient mapping.
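The decision logic of the flowchart can be sketched as below; local_candidates(), global_candidates(), and score() are hypothetical helpers standing in for the feature-extraction and exploration-index scoring steps, so this illustrates only the control flow, not the project's implementation.

# Sketch of the local/global goal-selection loop from the flowchart (Figure 3.8).
# local_candidates(), global_candidates(), and score() are hypothetical helpers.
def select_next_goal(robot_map, robot_pose):
    mode = 'local'
    while True:
        if mode == 'local':
            candidates = local_candidates(robot_map, robot_pose)
        else:
            candidates = global_candidates(robot_map)

        # Score every candidate; the exploration index measures how beneficial
        # exploring that goal would be.
        scored = [(score(goal, robot_map), goal) for goal in candidates]
        best_index, best_goal = max(scored, key=lambda s: s[0], default=(0, None))

        if best_index == 0:
            if mode == 'local':
                mode = 'global'          # nothing useful nearby: widen the search
                continue
            return None                  # global search exhausted: mapping is finished

        return best_goal                 # store and pursue the most beneficial goal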
3.5 Inputs, Outputs, and Processing
This section details the inputs and outputs for each major component of the system, as well as the processing
steps involved. Understanding these elements is crucial for the seamless integration and functionality of the
robot.
3.5.1 Inputs
1. Sensor Data:
LiDAR Scans: Distance measurements used for mapping and obstacle detection [2].
2. User Commands:
Initial Configuration Settings: Parameters set by the user to initialize and configure the
robot.
3.5.2 Outputs
1. Actuator Commands:
Motor Speed and Direction Signals: Control signals sent to the motors to achieve the
desired movement.
2. Mapped Environment:
LiDAR Data Processing: Convert raw LiDAR data into a point cloud, filter out noise, and
segment the data to identify obstacles and features [2].
Path Planning Algorithms: Implement algorithms such as A* and Dijkstra’s to find the
shortest path in a known environment, and RRT for more complex, unknown terrains [5].
RRT: Randomly samples the environment to build a tree of possible paths, suitable
for high-dimensional spaces.
Dynamic Obstacle Avoidance: Develop reactive planning strategies that allow the robot to
adjust its path in real-time, avoiding moving and static obstacles.
Techniques: Potential fields, velocity obstacles, dynamic window approach.
3. SLAM:
Gmapping: Uses particle filters to estimate the robot’s pose and build a map incrementally
[13].
4. Control Algorithms:
PID Control: Adjust motor commands based on the error between the desired and actual
states, ensuring smooth and precise movement.
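A minimal PID sketch of this idea is shown below; the gains are placeholder values, not the controller actually tuned for the robot.

# Minimal PID controller sketch (illustrative only; gains are placeholders).
# It turns the error between the desired and actual state into a motor correction.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: correcting the heading towards a desired yaw angle.
heading_pid = PID(kp=1.2, ki=0.0, kd=0.1)
correction = heading_pid.update(error=0.15, dt=0.1)   # 0.15 rad error, 100 ms step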
The project begins on October 10, 2023, and ends on June 3, 2024. The tasks are organized sequentially,
with each task building on the previous one, indicating a logical progression of activities necessary for
developing the autonomous exploring and mapping robot.
Chapter 4
As illustrated in Figure 4.1, an exploration strategy for a robot is depicted. The robot employs a laser
scanner to detect obstacles and plan its path. A path planner is responsible for laying out a safe route for the
robot based on the information received from the laser scanner. The strategy also incorporates a process
called SLAM (Simultaneous Localization and Mapping), which updates a map of the environment as the
robot explores [1].
4. SLAM in the Loop:
While the image emphasizes RRT-based exploration, SLAM is likely running in the background
throughout this process.
As the robot moves and collects sensor data, SLAM incorporates this data to update the map and
refine the robot's location estimate.
Rao-Blackwellized particle filters enable GMapping to maintain a set of particles representing possible
robot poses and update the map incrementally. This approach provides a balance between accuracy and
computational efficiency, making it suitable for real-time operation in dynamic environments.
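Conceptually, each GMapping update follows the predict-weight-resample cycle of a particle filter. The sketch below illustrates that cycle only; motion_model(), scan_likelihood(), and update_map() are hypothetical stand-ins for GMapping's internal routines.

# Conceptual sketch of the predict-weight-resample cycle behind GMapping's
# Rao-Blackwellized particle filter (illustrative only).
import random

def particle_filter_step(particles, odometry, scan):
    # 1. Predict: propagate every pose hypothesis through a (noisy) motion model.
    particles = [motion_model(p, odometry) for p in particles]

    # 2. Weight: score each hypothesis by how well the scan matches its own map.
    weights = [scan_likelihood(scan, p) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]

    # 3. Resample: keep particles in proportion to their weights.
    particles = random.choices(particles, weights=weights, k=len(particles))

    # 4. Map update: each surviving particle integrates the scan into its map.
    for p in particles:
        update_map(p, scan)               # hypothetical map-integration step
    return particles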
As shown in Figure 4.2, the working principle of the Gmapping algorithm is described below:
1. Setup:
You'll need a ROS (Robot Operating System) environment configured with the gmapping and rviz
packages.
Ensure your LiDAR driver is set up to publish scan data on a specific ROS topic (e.g., /scan); a minimal publisher sketch is shown after this list.
Your robot's odometry information should also be published on another topic (e.g., /odom).
2. Running GMapping:
A launch file is typically used to launch the gmapping node along with any necessary configuration
parameters. This launch file defines topics for subscribing to sensor data and publishing the map.
3. Data Flow:
The gmapping node subscribes to the LiDAR scan topic (/scan). It also subscribes to the odometry
topic (/odom) for robot movement information.
The LiDAR data provides distance readings in various directions, building a 2D
representation of the environment.
GMapping uses the particle filter approach as described earlier. It considers the robot's pose
(location and orientation) as a set of particles.
5. RVIZ Visualization:
Launch RVIZ in a separate terminal.
Add various displays to visualize the process:
Map: This displays the current map built by GMapping.
LaserScan: This shows the live scan data points from the LiDAR in real-time.
TF (Robot Pose): This displays the estimated robot pose (position and orientation)
based on the GMapping results.
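Referring back to step 1, a minimal driver node that publishes sensor_msgs/LaserScan messages on /scan might look like the sketch below; read_ranges() is a hypothetical function returning one sweep of distance readings, and the frame id and range limits are placeholder values.

#!/usr/bin/env python
# Minimal sketch of a driver node publishing sensor_msgs/LaserScan on /scan,
# as assumed by gmapping in step 1 above.
import math
import rospy
from sensor_msgs.msg import LaserScan

def publish_scans():
    rospy.init_node('lidar_driver')
    pub = rospy.Publisher('/scan', LaserScan, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        ranges = read_ranges()                       # hypothetical: one full sweep
        scan = LaserScan()
        scan.header.stamp = rospy.Time.now()
        scan.header.frame_id = 'base_scan'
        scan.angle_min = -math.pi
        scan.angle_max = math.pi
        scan.angle_increment = (scan.angle_max - scan.angle_min) / len(ranges)
        scan.range_min = 0.1                         # metres
        scan.range_max = 8.0                         # metres
        scan.ranges = ranges
        pub.publish(scan)
        rate.sleep()

if __name__ == '__main__':
    publish_scans()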
Figure 4.2: Flowchart of Working Principle of Gmapping Algorithm
Initialization: Start with an initial tree T containing the root node at the start position.
Iteration:
1. Sample a random point q_rand in the configuration space.
2. Find the nearest node q_near in the tree T to q_rand.
3. Generate a new node q_new by moving from q_near towards q_rand by a step size ε.
4. If q_new is in a valid configuration (i.e., not in collision with obstacles), add it to the tree T.
Termination: Repeat the iteration until the tree reaches the goal region or the maximum number of
iterations is reached.
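A compact sketch of this iteration is given below; collision_free() is a hypothetical obstacle check, and the 10 m × 10 m sampling bounds, step size, and goal radius are placeholder values.

# Compact 2D RRT sketch of the iteration described above (illustrative only).
import math
import random

def rrt(start, goal, step=0.2, max_iter=2000, goal_radius=0.3):
    tree = {start: None}                          # node -> parent
    for _ in range(max_iter):
        q_rand = (random.uniform(0, 10), random.uniform(0, 10))
        q_near = min(tree, key=lambda q: math.dist(q, q_rand))
        d = math.dist(q_near, q_rand)
        if d == 0:
            continue
        t = min(step / d, 1.0)                    # step from q_near towards q_rand
        q_new = (q_near[0] + t * (q_rand[0] - q_near[0]),
                 q_near[1] + t * (q_rand[1] - q_near[1]))
        if collision_free(q_near, q_new):         # hypothetical collision check
            tree[q_new] = q_near
            if math.dist(q_new, goal) < goal_radius:
                return tree                       # goal region reached
    return tree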
As illustrated in Figure 4.4, a Rapidly Exploring Random Tree (RRT) is a tree-like structure employed for
robot motion planning. Each node in the tree represents a possible configuration, or pose, of the robot within
its environment [1]. The RRT is constructed iteratively by selecting a random point in the workspace and
attempting to extend the tree towards it. This extension prioritizes small steps to ensure smooth and feasible
motions, while also cleverly avoiding obstacles [1]. Through repeated iterations, the RRT expands, creating
a network of potential paths for the robot to explore its surroundings.
Adapting the RRT algorithm for dynamic obstacles involves various strategies that consider the
movement and velocities of obstacles [12]. These adaptations ensure that the algorithm can plan safe and
efficient paths in environments where obstacles are not static, making RRT a versatile and robust solution
for dynamic path planning challenges.
Chapter 5
Figure 5.2 below illustrates the TurtleBot3 robot actively generating a map of its environment using a
LIDAR sensor within the Gazebo simulation software. The robot is positioned at the center of a structured
layout featuring white spherical objects arranged in a grid pattern. Surrounding this central area is a black
hexagonal structure with green cylindrical pillars at each vertex.
The blue rays emanating from the central point, where the TurtleBot3 is located, represent the LIDAR
sensor's scanning process. These rays illustrate how the sensor emits laser beams to detect and measure the
distance to nearby objects. The reflections of these laser beams allow the robot to gather spatial data about
its surroundings, which it uses to create a detailed map of the environment.
As the LIDAR sensor scans, it collects information about the positions and distances of the surrounding
objects, including the white spheres and the green pillars. This data is crucial for the robot's navigation and
mapping algorithms, enabling it to build an accurate representation of the space it is operating in. By
continuously scanning and updating the map, the TurtleBot3 can plan its movements, avoid obstacles, and
navigate efficiently within the simulated environment.
2. Modifying the Simulation Environment:
Environment Customization: The Gazebo simulation environment was modified to reflect the
custom-designed maps. This involved placing models of walls, obstacles, and other
environmental features in Gazebo to match the custom maps.
1. turtlebot3:
Description: The turtlebot3 package includes URDF (Unified Robot Description Format) files,
launch files, and configuration files for the TurtleBot3.
Nodes Used: Nodes for controlling and simulating the TurtleBot3, including turtlebot3_bringup
which initializes the robot's sensors and actuators.
2. turtlebot3_gazebo:
Description: This package integrates the TurtleBot3 with the Gazebo simulator, providing
necessary launch files and world files.
Nodes Used: Nodes for launching the Gazebo simulation environment, such as gazebo and
spawn_model.
3. Gmapping:
Description: The gmapping package implements a laser-based SLAM (Simultaneous
Localization and Mapping) algorithm.
Nodes Used: The slam_gmapping node processes LIDAR data to generate a 2D occupancy grid
map in real-time, which can be visualized in RViz.
4. RRT_exploration:
Description: This package implements the RRT (Rapidly-exploring Random Tree) algorithm for
autonomous exploration.
Nodes Used: Custom nodes for executing the RRT exploration algorithm, integrating with the
SLAM map to guide the robot's path planning and navigation.
5.3.3 Pseudocode
1. Initialize SLAM:
5.3.4 Code
<launch>
<!-- Node for SLAM using the gmapping package -->
<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
<!-- Set the frame in which the robot base is located -->
<param name="base_frame" value="base_footprint"/>
<!-- Set the frame for odometry data -->
<param name="odom_frame" value="odom"/>
<!-- Set the frame for the generated map -->
<param name="map_frame" value="map"/>
<!-- Topic for laser scan data -->
<param name="scan_topic" value="scan"/>
<!-- SLAM parameter: grid map resolution in meters per cell -->
<param name="delta" value="0.05"/>
<!-- SLAM parameter: update interval in meters for linear movement -->
<param name="linearUpdate" value="0.1"/>
<!-- SLAM parameter: update interval in radians for angular movement -->
<param name="angularUpdate" value="0.1"/>
</node>
</launch>
5.4.2 Proposed Solution
This launch file initializes the slam_gmapping node. It:
Configures parameters such as base_frame, odom_frame, and map_frame for defining robot
frame transformations and map reference.
Specifies the scan_topic where LIDAR data is published.
Sets parameters like delta, linearUpdate, and angularUpdate to control map resolution and
update rates.
5.4.3 Pseudocode
1. Initialize SLAM Node:
base_frame: Set to "base_footprint" - Defines the frame in which the robot's base is
located.
odom_frame: Set to "odom" - Specifies the frame for publishing odometry data.
map_frame: Set to "map" - Defines the frame for the generated map.
scan_topic: Set to "scan" - Specifies the topic where laser scan data is published.
delta: Set to 0.05 - Grid map resolution in meters per cell.
linearUpdate: Set to 0.1 - Update interval in meters for linear movement.
angularUpdate: Set to 0.1 - Update interval in radians for angular movement.
2. Launch RViz for Visualization:
5.4.4 Code
<launch>
<!-- Node for SLAM using the gmapping package -->
<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
<!-- Set the frame in which the robot base is located -->
<param name="base_frame" value="base_footprint"/>
<!-- Set the frame for odometry data -->
<param name="odom_frame" value="odom"/>
<!-- Set the frame for the generated map -->
<param name="map_frame" value="map"/>
<!-- Topic for laser scan data -->
<param name="scan_topic" value="scan"/>
<!-- SLAM parameter: grid map resolution in meters per cell -->
<param name="delta" value="0.05"/>
<!-- SLAM parameter: update interval in meters for linear movement -->
<param name="linearUpdate" value="0.1"/>
<!-- SLAM parameter: update interval in radians for angular movement -->
<param name="angularUpdate" value="0.1"/>
</node>
</launch>
5.5.2 Proposed Solution
This launch file initializes the RRT exploration node. It:
Configures parameters such as map_topic and scan_topic to integrate with the SLAM output and
LIDAR data.
Defines base_frame for robot base localization.
Sets exploration_radius to specify the radius for exploration around the robot.
Sets goal_tolerance to determine the tolerance for reaching exploration goals.
5.5.3 Pseudocode
1. Initialize RRT Exploration Node:
map_topic: Set to "/map" - Specifies the topic where the map generated by SLAM is published.
scan_topic: Set to "/scan" - Specifies the topic where laser scan data is published.
base_frame: Set to "base_footprint" - Defines the reference frame of the robot's base.
exploration_radius: Set to 10.0 - Defines the radius for exploration around the robot.
goal_tolerance: Set to 0.5 - Defines the tolerance for reaching exploration goals.
5.5.4 Code
<launch>
<!-- Node for RRT exploration -->
<node name="rrt_exploration" pkg="rrt_exploration" type="exploration_node" output="screen">
<!-- Topic for the map generated by SLAM -->
<param name="map_topic" value="/map"/>
<!-- Topic for laser scan data -->
<param name="scan_topic" value="/scan"/>
<!-- Frame of the robot base -->
<param name="base_frame" value="base_footprint"/>
<!-- Radius for exploration around the robot -->
<param name="exploration_radius" value="10.0"/>
<!-- Tolerance for goal reaching -->
<param name="goal_tolerance" value="0.5"/>
</node>
</launch>
In our project, we effectively utilized ROS and various packages to achieve successful simulation and
mapping of an environment using SLAM and RRT algorithms. Using RViz, we visualized real-time sensor
data and map generation, enhancing our ability to monitor and evaluate the mapping process. The detailed
configuration and launch files ensured a robust simulation setup that accurately demonstrated the capabilities
of our autonomous exploring and mapping robot. While we faced challenges with the hardware
implementation, the simulation results effectively showcase our project's objectives and achievements.
5.6 Autonomous Navigation
As illustrated in Figure 5.4, a TurtleBot3 robot is engaged in autonomous navigation within a pre-mapped
environment. This process can be broken down into several key steps:
1. Map Acquisition: This critical step likely happened before the scene depicted. It involves creating a
map of the environment the TurtleBot3 will navigate. Commonly, robots use sensors like LIDAR
(Light Detection and Ranging) that emit light pulses to measure distances and build a detailed map.
In the image, the map itself is a two-dimensional representation of the surroundings, likely generated
by this LIDAR data.
2. Localization: Once the map is established, the TurtleBot3 needs to determine its location within that
map. This ongoing process is called localization. Sensors like odometry (wheel encoders) provide
estimates of how far the robot has travelled, but these can accumulate errors. To improve accuracy,
the TurtleBot3 might use sensor data from its LIDAR to constantly compare its surroundings with the
map, refining its position.
3. Path Planning: With a map and its location established, the robot can now plan a path to its
destination point. Path planning algorithms consider factors like the robot's size, obstacles, and the
most efficient route. These algorithms take the map data and the target destination into account to
generate a series of waypoints (intermediate goals) for the robot to follow.
4. Navigation and Control: Armed with a planned path, the TurtleBot3 translates those waypoints into
motor commands. This involves controlling the robot's wheels to move it towards the next waypoint
while staying within the designated path. Sensors like LIDAR or cameras continuously provide
feedback about the environment, allowing the robot to adjust its course if it encounters obstacles or
minor deviations.
5. Re-planning (if necessary): The environment might not be static. The image depicts a simulation,
but in real-world scenarios, unforeseen obstacles or changes might occur. The TurtleBot3's sensors
would detect these and trigger re-planning of the path if necessary. This ensures the robot can adapt
to dynamic situations and still reach its goal.
Software Frameworks: These complex tasks are often facilitated by software frameworks like Robot
Operating System (ROS). ROS provides tools and libraries that help developers create programs to manage
sensors, perform localization and path planning, and control the robot's movement.
The meticulously crafted configuration and launch files established a reliable simulation setup, accurately
reflecting the functionalities of the autonomous exploring and mapping robot. This environment facilitated
realistic testing, controlled experimentation, bug detection, and refinement. Despite encountering challenges
with hardware implementation, the successful simulation results served as a strong validation of the project's
core objectives and achievements. The simulation effectively demonstrated the robot's capability to perceive
its surroundings, localize itself, plan its path, and navigate autonomously within the simulated environment.
For the simplest usage, only pins 1-4 are required, which account for VCC, GND, and UART pins
(RXD/TXD). The other two pins can be explored in the datasheet for the LiDAR, for advanced users
interested in interrupts, I2C communication, and data readiness flags.
5.7.1 Connection with Raspberry Pi 4
The TF-Luna communicates with the Raspberry Pi via the Universal Asynchronous Receiver-Transmitter
(UART) serial port as shown in Figure 5.6. The port that we will be using is the mini UART, which
corresponds to GPIO pins 14/15 (physical pins 8/10). First, the port needs to be enabled via the boot
configuration file on the Raspberry Pi.
BEGIN
FUNCTION read_tfluna_data:
WHILE True:
SET bytes_to_read TO 9
ser.reset_input_buffer()
IF bytes_serial[0] == 0x59 AND bytes_serial[1] == 0x59:
FUNCTION set_samp_rate(samp_rate=100):
ser.write(samp_rate_packet)
RETURN
FUNCTION get_version:
ser.write(info_packet)
SET bytes_to_read TO 30
ser.reset_input_buffer()
IF bytes_data[0] == 0x5a:
RETURN
ELSE:
ser.write(info_packet)
FUNCTION set_baudrate(baud_indx=4):
SET baud_hex TO [[0x80, 0x25, 0x00], [0x00, 0x4b, 0x00], [0x00, 0x96, 0x00],
prev_ser.write(info_packet)
prev_ser.close()
IF ser_new.isOpen() == False:
ser_new.open()
SET bytes_to_read TO 8
ser_new.reset_input_buffer()
IF bytes_data[0] == 0x5a:
baud_hex[ii][2] == bytes_data[5]]
RETURN ser_new
ELSE:
ser_new.write(info_packet)
# Configurations
SET baudrates TO [9600, 19200, 38400, 57600, 115200, 230400, 460800, 921600]
SET prev_indx TO 4
IF prev_ser.isOpen() == False:
prev_ser.open()
SET baud_indx TO 4
CALL set_samp_rate(100)
CALL get_version()
FUNCTION plotter:
fig.subplots_adjust(wspace=0.05)
axs[0].set_xlabel('Sample')
axs[0].set_ylabel('Amplitude')
axs[0].set_xlim([0.0, plot_pts])
axs[0].set_ylim([0.0, 8.0])
axs[1].set_xlim([-1.0, 1.0])
axs[1].set_xticks([])
axs[1].set_ylim([1.0, 2**16])
axs[1].yaxis.tick_right()
axs[1].yaxis.set_label_position('right')
axs[1].set_yscale('log')
fig.canvas.draw()
fig.show()
FUNCTION plot_updater:
fig.canvas.restore_region(ax1_bgnd)
fig.canvas.restore_region(ax2_bgnd)
line1.set_ydata(dist_array)
bar1.set_height(strength)
bar1.set_color(plt.cm.Set1(0))
ELSE:
bar1.set_color(plt.cm.Set1(2))
axs[0].draw_artist(line1)
axs[1].draw_artist(bar1)
fig.canvas.blit(axs[0].bbox)
fig.canvas.blit(axs[1].bbox)
fig.canvas.flush_events()
SET dist_array TO []
WHILE True:
dist_array.append(distance)
dist_array = dist_array[1:]
ser.close()
END
In the above script, the serial port is accessed as serial0 at a baud rate of 115200. The serial port is first
opened before reading or writing any commands (ser.open()). Then, in the test script, 9 bytes are read and
the first two bytes are checked for the correct data format (0x59 and 0x59 are cited as the frame header in the
product manual). The code uses blitting to speed up the visualization update. The distance detection is
plotted on a time-series graph, while the signal strength is given in the form of a bar chart. This allows the
user to see whether a given object or scan routine is returning poor signal strength. If an error arises, the wiring
should be checked first.
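A minimal sketch of this frame handling with pyserial is shown below, assuming the sensor is connected to /dev/serial0 at 115200 baud; the 9-byte frame layout (0x59 0x59 header, then little-endian distance, signal strength, and temperature, and a checksum byte) follows the TF-Luna product manual.

# Minimal sketch of reading one TF-Luna measurement frame over UART with pyserial.
# Assumption: the sensor is wired to the Pi's mini UART and appears as /dev/serial0.
import serial

ser = serial.Serial('/dev/serial0', 115200, timeout=1)

def read_tfluna_frame():
    while True:
        if ser.read(1) == b'\x59' and ser.read(1) == b'\x59':   # frame header found
            payload = ser.read(7)                               # remaining 7 bytes
            if len(payload) < 7:
                continue                                        # timed out mid-frame
            distance_cm = payload[0] + (payload[1] << 8)
            strength = payload[2] + (payload[3] << 8)
            temperature_c = (payload[4] + (payload[5] << 8)) / 8.0 - 256.0
            return distance_cm, strength, temperature_c

if __name__ == '__main__':
    dist, strength, temp = read_tfluna_frame()
    print('distance: %d cm  strength: %d  temp: %.1f C' % (dist, strength, temp))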
5.8 Limitations
Range and Accuracy: The TF Luna LIDAR sensor has a limited range and lower resolution compared to
more advanced LIDAR systems. This restricted its ability to detect and accurately map distant objects or fine
details within the environment, resulting in incomplete or inaccurate map data.
Field of View: Although mounting the sensor on a servo motor increased its coverage, the field of view
remained constrained. The rotational speed and angle of the servo motor introduced additional variables that
could affect the consistency and reliability of the data points collected.
Sampling Rate: The sensor's sampling rate was not sufficient for capturing high-frequency environmental
changes, leading to lag in the real-time mapping process. Rapidly moving objects or dynamic changes in the
environment were not accurately represented in the generated maps.
Environmental Interference: The sensor's performance was susceptible to interference from ambient light
and reflective surfaces, which could distort the distance measurements and further degrade the quality of the
mapping output.
ROS Integration: The lack of an official ROS driver for the TF Luna sensor necessitated the development of
custom scripts and workarounds to interface the sensor with ROS, which proved to be challenging and
time-consuming.
Chapter 6
The final results demonstrate the effective application of G-Mapping utilizing a LIDAR laser scanner for 2D
mapping and simultaneous localization in diverse environments. By leveraging the Robot Operating System
(ROS), the research successfully implemented SLAM to localize the robot and construct an accurate map
(Figure 6.1 (b)) using laser scan data. The findings highlight the capability of G-Mapping to operate without
odometer inputs and to adapt to environments with variable landmark visibility.
6.2 Detailed Findings
As illustrated in Figure 6.1, a TurtleBot3 robot moving at a constant speed of 0.22 meters per second travels
various distances over different time intervals during its autonomous navigation. The graph shows that it
covered 2.2 meters in 10 seconds, with correspondingly longer distances over the longer intervals.
The map accuracy is calculated once the robot finishes mapping. Equation (1) is used to determine the
accuracy of the map based on the size of the real map layout, where x represents the total length of the map
created by the robot and y is the total length of the real map.
Map Accuracy = (x / y) × 100    (1)
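As a worked example with hypothetical values, if the robot-generated map measures x = 9.6 m and the real layout measures y = 10.0 m, then Map Accuracy = (9.6 / 10.0) × 100 = 96%.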
Trial 1
As shown in Table 6.1, in an autonomous mapping trial with the TurtleBot3 using Gmapping SLAM, the robot
successfully generated a map spanning 10.2 meters within 1 minute and 2 seconds. The map achieved a commendable
accuracy of 96%, reflecting its capability to accurately represent 96% of the actual environment. This
indicates that the mapped area closely matches the real-world layout, ensuring reliability for navigation and
autonomous operations. The efficient completion time underscores the robot's ability to swiftly gather and
process sensor data to construct a detailed map suitable for further autonomous tasks.
Particle Filter (number of particles): Trial 1 = 30, Trial 2 = 30, Trial 3 = 15
Table 6.2 outlines the parameters for three robot mapping trials with the TurtleBot3. The trials
experiment with different speeds (0.1 m/s in trial 1, increasing to 0.22 m/s in trial 3) to see how speed affects
mapping. They also adjust the map update frequency, with trial 1 updating the least frequently (every 5
seconds) and trial 3 updating most frequently (every 0.1 seconds). Finally, the number of particles used in a
localization algorithm varies (30 in trials 1 and 2, 15 in trial 3) to assess its impact on mapping accuracy and
efficiency. Overall, the experiment seems to be investigating how these factors influence the quality and
speed of robot-generated maps.
6.2.2 Map Navigation
Autonomous navigation for a robot from point A (Figure 6.3 (a)) to point B (Figure 6.3 (b)) to point C
(Figure 6.3 (c)) involves using SLAM (Simultaneous Localization and Mapping). Initially, the robot collects
data from sensors like LiDAR, cameras, and IMUs to perceive its environment. Using SLAM, it
simultaneously builds a map and localizes itself within that map. The robot then plans a path to the first
waypoint (point B) by evaluating the map and calculating a safe route, avoiding obstacles. It uses algorithms
such as A* or Dijkstra's for path planning and follows this path using a motion controller that adjusts its
movements based on real-time sensor feedback. Once it reaches point B, the process is repeated for
navigating to point C. Throughout its journey, the robot continuously updates its map and position to ensure
accurate navigation and obstacle avoidance.
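As an illustration of grid-based path planning, the following is a compact A* sketch over a 2D occupancy grid; it is not the planner used by the ROS navigation stack, only a demonstration of the underlying idea.

# Compact A* sketch on a 2D occupancy grid (illustrative only).
# grid[r][c] == 0 means free, 1 means obstacle; start/goal are (row, col) cells.
import heapq
import itertools

def astar(grid, start, goal):
    def h(a, b):                              # Manhattan distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    tie = itertools.count()                   # tie-breaker so the heap never compares cells
    open_set = [(h(start, goal), 0, next(tie), start, None)]
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, g, _, current, parent = heapq.heappop(open_set)
        if current in came_from:
            continue                          # already expanded with a better cost
        came_from[current] = parent
        if current == goal:                   # walk back through parents to build the path
            path = [current]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                new_g = g + 1
                if new_g < best_g.get(nxt, float('inf')):
                    best_g[nxt] = new_g
                    heapq.heappush(open_set, (new_g + h(nxt, goal), new_g, next(tie), nxt, current))
    return None                               # no path exists

# Example usage: path = astar(grid, (0, 0), (5, 7)) returns a list of grid cells (waypoints).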
Figure 6.3 (c): Map Navigation
Chapter 7
7.2 Contribution to Sustainable Development
The autonomous exploration and mapping robot contributes significantly to sustainable development across
multiple domains. In environmental monitoring, these robots provide critical data on biodiversity, pollution
levels, and habitat conditions without disturbing natural ecosystems. They can track wildlife, assess plant
health, and detect environmental changes, helping in the conservation of endangered species and the
restoration of habitats. In urban planning, the robot’s ability to inspect infrastructure, monitor green spaces,
and optimize resource use aids in the development of smart cities. It can inspect bridges, roads, and
buildings, identifying maintenance needs that prolong the lifespan of structures and ensure safety. By
mapping urban green spaces [16], the robot helps in planning parks and gardens that enhance urban living
conditions and promote mental and physical well-being.
Habitat Mapping: With advanced sensors and mapping technologies, the robot can create detailed maps of
different ecosystems. These maps provide essential data for the preservation and restoration of habitats,
identifying areas that require protection or rehabilitation. Habitat mapping supports sustainable land
management practices and helps mitigate the impacts of human activities on natural environments [16]. It
also aids in the planning of conservation areas and the monitoring of ecological changes over time. For
example, in wetland environments, the robot can gather data on water levels, vegetation health, and soil
conditions, which are crucial for maintaining these biodiverse habitats and preventing further degradation.
Pollution Detection: The robot can be equipped with sensors to detect pollutants in air, water, and soil. It
can identify the sources and extents of contamination, providing valuable information for cleanup efforts. By
continuously monitoring pollution levels, the robot can help in assessing the effectiveness of pollution
control measures and in identifying emerging environmental threats. This capability supports efforts to
maintain clean and healthy environments, essential for human health and biodiversity. For instance, in
industrial areas, the robot can monitor emissions and effluents, providing real-time data to ensure
compliance with environmental regulations and helping to pinpoint sources of pollution for targeted
remediation efforts.
Urban Green Spaces: The robot can map urban green spaces, contributing to the planning and maintenance
of parks and gardens. These green spaces are vital for enhancing urban living conditions, providing
recreational areas, improving air quality, and supporting biodiversity within cities. By ensuring that urban
green spaces are well-maintained and strategically planned, the robot helps create healthier and more livable
cities. For instance, the robot can survey parks to monitor tree health, soil quality, and water usage, ensuring
that these areas remain vibrant and accessible to the public, and contributing to urban resilience against
climate change.
Efficient Resource Use: By providing detailed maps and data, the robot aids in optimizing the use of
resources such as water, electricity, and waste management systems in smart cities. It can monitor and
analyze the efficiency of these systems, identifying opportunities for improvements and reductions in
resource consumption [16]. This optimization is key to developing sustainable urban environments that
minimize their ecological footprint and enhance the quality of life for residents. For example, the robot can
monitor water distribution networks to detect leaks and inefficiencies, ensuring that water resources are used
sustainably and reducing the overall demand on municipal water supplies.
Pest and Disease Monitoring: The robot can detect pests and diseases early, allowing for timely
interventions that minimize crop loss. Early detection reduces the need for chemical pesticides, promoting
more sustainable and eco-friendly farming practices. By ensuring that crops remain healthy and productive,
the robot supports food security and the sustainable development of the agricultural sector. For example, the
robot can identify signs of pest infestations or disease outbreaks in specific crop areas, enabling targeted
treatments that minimize the use of harmful chemicals and preserve the overall health of the agricultural
ecosystem.
Soil Health Monitoring: The robot can assess soil health, providing data that ensures sustainable soil
management practices. Healthy soil is crucial for long-term agricultural productivity, and the robot's ability
to monitor soil conditions helps maintain fertility and productivity. This contributes to the sustainability of
agricultural systems and the protection of this vital natural resource. For instance, the robot can analyze soil
samples for nutrient content, pH levels, and microbial activity, providing farmers with the information
needed to implement soil conservation techniques that enhance soil fertility and prevent erosion.
7.2.4 Disaster Management
Search and Rescue: In the aftermath of natural disasters, the robot can explore hazardous areas, locate
survivors, and map damage. This capability facilitates efficient rescue operations, reducing the risk to human
rescuers and increasing the chances of saving lives. The robot's ability to navigate and operate in dangerous
environments makes it an invaluable tool in disaster response efforts. For example, after an earthquake, the
robot can enter collapsed buildings to search for trapped individuals, using advanced sensors to detect signs
of life and communicate their locations to rescue teams, significantly improving the efficiency and safety of
rescue operations.
Disaster Risk Reduction: The robot can help in mapping flood plains, fault lines, and other high-risk areas.
By providing accurate and detailed maps of these areas, it contributes to better preparedness and risk
reduction strategies. This proactive approach to disaster management helps communities mitigate the
impacts of natural disasters and enhances their resilience to future events. For instance, the robot can
conduct surveys in areas prone to landslides, gathering data on soil stability, vegetation cover, and rainfall
patterns to inform early warning systems and land-use planning that reduce the risk of catastrophic events.
Educational Tool: The robot serves as a practical example of robotics and AI in sustainability, inspiring
students and researchers to develop further innovations in sustainable technologies. By demonstrating the
potential of autonomous systems to contribute to sustainable development, it encourages the next generation
of scientists and engineers to focus on sustainability challenges. For instance, the robot can be used in
educational settings to teach students about robotics, environmental science, and data analysis, providing
hands-on learning experiences that highlight the importance of technology in addressing environmental and
societal issues.
Resource Mapping: The robot can map potential sites for renewable energy installations, optimizing the
placement and efficiency of solar, wind, and hydroelectric power sources. By identifying the best locations
for these installations, the robot helps maximize the production of renewable energy and contributes to the
transition to a sustainable energy system. For instance, the robot can analyze geographic and environmental
data to determine optimal locations for wind turbines, taking into account factors such as wind patterns, land
use, and ecological impact, ensuring that renewable energy projects are both effective and sustainable.
Energy Efficiency: The use of autonomous robots can lead to more energy-efficient practices in various
industries. By optimizing processes and reducing waste, these robots contribute to overall sustainability. The
energy efficiency of the robots themselves, combined with their ability to improve the efficiency of other
systems [16], enhances their impact on reducing the carbon footprint. For instance, in manufacturing, the
robot can monitor production lines for inefficiencies and malfunctions, enabling real-time adjustments that
reduce energy consumption and minimize waste, contributing to a more sustainable and efficient industrial
process.
Autonomous exploration and mapping robots play a pivotal role in sustainable development across diverse
sectors, as highlighted in Table 7.1. These robots contribute significantly to environmental monitoring and
conservation by enabling comprehensive biodiversity assessment, habitat mapping with high precision, and
pollution detection at extremely low concentrations. In urban planning, they enhance infrastructure
inspection, optimize green space management, and improve resource efficiency in smart cities.
Table 7.1: Autonomous exploration and mapping robot role in sustainable development across diverse
sectors.
Domain | Contribution | Supporting Statistics
Agriculture and Food Security | Precision Farming | - Reduces the use of fertilizers and pesticides by up to 50% (FAO).
Agriculture and Food Security | Pest and Disease Monitoring | - Early detection can save up to 40% of crops that might otherwise be lost to pests and diseases (FAO).
Agriculture and Food Security | Soil Health Monitoring | - Maintaining healthy soil can improve agricultural productivity by 58% (FAO).
Disaster Management | Search and Rescue | - Timely search and rescue operations can increase survival rates by up to 50% in disaster scenarios (UNDRR).
Disaster Management | Disaster Risk Reduction | - Accurate risk mapping can reduce the impact of natural disasters by 30-40% through better preparedness and mitigation strategies (UNDRR).
Scientific Research and Education | Data Collection | - Autonomous data collection can increase the scope and accuracy of research by up to 70% (Nature).
Scientific Research and Education | Educational Tool | - Hands-on learning with advanced technologies can improve student engagement and understanding of STEM subjects by 50% (National Research Council).
Renewable Energy Management | Solar and Wind Farm Inspection | - Regular inspections can increase the efficiency of solar panels by up to 10% and reduce maintenance costs by 15-20% (IEA).
Renewable Energy Management | Resource Mapping | - Proper site selection can enhance the efficiency of renewable energy projects by up to 25% (IRENA).
Reducing Carbon Footprint | Efficient Operations | - Automation can reduce operational carbon emissions by up to 30% (EPA).
Reducing Carbon Footprint | Energy Efficiency | - Improved energy efficiency can reduce industrial energy consumption by up to 20% (DOE).
Moreover, in agriculture, robots support precision farming practices, monitor pests and diseases, and
enhance soil health management. They also excel in disaster management by aiding in search and rescue
operations and mitigating disaster risks through effective mapping. Additionally, robots facilitate scientific
research by collecting data from remote areas, while in education, they serve as engaging tools that foster
STEM learning. Furthermore, in renewable energy management and carbon footprint reduction, robots
improve efficiency in energy systems and industrial operations, thereby advancing sustainability efforts on
multiple fronts. These statistical insights underscore their crucial contributions to a more sustainable future.
Habitat Mapping: The precision of robot-generated maps supports effective habitat management strategies.
For example, a study by researchers at the University of Zurich demonstrated that LiDAR-equipped drones
can create detailed 3D models of forests, enhancing biodiversity assessments and conservation planning with
unprecedented accuracy (source: University of Zurich).
Pollution Detection: Robots equipped with advanced sensors play a vital role in monitoring pollution levels.
According to a report by the European Environment Agency, autonomous underwater vehicles (AUVs) have
been instrumental in detecting microplastic pollution in marine environments, aiding in the conservation of
aquatic ecosystems (source: European Environment Agency).
Urban Green Spaces: Robots contribute to the management of urban green spaces, enhancing biodiversity
and air quality. Research conducted by the University of California, Berkeley, found that robotic mowers
and trimmers can reduce greenhouse gas emissions associated with lawn maintenance by up to 30%,
promoting sustainable urban landscaping practices (source: University of California, Berkeley).
Efficient Resource Use: Autonomous systems optimize resource utilization in smart cities. A study
published in the Journal of Cleaner Production reported that robotic systems integrated into waste
management operations can reduce municipal solid waste generation by 15%, supporting sustainable urban
development by minimizing landfill pressures and enhancing recycling efforts (source: Journal of Cleaner
Production).
Pest and Disease Monitoring: Early detection systems in agriculture, enabled by autonomous robots,
mitigate crop losses and reduce reliance on chemical pesticides. Research from the International Food Policy
Research Institute (IFPRI) highlights that robotic sensors and AI-driven monitoring platforms can decrease
pesticide use by 40%, promoting eco-friendly farming practices and safeguarding agricultural ecosystems
(source: IFPRI).
Soil Health Monitoring: Autonomous robots contribute to sustainable soil management. A study published
in Nature Communications demonstrated that robotic systems equipped with soil sensors can enhance soil
fertility by up to 20%, optimizing nutrient management practices and promoting soil conservation in
agricultural landscapes (source: Nature Communications).
7.3.4 Disaster Management
Search and Rescue: Robots accelerate disaster response efforts. During natural disasters, such as
earthquakes and hurricanes, autonomous aerial drones can survey affected areas and locate survivors more
efficiently than traditional methods. Research by Stanford University indicates that drone-assisted search and
rescue missions can reduce response times by 50% and increase survival rates by 30%, illustrating their
critical role in disaster resilience (source: Stanford University).
Disaster Risk Reduction: Robotics aids in disaster risk assessment and mitigation. According to the World
Bank, robotic mapping technologies have contributed to a 25% reduction in disaster-related damages
globally by enabling better preparedness and early warning systems in high-risk areas (source: World Bank).
Educational Tool: Robotics serves as a transformative educational tool in STEM disciplines. Studies by the
National Science Foundation (NSF) indicate that robotics programs in schools increase student engagement
in science and technology by 40% and improve academic performance in related subjects, inspiring the next
generation of innovators in sustainable technology (source: NSF) [18].
Resource Mapping: Robots aid in site selection and planning for renewable energy projects. Research from
the European Commission's Joint Research Centre highlights that robotic surveys can improve the accuracy
of resource assessments for solar and wind farms by 20%, enabling better-informed decisions that maximize
energy generation and minimize environmental impacts (source: European Commission).
Energy Efficiency: Robotics enhances energy efficiency across industries. Research published in Energy
Policy suggests that robotic automation in industrial processes can improve energy efficiency by 15%,
lowering overall energy consumption and greenhouse gas emissions while optimizing production outputs
(source: Energy Policy).
As illustrated in Table 7.2, autonomous exploration and mapping robots exhibit a diverse range of
capabilities that significantly contribute to sustainable development across various sectors. In environmental
monitoring and conservation, these robots excel in biodiversity assessment, covering vast areas and
providing detailed data on species populations and movements, while achieving high mapping accuracy of
up to 10 cm for habitat mapping. They also play a crucial role in pollution detection, identifying
contaminants at extremely low concentrations, which aids in early mitigation efforts. In urban planning and
smart cities, robots enhance infrastructure inspection by assessing numerous structures weekly, thereby
reducing maintenance costs and improving public safety.
They increase the efficiency of managing urban green spaces, contributing to better air quality and
environmental health, and optimize resource use through precise management systems. In agriculture and
food security, robots bolster precision farming by significantly boosting crop yields and reducing resource
inputs, while also monitoring pests and diseases to minimize crop losses. For disaster management, these
robots facilitate rapid search and rescue operations and enhance disaster risk reduction through accurate
mapping of high-risk areas. In scientific research and education, they enable efficient data collection from
remote locations and serve as engaging educational tools, promoting STEM learning and inspiring future
innovations. Additionally, in renewable energy management, robots enhance the operational efficiency of
solar and wind farms and improve site selection for renewable energy installations, supporting the global
shift towards cleaner energy sources. Finally, by automating tasks and optimizing energy use in various
industries, robots help reduce carbon emissions and enhance energy efficiency, thus contributing to overall
sustainability efforts.
Chapter 8
8.1 Summary
8.1.1 Summary of Objectives and Achievements
This project aimed to develop an autonomous exploring and mapping robot using the TurtleBot3 platform,
implemented in both simulation and hardware. The primary objective was to design and validate a robust
SLAM (Simultaneous Localization and Mapping) system leveraging TF Luna LiDAR and Raspberry Pi 4,
ensuring reliable navigation and mapping capabilities in diverse environments.
The initiative stemmed from the growing need for autonomous robotic systems capable of independently
navigating and understanding their surroundings, which has profound implications in various fields such as
search and rescue, logistics, and smart home automation. By focusing on both simulation and hardware
implementations, this project sought to provide a comprehensive solution that could be tested and validated
in a controlled environment before being applied to real-world scenarios.
2. Simulation and Hardware: The dual approach of simulation and hardware implementation was a
cornerstone of this project. In the simulation phase, various scenarios were modeled to test the
robustness of the SLAM algorithms. This phase allowed for extensive testing without the constraints
and risks associated with hardware. Once the algorithms were optimized in the simulation, they were
transferred to the hardware setup, where the TurtleBot3 demonstrated consistent performance,
validating the accuracy and reliability of the simulation results.
3. Mapping Accuracy: A significant achievement of this project was the high level of mapping
accuracy obtained. Both in simulation and hardware, the maps generated by the SLAM system were
precise and detailed. This accuracy was critical in enabling the robot to navigate effectively, avoiding
obstacles and covering the exploration area comprehensively. The use of TF Luna LiDAR played a
pivotal role in achieving this level of precision, providing high-resolution distance measurements that
were crucial for detailed mapping.
4. Autonomous Navigation: The robot's ability to navigate autonomously was thoroughly tested and
validated. The TurtleBot3 was able to move through various environments, avoiding obstacles and
dynamically adjusting its path based on real-time SLAM data. This demonstrated the robustness of
the integration between the SLAM system and the TurtleBot3's navigation algorithms, highlighting
the system's potential for real-world applications.
5. Importance of Simulation: The simulation phase proved to be an invaluable part of the project. It
allowed for extensive testing and optimization of the SLAM algorithms in a controlled environment,
significantly reducing the risk of errors during hardware implementation. The insights gained during
simulation were crucial for refining the system and ensuring its reliability.
6. Sensor Reliability: The choice of sensors was a critical factor in the project's success. The LiDAR
sensor provided the high-resolution data necessary for accurate mapping and navigation. This project
underscored the importance of selecting reliable and high-quality sensors to achieve the desired
performance levels in autonomous robotic systems.
7. Robustness in Design: Designing a robust system capable of handling real-world variations was
essential. The challenges faced during the project highlighted the need for flexible and adaptable
algorithms that could perform consistently across different environments. This experience
emphasized the importance of building systems that are resilient to changes and capable of
maintaining high performance under varying conditions.
8.1.3 Challenges Faced
1. Sensor Calibration: One of the foremost challenges encountered during the project was the
calibration of the LiDAR sensor. Accurate calibration was essential to ensure that the distance
measurements were reliable, which directly impacted the mapping accuracy. Extensive efforts were
made to fine-tune the calibration process, involving repeated trials and adjustments to achieve
optimal sensor performance.
2. Hardware Integration: Integrating the various hardware components posed its own set of
challenges. Ensuring seamless communication between the Raspberry Pi 4, LiDAR, and TurtleBot3
required meticulous planning and execution. The physical assembly of the components, along with
the software integration, demanded a high level of precision to avoid any disruptions in the system's
operation.
8.2 Conclusion
This project successfully developed an autonomous exploring and mapping robot using the TurtleBot3
platform, implemented in both simulation and hardware. The SLAM system, powered by TF Luna LiDAR
and Raspberry Pi 4, proved effective in enabling reliable navigation and mapping. The comprehensive
approach of utilizing both simulation and hardware allowed for thorough testing and validation, ensuring the
system's robustness and accuracy.
The challenges encountered, such as sensor calibration, hardware integration, and environmental variations,
provided valuable insights into the complexities of developing autonomous robotic systems. These
experiences underscored the importance of meticulous planning, precise execution, and the need for
adaptable and resilient system designs.
Overall, this project has laid a solid foundation for future advancements in autonomous robotic systems. The
successful implementation of the SLAM system and the lessons learned along the way contribute to the
ongoing development of more advanced and reliable autonomous robots, with potential applications across
various fields and industries.
References
[1]. Hassan Umari and Shayok Mukhopadhyay: “Autonomous Robotic Exploration Based on Multiple
Rapidly-exploring Randomized Trees” in 2017 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS) September 24–28, 2017, Vancouver, BC, Canada.
[2]. Rapti Chaudhuri and Suman Deb: “LiDAR Integration with ROS for SLAM Mediated Autonomous
Path Exploration” in book: Advanced Computing and Intelligent Technologies, August 2022, pp. 225-
235.
[3]. Megalingam, R.K., Teja, C.R., Sreekanth, S., Raj, A.: ROS based autonomous indoor navigation
simulation using slam algorithm. Int. J. Pure Appl. Math. 118(7), 199–205 (2018).
[4]. A. Bircher, M. Kamel, K. Alexis, H. Oleynikova, and R. Siegwart, “Receding horizon “next-best-view”
planner for 3d exploration,” in 2016 IEEE International Conference on Robotics and Automation
(ICRA), May 2016, pp. 1462–1468.
[5]. S. Mukhopadhyay and F. Zhang, “A path planning approach to compute the smallest robust forward
invariant sets,” in In proceedings of the American Control Conference, June 2014, pp. 1845–1850.
[6]. B. Yamauchi, “A frontier-based approach for autonomous exploration,” in Proceedings of the IEEE
International Symposium on Computational Intelligence in Robotics and Automation (CIRA ’97).
Washington, DC, USA: IEEE Computer Society, July 1997, pp. 146–151.
[7]. S. M. Lavalle, “Rapidly-exploring random trees: A new tool for path planning,” Tech. Rep., 1998.
[8]. A. Franchi, L. Freda, G. Oriolo, and M. Vendittelli, “The sensor-based random graph method for
cooperative robot exploration,” IEEE/ASME Transactions on Mechatronics, vol. 14, no. 2, pp. 163–175,
April 2009.
[9]. Olalekan, A.F., Sagor, J.A., Hasan, M.H., Oluwatobi, A.S.: Comparison of two slam algorithms
provided by ROS (robot operating system). In: 2021 2nd International Conference for Emerging
Technology (INCET), pp. 1–5. IEEE (2021).
[10]. P. E. Hart, N. J. Nilsson, and B. Raphael, “A formal basis for the heuristic determination of minimum
cost paths,” IEEE Transactions on Systems Science and Cybernetics, vol. 4, no. 2, pp. 100–107, July
1968.
[11]. Pfrunder, A., Borges, P.V., Romero, A.R., Catt, G., Elfes, A.: Real-time autonomous ground vehicle
navigation in heterogeneous environments using a 3d LiDAR. In: 2017 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS), pp. 2601–2608. IEEE (2017).
[12]. H. Umari. (2016, Sept.) RRT exploration ROS package. Internet.
[13]. W. A. S. Norzam, H. Hawari, and K. Kamarudin, “Analysis of Mobile Robot Indoor Mapping using
GMapping Based SLAM with Different Parameter,” IOP Conference Series: Materials Science and
Engineering, vol. 705, p. 012037, 2019.
Gmapping based SLAM algorithm, in 2nd International Moratuwa Engineering Research
Conference (Sri Lanka: Moratuwa), pp. 403–408.
[15]. W. A. S. Norzam et al., “Analysis of Mobile Robot Indoor Mapping using GMapping Based SLAM with
Different Parameter,” IOP Conf. Ser.: Mater. Sci. Eng., vol. 705, p. 012037, 2019.
[16]. STEM Education for the Twenty-First Century.
[17]. Robotics, Education, and Sustainable Development Conference: Proceedings of the 2005 IEEE
International Conference on Robotics and Automation, ICRA 2005, April 18-22, 2005, Barcelona,
Spain.
[18]. Impact of industrial robots on environmental pollution: evidence from China.