Object Finder Robot
Team Members:
Hamza Sameeh Dwaik
Moayad Amjad Hrebat
Motaz Iyad Natsheh
Supervisors:
Dr. Amal Al-Dweik
Mr. Wael AL Takrouri
Hebron - Palestine
January-2023
Acknowledgment
In the name of "Allah", the most beneficent and merciful, who gave us strength and knowledge and helped us get through this project. To those who deserve our thanks the most, our parents: we are indebted to you for the rest of our lives for your unconditional love and support. We know that thanks are not enough, and there are not enough words to describe how thankful we are. To our families and friends, thank you for your endless encouragement all our lives, especially during the completion of this project. We would like to thank the supervisors of this project, Dr. Amal Al-Dweik and Mr. Wael AL Takrouri, for their help and advice during this project.
We also thank our faculty and professors at the College of Information Technology and Computer Engineering for their hard work and support for the students.
Abstract
We frequently spend a lot of time looking for objects whose location we do not remember, so technological assistance for users is needed. Some individuals find the process of looking for a specific object boring or annoying, and it is particularly difficult for people with special needs, or when there are several objects that are challenging for a person to locate at the same time. The solution we suggest is an intelligent mobile robot that interacts with a mobile application to detect objects according to the user's request, using the YOLO artificial intelligence algorithm and ROS. The purpose of this system is to find objects specified by the user and return the objects' coordinates within the generated map.
In this project, the robot was able to detect objects in the places where we deployed it. The speed of object detection was good, but the accuracy was lower. We were also able to calculate the distance between the robot and the object, but not with high accuracy.
Table of Contents

1 Introduction
  1.1 Overview
  1.2 Motivation
  1.3 Project Objectives
  1.4 Short Description Of The System
  1.5 Problem Statement
  1.6 List Of Requirements
    1.6.1 Functional Requirement
    1.6.2 Nonfunctional Requirement
  1.7 Overview Of The Rest Of Report Sections

2 Background
  2.1 Overview
  2.2 Theoretical Background
    2.2.1 YOLO (You Only Look Once) Algorithm
    2.2.2 ROS (Robot Operating System)
    2.2.3 Turtlebot2 Motion
    2.2.4 Mapping Algorithm
  2.3 Literature Review
  2.4 The System Components And Design Options
    2.4.1 Hardware Components and Options
    2.4.2 System Software Components

3 System Design
  3.1 Overview
  3.2 Detailed Description Of The System
  3.3 System Diagrams
    3.3.1 System Block Diagram
    3.3.2 Schematic Diagram
  3.4 Pseudo-Code
  3.5 Adjustments

4 System Implementation
  4.1 Overview
  4.2 Hardware Implementation
  4.3 Software Implementation
    4.3.1 Operating System
    4.3.2 Installing needed packages
    4.3.3 OpenCV implementation
    4.3.4 Object detection implementation
    4.3.5 YOLO Implementation
    4.3.6 ROS implementation
    4.3.7 ROS algorithms
  4.4 Mobile Application Implementation
    4.4.1 MQTT Protocol
    4.4.2 User Interface
  4.5 Implementation issues and challenges
Chapter 1
Introduction
1.1 Overview
In our project, we are going to develop a robot that interacts with a mobile application to find objects according to the user's selections. The purpose of this system is to search for objects specified by the user that are sometimes difficult for humans to find. The proposed system is intended to save time and effort.
This chapter presents a general idea of the project: overview, motivation and importance, objectives, a short description of the system, the problem statement, the list of functional and nonfunctional requirements, and an overview of the rest of the report's sections.
1.2 Motivation
One of the most important motivations for the project is to help people and spe-
cially those with special needs in particular to search for objects that exist in closed
or limited areas. It can be used for many different jobs and functions that may be
too boring, difficult or dangerous for a human to do.
Chapter 2
Background
2.1 Overview
This chapter introduces the theoretical background of our project, literature re-
view, the system components, and design options.
YOLO Methodology
First of all, the algorithm divides the image into N grids, each with an equal di-
mensional regions of SxS. Each of these N grids is responsible for the detection and
localization of the object it contains. These grids predict bounding box coordinates
relative to their cell coordinates as shown in figure 2.1 [4].
• Width (bw).
• Height (bh).
• Class (for example, person, car, etc.), represented by the letter c.
To illustrate the above properties, the figure below shows an example of a bounding box represented by a yellow outline.
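Since our implementation later runs YOLO through OpenCV (Section 4.3), the following is a minimal sketch of how these bounding-box attributes are read back from a pre-trained YOLO network using OpenCV's DNN module. The file names, input image, and thresholds are illustrative assumptions, not part of the cited description.

    import cv2
    import numpy as np

    # Load a pre-trained YOLO network (file names are placeholders).
    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

    img = cv2.imread("scene.jpg")
    h, w = img.shape[:2]

    # YOLO expects a square, normalized blob as input.
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confidences, class_ids = [], [], []
    for output in outputs:
        for det in output:
            # det = [bx, by, bw, bh, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > 0.5:
                # Coordinates are normalized relative to the image, as described above.
                bx, by, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(bx - bw / 2), int(by - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)
                class_ids.append(class_id)

    # Non-maximum suppression removes duplicate boxes for the same object.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    for i in np.array(keep).flatten():
        print(class_ids[i], confidences[i], boxes[i])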
ROS Methodology
ROS consists of a code and tools that help you run your project code and do the
required task. ROS is designed to be a loosely coupled system where a process is
called a node and every node should be responsible for one task. Nodes communi-
cate with each other using message passing via logical channels called topics. Each
node can send or get data from the other nodes using the publish/subscribe model.
Software in ROS is organized in packages. A package might contain ROS nodes, a
ROS-independent library, a dataset, configuration files, a third-party piece of soft-
ware, or anything else that logically constitutes a useful module. The goal of these
packages is to provide this useful functionality in an easy-to-consume manner so
that software can be easily reused. In general, ROS packages follow a ”Goldilocks”
principle: enough functionality to be useful, but not too much that the package is
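As a concrete illustration of the publish/subscribe model described above, here is a minimal sketch of two rospy nodes communicating over a topic; the topic name and message content are chosen for illustration only.

    import rospy
    from std_msgs.msg import String

    def talker():
        # Each node registers with the ROS master under a unique name.
        rospy.init_node('talker')
        # Publish String messages on the logical channel (topic) 'chatter'.
        pub = rospy.Publisher('chatter', String, queue_size=10)
        rate = rospy.Rate(1)  # 1 Hz
        while not rospy.is_shutdown():
            pub.publish(String(data='hello'))
            rate.sleep()

    def listener():
        # A second node subscribes to the same topic and receives the messages.
        rospy.init_node('listener')
        rospy.Subscriber('chatter', String, lambda msg: rospy.loginfo(msg.data))
        rospy.spin()

    if __name__ == '__main__':
        talker()  # run listener() in a separate process for the other node

Each function would normally live in its own node; ROS delivers the messages between them without the nodes knowing about each other directly, which is the loose coupling described above.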
ROS has multiple versions; currently there are two main versions that are supported [21]:
• ROS Kinetic Kame, whose official support ended in April 2021, although it can still be used.
2. ROS has great simulation tools, such as RViz and Gazebo, which enable simulated runs of the robot.
3. It can control multiple robots.
4. It does not take much space or resources.
5. It is an open-source project with a permissive license.
Gmapping
Gmapping is one method that can be used for simultaneous localization and mapping (SLAM) in robotics. In this approach, the robot constructs a map of its environment using sensors or other means, and this map can then be used to plan a path to the desired destination. Planning over such a map can generate paths that are optimal in some sense, such as the shortest or fastest path, or paths that minimize the risk of collision with obstacles. It can also be used to plan paths that avoid certain areas or follow specific routes.
1) A* Planning Algorithm
A* is a path-planning algorithm that determines the path from the current location of the robot to a specific goal point while avoiding obstacles, giving greater priority to the nodes that are closer to the goal with lower costs [15].
2) D* Planning Algorithm
The A* algorithm assumes that the entire environment is known, but there may be moving obstacles. To handle this, D* has been proposed: it aims to plan effective paths in unknown and dynamic environments [15].
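To make the planning idea concrete, here is a minimal sketch of A* on a 2D occupancy grid. The grid encoding (0 = free, 1 = obstacle), the Manhattan-distance heuristic, and the unit step cost are illustrative assumptions, not details of the cited works.

    import heapq

    def astar(grid, start, goal):
        """A* on a 2D grid: 0 = free cell, 1 = obstacle."""
        def h(cell):  # Manhattan-distance heuristic to the goal
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path)
        visited = set()
        while open_set:
            f, g, cell, path = heapq.heappop(open_set)
            if cell == goal:
                return path
            if cell in visited:
                continue
            visited.add(cell)
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                        and grid[nr][nc] == 0 and (nr, nc) not in visited):
                    # Expand cheaper nodes that are closer to the goal first.
                    heapq.heappush(open_set,
                                   (g + 1 + h((nr, nc)), g + 1,
                                    (nr, nc), path + [(nr, nc)]))
        return None  # no path found

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))  # detours around the blocked row

D* differs in that it repairs this plan incrementally when the map changes, instead of replanning from scratch.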
Mapping Methodology
Map generation is carried out by first generating the environment and importing the TurtleBot into Gazebo. The mapping process is implemented using the gmapping package. To use gmapping, the robot model must provide odometry: the use of data from motion sensors to estimate change in position over time. Odometry is used by some legged or wheeled robots to estimate their position relative to a starting location. This method is sensitive to errors, because velocity measurements are integrated over time to give position estimates. Rapid and accurate data collection, instrument calibration, and processing are required in most cases for odometry to be used effectively.
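As a minimal sketch of the integration step just described, and assuming simple dead reckoning from measured linear and angular velocities (the values below are illustrative), the pose estimate can be accumulated as follows:

    import math

    def integrate_odometry(pose, v, w, dt):
        """Integrate linear velocity v (m/s) and angular velocity w (rad/s)
        over a small time step dt to update the pose (x, y, theta)."""
        x, y, theta = pose
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
        return (x, y, theta)

    # Small errors in v and w accumulate over time, which is why raw
    # odometry drifts and benefits from calibration, as noted above.
    pose = (0.0, 0.0, 0.0)
    for _ in range(100):  # 100 steps of 0.1 s each
        pose = integrate_odometry(pose, v=0.2, w=0.1, dt=0.1)
    print(pose)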
The main idea of the project in [14] is to design a system for detecting and tracking objects based on their color. Its mechanism of action is to take pictures of the object continuously through a camera interfaced to a Raspberry Pi; when the object is detected, the robot tracks the target. In that project, the only object was a ball, and the object detection was not fast. In our project, there is more than one object to be detected, and the algorithms used make object detection faster.
The main idea of the second project is to design and implement a sanitizer spider robot using a Raspberry Pi, with several sensors working together with a camera. The camera captures images of objects suspected to be infected. The robot then moves within the affected area, measures the distance to objects such as a chair or a table, determines the desired object, and sterilizes it. The main similarity between our project and the "Sanitizer Spider Robot" is the use of some algorithms, most importantly the YOLO algorithm; there is also a similarity in some hardware components, such as the camera and some sensors.
The difference between our work and the listed previous works is presented in Table 2.1:
We have studied the possible options, compared them, and chosen the most suitable for our project, as presented in Table 2.2.
We chose the Raspberry Pi 3 Model B because it has the lowest price and is available in the local market; below is a more technical description of it.
Raspberry Pi 3
Depth Camera
We need a camera to capture what resides in front of the robot and to know what is moving in front of it.
Table 2.4: List of Depth Camera Options
We chose the Xbox 360 Kinect because it has an infrared projector, an infrared camera, and a color camera. It is a great imaging tool, even for robots: it enhances the range and autonomous nature of robots, has a lower cost, and offers better depth resolution.
Xbox Kinect
The Kinect is a depth camera. It contains three vital pieces that work together to detect your motion and create your physical image on the screen: an RGB color VGA video camera, a depth sensor, and a multi-array microphone. The camera detects the red, green, and blue color components as well as body-type and facial features, and the depth sensor can judge depth and distance to take photography to new levels. It uses the known speed of light to measure distance, effectively calculating the amount of time it takes for a reflected beam of light to return to the camera sensor. In our project, we will use the Kinect camera, which is also used for video games on the Xbox console [11].
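As a minimal sketch of reading depth from the Kinect, assuming the libfreenect Python wrapper (freenect) is installed, one raw frame can be grabbed and converted to an approximate metric distance. The raw-to-meters formula is a commonly cited community approximation, not a calibrated model.

    import freenect
    import numpy as np

    # Grab one raw 11-bit depth frame from the Kinect (sync interface).
    depth_raw, _ = freenect.sync_get_depth()

    # Commonly cited approximation for converting raw Kinect depth
    # values to meters (OpenKinect community formula).
    depth_m = 0.1236 * np.tan(depth_raw / 2842.5 + 1.1863)

    # Distance to whatever is at the center of the frame.
    h, w = depth_m.shape
    print("center distance: %.2f m" % depth_m[h // 2, w // 2])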
Mobile Robot
We need a robot base for the project, to link the components together so they can communicate with each other; it will be given instructions by the user through the mobile application. Table 2.5 presents the list of options for such a robot.
TurtleBot 2
TurtleBot 2 is the world's most popular low-cost, open-source robot for education and research. This second-edition TurtleBot is equipped with a powerful Kobuki robot base and trays for the installation of its components, as shown in Figure 2.4 below. All components have been seamlessly integrated to deliver an out-of-the-box development platform. This robot was officially proposed by Willow Garage for development on the operating system dedicated to robotics (ROS) [12]. The TurtleBot 2 has the following components [19]:
• Kobuki Base: a low-cost mobile research base designed for education and research on state-of-the-art robotics. Kobuki provides power supplies for an external computer as well as additional sensors and actuators such as wheel encoders, DC motors, and a motor driver. Its highly accurate odometry, amended by a factory-calibrated gyroscope, enables precise navigation. It includes a 4S1P 2200 mAh battery, a battery charger, and a USB communication cable. A minimal sketch of driving this base from ROS follows below.
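As a minimal sketch of commanding the Kobuki base from ROS, the following node publishes velocity commands as geometry_msgs/Twist messages. The topic name follows the common Kobuki convention, but it differs between setups (plain 'cmd_vel' is also widespread), so treat it as an assumption to verify.

    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node('drive_forward')
    # Kobuki conventionally listens on this topic; verify on your setup.
    pub = rospy.Publisher('/mobile_base/commands/velocity', Twist, queue_size=10)

    cmd = Twist()
    cmd.linear.x = 0.2   # move forward at 0.2 m/s
    cmd.angular.z = 0.0  # no rotation

    rate = rospy.Rate(10)  # the base expects a steady command stream
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()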
We need an algorithm for detecting objects; it should have high-quality specifications and be easy to program. Table 2.6 lists the object detection algorithm options.
Table 2.6: List of Object Detection Algorithm Options
OpenCV Library
OpenCV (Open Source Computer Vision Library) is a free, open-source library of programming functions aimed mainly at real-time computer vision; among other things, its DNN module can load and run pre-trained object detection networks such as YOLO.
TensorFlow
TensorFlow is a free and open-source software library for machine learning and
artificial intelligence. It can be used across a range of tasks but has a particular
focus on training and inference of deep neural networks. TensorFlow was developed
by the Google Brain team for internal Google use in research and production.
TensorFlow can be used in a wide variety of programming languages, most notably
Python, as well as JavaScript, C++, and Java. This flexibility lends itself to a
range of applications in many different sectors [20].
We chose the Flutter framework because it has high performance, is easy for beginners to program, and produces user interfaces that are easy to use, as shown in Table 2.7. Flutter also has a feature called hot reload: while your application is running, you can make changes to the code and apply them to the running application. No recompilation is needed, and when possible, the state of your application is kept intact.
Flutter Framework
Chapter 3
System Design
3.1 Overview
In this chapter, we explain the abstract block diagram of the system. Next, the detailed description of the system and the detailed design of each component, including its schematic diagram, are introduced. Finally, we explain the schematic diagram of the hardware components of the system.
TurtleBot will be used since it is a low-cost personal robot kit with an open-source hardware platform that has a mobile base and is supported by ROS, with localization, mapping, vision, and obstacle avoidance [12]. A depth camera is used: a special camera that can determine depth and distance to take imaging to new levels. It uses the known speed of light to measure distance, effectively calculating the amount of time it takes for a reflected beam of light to return to the camera sensor.
In our project, we will use the Xbox 360 Kinect camera. It contains three vital pieces that work together to detect your motion and create your physical image on the screen: an RGB color VGA video camera, a depth sensor, and a multi-array microphone.
3. LIDAR Sensor: It checks whether there is any object in front of the system within a range of about 4 meters over a 240° detection arc, then sends a signal to the Raspberry Pi to order the Xbox Kinect camera to capture an image.
5. Mobile Application: The user selects the object through the application, and the selection is sent as input. The result of the search for the object is displayed as output in the mobile application.
The In3 and In4 pins of the motor driver are connected to the GPIO23 and GPIO20 pins of the Raspberry Pi, and the Enable2 pin of the motor driver to the GPIO21 pin of the Raspberry Pi.
Figure 3.3 shows the connection between the Raspberry Pi and the Xbox Kinect camera.
Figure 3.3: Schematic diagram for Xbox Kinect Camera with Raspberry Pi
3.4 Pseudo-Code
3.5 Adjustments
When we started working on the project, we ran into some issues that forced us to make some changes:
1. Raspberry Pi 3 Model B
The Raspberry Pi 3 Model B, which we intended to use in the project, has been replaced by a laptop, because the Raspberry Pi 3 Model B has a rather weak processor and only one gigabyte of memory. When we ran the whole project, processor consumption was 100% and memory usage was over 80%, as Figure 3.6 shows, and consequently the system stopped responding. This is because the YOLO algorithm requires a fairly powerful processor and consumes a large part of the memory; the mapping algorithm also consumes part of the processor and memory, so the two algorithms cannot be run at the same time. We therefore replaced the Raspberry Pi.
2. URG-LIDAR Sensor
With the URG-LIDAR sensor, which we had intended to use in the project, we encountered problems programming the robot's obstacle avoidance, so we replaced it with the Xbox 360 Kinect camera, which contains an infrared laser projector to sense what is in front of it.
Chapter 4
System Implementation
4.1 Overview
This chapter covers the software and hardware implementation of the project, as well as the various components and tools needed to construct the robot.
2. We connected the Xbox 360 Kinect sensor to the laptop via its USB power-adapter cable, and connected the power supply to the Kobuki base (12 V / 5 A).
Figure 4.4 shows the Xbox 360 Kinect sensor connected to the laptop via the USB power-adapter cable, with the power supply connected to the Kobuki base (12 V / 5 A).
Gmapping
The map is drawn by the movement of the robot in a specific location, with distance sensors reading the distances between the robot and nearby obstacles; during the drawing, everything around the robot is discovered.
A. Inputs:
B. Outputs:
The map
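The gmapping node publishes the generated map as an occupancy grid on the /map topic. The following minimal sketch, written under that assumption, subscribes to the topic and prints the map's metadata, which is one way to confirm the output is being produced.

    import rospy
    from nav_msgs.msg import OccupancyGrid

    def on_map(msg):
        # The occupancy grid stores one value per cell: -1 unknown,
        # 0 free, 100 occupied; info carries resolution and origin.
        info = msg.info
        rospy.loginfo("map %dx%d cells, %.3f m/cell, origin (%.2f, %.2f)",
                      info.width, info.height, info.resolution,
                      info.origin.position.x, info.origin.position.y)

    rospy.init_node('map_listener')
    rospy.Subscriber('/map', OccupancyGrid, on_map)
    rospy.spin()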
Gmapping result:
dependencies:
flutter:
sdk: flutter
mqtt_client: ^9.3.1 #^7.2.1
provider: ^5.0.0 #^4.1.3+1
The mqtt_client library is a plugin responsible for exchanging data between the application and the robot, providing effective and easy communication.
The provider library is an easy-to-use package that is basically a wrapper around inherited widgets, making them easier to use and manage. It provides a state management technique used to manage a piece of data across the application [27].
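On the robot side, the object coordinates can be published over MQTT with a few lines of Python. This sketch uses the paho-mqtt library (1.x client API); the broker address and topic name are assumptions for illustration, and the app's mqtt_client package must subscribe to the same topic to receive the data.

    import json
    import paho.mqtt.client as mqtt

    # Broker address and topic are placeholders (public test broker shown).
    client = mqtt.Client()  # paho-mqtt 1.x style client
    client.connect("broker.hivemq.com", 1883)

    # Send the detected object's map coordinates as a JSON payload.
    coords = {"object": "chair", "x": 1.25, "y": -0.40}
    client.publish("object_finder/coordinates", json.dumps(coords))
    client.disconnect()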
OpenPainter({this.x_value, this.y_value}):
This class obtains the coordinates that were sent from the robot and draws these coordinates as points on the map created by the robot.
4.5 Implementation issues and challenges
• When running the ROS environment and the YOLO algorithm on Raspberry Pi OS, we faced a problem: the system stopped responding, because the ROS environment uses more than half of the CPU and the YOLO algorithm, when it runs, consumes a large amount of processing. To solve this, we replaced the Raspberry Pi 3 with a laptop; running the ROS environment with the YOLO algorithm was then much better.
• We faced a problem when using the Kinect camera both to identify the object with the YOLO algorithm and to draw the map with the gmapping algorithm: the camera port cannot be used by the two algorithms at the same time. To solve this, we used the Kinect camera for the gmapping algorithm to draw the map, and used the laptop's webcam for the YOLO algorithm to identify the object.
Chapter 5
Validation and Testing
5.1 Overview
This chapter explains the project component testing methodology and presents the outcomes of the system implementation.
We tested the robot's self-motion and checked whether it was moving correctly, and we wrote Python code for avoiding obstacles during movement; this code is listed in Appendix B.
Chapter 6
6.1 Conclusion
In this project, we have presented an approach to building a prototype intelligent mobile robot that draws a map as it moves. We created communication between the smartphone and the laptop using the MQTT protocol to send the objects' coordinates from the robot to the mobile application. The robot can also detect objects using the YOLO algorithm and localize a certain object within its environment, and we developed a mobile application to display the coordinates of objects on the map. It can be concluded that a higher object detection speed with the YOLO algorithm could be achieved by using a faster CPU or a GPU alongside the microcontroller.
Appendix A
Appendix B
avoid_obstacle.py script
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

class GoForwardAvoid():
    def __init__(self):
        rospy.init_node('nav_test', anonymous=False)
        rospy.on_shutdown(self.shutdown)
        # move_base plans a path to the goal and avoids obstacles on the way.
        self.move_base = actionlib.SimpleActionClient("move_base", MoveBaseAction)
        rospy.loginfo("wait for the action server to come up")
        self.move_base.wait_for_server(rospy.Duration(5))
        # Goal: 3 meters forward in the robot's own frame.
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'base_link'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = 3.0
        goal.target_pose.pose.orientation.w = 1.0
        self.move_base.send_goal(goal)
        # Allow up to 60 seconds for the base to reach the goal.
        success = self.move_base.wait_for_result(rospy.Duration(60))
        if not success:
            self.move_base.cancel_goal()
            rospy.loginfo("The base failed to move forward 3 meters")
        else:
            state = self.move_base.get_state()
            if state == GoalStatus.SUCCEEDED:
                rospy.loginfo("Hooray, the base moved 3 meters forward")

    def shutdown(self):
        # Stop the robot if the node is interrupted.
        rospy.loginfo("Stop")
        self.move_base.cancel_goal()

if __name__ == '__main__':
    try:
        GoForwardAvoid()
    except rospy.ROSInterruptException:
        rospy.loginfo("Exception thrown")
Drawing coordinates on the map: Ganvas.dart script
References

[2] "An article on the average number of days a person spends searching for a particular object," May 2017. [Online]. Available: https://www.prnewswire.com/news-releases/lost-and-found-the-averageamerican-spends-25-days-each-year-looking-for-lost-items-collectively-costing-us-households-27-billionannually-in-replacement-costs-300449305.html/

[5] "Introduction to YOLO algorithm for object detection," Section. [Online]. Available: https://www.section.io/engineering-education/introduction-to-yolo-algorithm-for-object-detection/. [Accessed: 21-Dec-2022].

[7] "An introduction to robot operating system (ROS) - technical articles," All About Circuits. [Online]. Available: https://www.allaboutcircuits.com/technical-articles/an-introduction-to-robot-operating-system-ros/. [Accessed: 11-Dec-2022].

[11] "How does the Xbox Kinect work," How It Works: Xbox Kinect, Jameco. [Online]. Available: https://www.jameco.com/Jameco/workshop/howitworks/xboxkinect.html. [Accessed: 10-Nov-2022].

[12] "A 'Getting started' guide for developers interested in robotics," Learn TurtleBot and ROS. [Online]. Available: https://learn.turtlebot.com/. [Accessed: 5-Jun-2022].

[13] "Build apps for any screen," Flutter. [Online]. Available: https://flutter.dev/. [Accessed: 30-Dec-2022].

[14] O. Daas and S. Shehada, "Object detection and tracking system," supervised by Dr. Ashraf Armoush and Dr. Emad Natsheh, June 2017. [Online]. Available: https://repository.najah.edu/handle/20.500.11888/14334?show=full

[15] A. Abu Ayyash and I. Warasna, "Autonomous wheelchair project," supervised by Dr. Mohammad Aldesht, June 2021. [Online]. Available: https://scholar.ppu.edu/handle/123456789/38/browse?type=author&value=abu+ayyash%2C+Akram

[18] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, "ROS: an open-source Robot Operating System," May 2009.

[19] "Online store for robotic products supported by ROS," ROS Components. [Online]. Available: https://www.roscomponents.com/en/. [Accessed: 27-Dec-2022].

[26] C. Bernstein, K. Brush, and A. S. Gillis, "What is MQTT and how does it work?," IoT Agenda, 27-Jan-2021. [Online]. Available: https://www.techtarget.com/iotagenda/definition/MQTT-MQ-Telemetry-Transport. [Accessed: 27-Dec-2022].