Navigation of a TurtleBot using hand gestures to find target locations marked with AR codes and/or to find a specific person using face recognition
- Please visit http://abhipatil.me/portfolio/tbot_slam/
The following sections describe the contents of this package and how to use them.
The following hardware is required for complete execution of the project:
- TurtleBot 2 (with Kobuki base)
- Kinect (mounted on TurtleBot)
- A computer with a webcam, running Ubuntu 14.04 and ROS Indigo (mounted on / connected to the TurtleBot)
- A second computer running Ubuntu 14.04 and ROS Indigo, used for visualization (Rviz) and the hand gesture API
- A second depth camera (ASUS Xtion Pro or Kinect) is preferred for hand gesture recognition; it connects to the second computer
- Printed AR codes (IDs 2 to 5) that can be placed anywhere around the TurtleBot
The following needs to be set up in order to run all the nodes:
- TurtleBot setup
- TurtleBot networking setup
- Second depth camera setup (ASUS Xtion Pro camera) - Please edit the `camera_id` parameter value in `asus_cam.launch` with the appropriate value for your camera. To find your `camera_id` value, launch the OpenNI2 driver and look for `device_id`:

        roslaunch openni2_launch openni2.launch
The following packages need to be installed:
- `rtabmap_ros` - the RTAB-Map package
- `openni2_launch` - required if using the ASUS Xtion Pro for hand gesture recognition
- `freenect_launch` - required for `3dsensor.launch` with TurtleBot navigation
- `ar_track_alvar` - to recognize AR code tags and move the TurtleBot towards them
This package consists of the following nodes:
Navigation:
- `move_to_pose.py` - this node moves the TurtleBot to a specific pose, using the `MoveBaseAction` and `MoveBaseGoal` messages to do so (a minimal sketch of such an action client follows this list)
- `run_tbot_routine.py` - the main node that performs the entire routine, combining the various nodes as outlined in the overview section; it subscribes to the following topics:
  a. `ar_pose_marker` - to determine the ID and pose estimate of an AR code
  b. `num_fingers` - the number of fingers detected from hand gestures
  c. `face_names` - the names of people detected in face recognition mode
  d. `odom` - required to know the robot's current odometry and perform odometry correction (implementation in progress)
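For reference, the sketch below shows a minimal `MoveBaseAction` client in the spirit of `move_to_pose.py`. It is a hypothetical illustration, not the package's actual code; the node name, function, and goal values are assumptions.

```python
#!/usr/bin/env python
# Hypothetical sketch of a move_base action client (illustrative only).
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def move_to(x, y, frame='map'):
    # Connect to the move_base action server (started on the TurtleBot side)
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    # Fill in a goal pose in the given frame
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # identity orientation

    # Send the goal and block until the robot reaches it (or fails)
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('move_to_pose_sketch')
    move_to(1.0, 0.0)
```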
Hand Gesture Recognition:
- `fingers_recog.py` - this node takes an input image and outputs an image annotated with the detected number of fingers (a sketch of one common finger-counting approach follows)
- `get_hand_gestures.py` - this node subscribes to the depth image `/asus/depth/image_raw`, processes it using `fingers_recog.py`, and publishes the detected number of fingers on the `num_fingers` topic. It also displays an image window showing the depth feed with the hand and the detected number of fingers.
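The actual finger-counting logic lives in `fingers_recog.py`; the sketch below shows one common OpenCV approach (threshold the segmented depth image, take the largest contour, count the convexity defects between fingers). It is written against OpenCV 2.4 as shipped with Ubuntu 14.04, and the threshold values and preprocessing are assumptions that may differ from the package's implementation.

```python
# Hypothetical finger-counting sketch (OpenCV 2.4); not the package's code.
import cv2

def count_fingers(depth_8u):
    """depth_8u: 8-bit single-channel image in which the hand is bright."""
    # Binarize; assumes the hand has already been segmented from the depth feed
    _, mask = cv2.threshold(depth_8u, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)      # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    if hull is None or len(hull) < 3:
        return 0
    defects = cv2.convexityDefects(hand, hull)     # valleys between fingers
    if defects is None:
        return 0
    # Each sufficiently deep defect separates two extended fingers;
    # defect depth is stored as a fixed-point value (units of 1/256 pixel).
    valleys = sum(1 for i in range(defects.shape[0])
                  if defects[i, 0, 3] / 256.0 > 20.0)
    return valleys + 1 if valleys else 0
```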
Face Recognition:
- `train_faces.py` - this node subscribes to an RGB image stream from the webcam, detects faces, captures them for training (using the Fisherfaces algorithm), and saves the trained data in an XML file for use in face recognition
- `face_recog.py` - this node subscribes to the RGB image stream from the webcam, loads the trained data file from above, and performs face recognition (a sketch of this workflow follows the list)
- `gui_face.py` - this node launches a simple GUI that makes it easier for users to enter their name, capture their faces, train the data, and finally run the recognition API
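For orientation, the sketch below shows the bare Fisherfaces train/predict workflow with OpenCV 2.4; the package's nodes add face detection, capture, the GUI, and the ROS plumbing on top of this. The file name and image size are assumptions.

```python
# Hypothetical Fisherfaces sketch (OpenCV 2.4); not the package's code.
import cv2
import numpy as np

SIZE = (100, 100)  # Fisherfaces requires all samples to have the same size

def train(faces, labels, out_file='fisher_faces.xml'):
    """faces: list of grayscale face images; labels: one int id per image.
    Fisherfaces needs at least two distinct labels (people) to train."""
    model = cv2.createFisherFaceRecognizer()
    model.train([cv2.resize(f, SIZE) for f in faces], np.array(labels))
    model.save(out_file)  # trained data stored as XML

def recognize(face, model_file='fisher_faces.xml'):
    model = cv2.createFisherFaceRecognizer()
    model.load(model_file)
    label, confidence = model.predict(cv2.resize(face, SIZE))
    return label, confidence
```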
This package consists of the following launch files:
- `move_base_rtabmap.launch` - (to be launched on the TurtleBot computer) this file:
  a. Launches `minimal.launch` from the `turtlebot_bringup` package
  b. Runs the `move_base` node for navigation
  c. Runs the `rtabmap` node
  d. Launches `alvar.launch` for AR code detection
  e. Runs the `usb_cam` node for face recognition
- `tbot_routine.launch` - (to be launched on the second computer) this file:
  a. Launches `tbot_routine_rviz.launch` and runs the `rviz` node, opening an Rviz visualization window
  b. Launches `asus_cam.launch`, which launches `openni2.launch` with your custom `camera_id`
  c. Runs the `get_hand_gestures` node
1. Turn on the TurtleBot and ensure that networking is set up correctly.

2. Connect the ASUS Xtion Pro to your 'second' computer for hand gesture recognition.

3. Source the TurtleBot workspace. For example, if your workspace is called `tbot_ws`, enter in the command line:

        source ~/tbot_ws/devel/setup.bash

4. On the TurtleBot computer, run:

        roslaunch tbotnav move_base_rtabmap.launch

5. On your 'second' computer, run:

        roslaunch tbotnav tbot_routine.launch

6. On your 'second' computer, in another terminal window, run:

        rosrun tbotnav run_tbot_routine.py

   (An optional topic sanity check follows these steps.)

7. Follow the instructions on the window launched in step (6).
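Optionally, before step 6, you can sanity-check that the topics `run_tbot_routine.py` subscribes to are actually being published. The helper below is hypothetical (not part of the package) and assumes the topic names listed in the nodes section, with no extra namespacing.

```python
#!/usr/bin/env python
# Hypothetical helper: list which of the expected topics are live.
import rospy

EXPECTED = ['/ar_pose_marker', '/num_fingers', '/face_names', '/odom']

if __name__ == '__main__':
    rospy.init_node('check_tbotnav_topics')
    # rospy.get_published_topics() returns [topic_name, topic_type] pairs
    published = [name for name, _ in rospy.get_published_topics()]
    for topic in EXPECTED:
        print('%-18s %s' % (topic, 'OK' if topic in published else 'MISSING'))
```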
Possible future improvements:
- Object tracking: Replace AR code tracking and get the TurtleBot to find specific objects in the environment
- RTAB-Map & beyond: Explore the capabilities of RTAB-Map and RGB-D SLAM to make the navigation more robust
- Simple is beautiful: Improve the overall execution of the project, making it simpler, easier, and more interactive for the user
This project was completed as part of the MS in Robotics (MSR) program at Northwestern University.