Selection of Obstacle Avoidance Behaviors Based On Visual and Ultrasonic Sensors For Quadruped Robots
1. Introduction
Robots are indispensable today for improving process efficiency and saving labor in the industrial and service sectors. Their importance has also been recognized for work in critical environments, such as space, the ocean floor, and power plants, as well as in fields such as clinical medicine and hazard prevention. For this reason, a large number of robots have been developed, and researchers continue to design robots with greater capabilities to perform more challenging and comprehensive tasks (Hirose et al., 1986; Ooka et al., 1986; Cruse et al., 1994; Chen et al., 2002a; Habib, 2003a). There are many ways for a robot to move across a solid surface; wheels, crawlers, and legs are the common options among existing robots. The application fields of such robots are naturally restricted by the condition of the ground. Wheeled mobile robots are mechanically simple, easy to construct and control, generally dynamically stable, and ideal for operation on hard, level surfaces. When the surface is rough and has projections and
depressions larger than the wheel diameter, or when the surface is soft, resistance to movement increases drastically and the robots largely lose their function as transport machines, which leads to poor performance. Crawler-type locomotion mechanisms can traverse rougher ground than wheels and can move over different terrains, but they are hard to control and dead reckoning is difficult to realize. For good mobility over uneven and rough terrain, a legged robot is an attractive solution, because legged locomotion is mechanically superior to wheeled or tracked locomotion over a variety of soil conditions and is certainly superior for crossing obstacles. The path of a legged machine can be (partially) decoupled from the sequence of footholds, allowing a higher degree of mobility. This is especially useful in narrow surroundings or on terrain with discrete footholds (Raibert, 1986; Hirose, 2001).
However, creating and controlling a legged machine that is powerful enough, yet still light enough, is very difficult. Legged robots are usually slower and have a lower load/power ratio than wheeled robots. Autonomous legged robots also have distinct control issues that must be addressed. These problems are amplified when the robot is small, with an on-board controller that is purposely kept simple to accommodate weight and expense restrictions. The kinematics and dynamics of legged robots are nonlinear, while robot parameters, such as the center-of-mass position and the amount of payload, are not known exactly and might also
vary (Nishikawa et al., 1998). In addition, it is difficult to estimate the states of the system (Pugh et al., 1990). The system may be unstable without control, and the goal of keeping balance is difficult to decompose into actuator commands. A legged system has many degrees of freedom, through which high motion performance and the ability to adapt to irregular terrain can be demonstrated. To allow completely decoupled motion over irregular terrain, at least three degrees of freedom per leg are required. Two joints are enough to place the foot in any desired position, but with a third joint the robot can climb over much larger obstacles relative to its size and can also climb a slippery slope that a two-joint leg cannot manage. However, this results in 12 actuators for a four-legged robot, which increases weight and control complexity compared to the six actuators of a traditional industrial manipulator (Waldron et al., 1984). Contact forces, in general, only allow pushing the feet into the surface, not pulling on it. This directly limits the total downward acceleration that can be applied to a walking machine. These issues motivate an investigation of the technical problems involved in realizing a robot that uses legs to navigate in difficult, partially unstructured, and unknown environments.
Navigating and avoiding obstacles in real-time and in real environments is a challenging problem for mobile robots in general, and for legged robots in particular. There is a large body of work devoted to the navigation of wheel-based mobile robots; some common approaches are odometry, inertial navigation, and landmark navigation. The navigability of an autonomous multi-legged system is a crucial element of its overall capabilities (Go et al., 2006). Biological systems have a tightly integrated action-perception cycle. Hence, for walking robots to realize their full potential, distal environment sensing must be tightly integrated with the walking cycle. Distal sensing is crucial for anticipatory gait adjustment to accommodate varying terrain, and close coupling of the visual and locomotor cycles can lead to rapid, adaptive adjustment of the robot (Lewis, 2002). The problem is even more difficult when the robot is unable to build accurate global models of the obstacles in its environment; determining an optimal navigation policy without this information can be difficult or impossible. A legged mobile robot is a free-roving collection of functions primarily designed to reach a target location. Equipping robots with more sensors increases the quantity and reliability of the information the robot can extract from its environment to support intelligent behavior (Ferrell, 1994). To facilitate flexible obstacle detection and avoidance techniques, it is necessary to acquire three-dimensional (3D) information about the surrounding environment. Generally, 3D information is acquired through external sensors, such as binocular cameras, ultrasonic sensors (Ohya et al., 1997), and laser range finders. However, analyzing 3D information from a binocular camera carries a high computational cost, because two frames from two cameras must be processed (Okada et al., 1999; Okada et al., 2003). In addition, although an ultrasonic sensor can accurately measure the distance to an object, determining the azimuth with it is difficult. Therefore, building a robust real-time obstacle avoidance system for a robot using vision data remains a challenging task.
of freedom link-wire mechanism and a rotating mechanism that rotates this planar mechanism; hence, the leg has 3 DOF. One characteristic of this leg is the use of a wire-and-pulley driving system within the leg. The feet of TITAN VIII can also be used as wheels in order to achieve faster motion on flat surfaces. TITAN VIII walks with its legs jutting out to each side; this is its standard walking posture, and in this posture good energy efficiency can be achieved (Arikawa & Hirose, 1996; Hirose & Arikawa, 2000). An ART-based fuzzy controller for the adaptive navigation of a quadruped robot has been developed (Chen et al., 2002b), and different types of sensors have subsequently been integrated with the robot to support its navigation (Yamaguchi et al., 2002a; Yamaguchi et al., 2002b). Visual and ultrasonic sensors have been integrated with the quadruped robot; their aim is to detect obstacles along the path of the robot and to acquire 3D information about them. The first sensor is a USB camera, fixed at the front of the robot body (see Figure 1(b)). In addition, three ultrasonic sensors have been used, configured at the tips of the front legs (see Figure 1(c)). The obstacle is roughly measured by processing the image acquired through the USB camera, while the ultrasonic sensors complement the visual information about the obstacle and support the selection of the suitable actions at the right time. To facilitate this process, a set of behavioral actions has been decided, designed, and implemented. Currently, the main actions in the set are: default, detour, striding, and climbing-over. Thus, fusing information from different and multiple sensors, used separately according to the situation, provides the information necessary for obstacle avoidance and supports the decision to select the suitable set of actions to avoid obstacles in real-time.
Figure 2. Top view of the camera geometry: the camera acquires an image of the obstacle at distance L; the acquisition range Wc, the projection angle α, the obstacle width W, its width w in the image, and the image coordinates x, y are indicated.
The obstacle width W is calculated by using parameters in the image, such that

W = (Wc / X) w    (1)

where

Wc = 2 L tan(α/2)    (2)
In an exploratory experiment, the acquisition range Wc was found to be 220 [mm] when the distance L was set to 300 [mm]; accordingly, the projection angle α was set to 40 [deg].
Next, parameters in a vertical direction are defined as listed in Table 3 and the
corresponding side view is shown in Fig. 3. The obstacle height is calculated by using
parameters defined for the vertical direction, such that
H = (2 Hc / Y) h    (3)
where Hc is given by

Hc = L tan(β/2)    (4)

and L1 is given by

L1 = Ic / tan(β/2)    (5)
In an exploratory experiment, the acquisition range Hc was found to be 80 [mm] when the distance L was set to 300 [mm]; accordingly, the projection angle β was set to 30 [deg].
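As a consistency check, solving Eq. (2) for the projection angle gives α = 2 arctan(Wc / 2L) = 2 arctan(220/600) ≈ 40 [deg], and solving Eq. (4) gives β = 2 arctan(Hc / L) = 2 arctan(80/300) ≈ 30 [deg], in agreement with the values quoted above.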
Figure 3. Side view of the camera geometry: the acquisition range Hc, the projection angle β, the obstacle height H, and the distances Ic, L1, and L2 are indicated.
The obstacle depth Zo is obtained as

Zo = (S1 / S0 − 1) L    (6)
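To make the use of Eqs. (1) to (6) concrete, the following short Python sketch evaluates the obstacle width, height, and depth from image measurements. It is only an illustration of the formulas above; the function name, the assumed image size X × Y, and the example pixel values are assumptions and are not taken from the chapter.

```python
import math

def obstacle_dimensions(w_px, h_px, s0, s1,
                        L=300.0, X=320.0, Y=240.0,
                        alpha_deg=40.0, beta_deg=30.0):
    """Rough obstacle width W, height H and depth Zo [mm] from image measurements.

    w_px, h_px : obstacle width and height measured in the image [pixels]
    s0, s1     : image measurements S0 and S1 used in the depth estimate of Eq. (6)
    L          : distance from the camera to the obstacle [mm]
    X, Y       : image width and height [pixels] (assumed values, not from the chapter)
    """
    Wc = 2.0 * L * math.tan(math.radians(alpha_deg) / 2.0)   # Eq. (2)
    Hc = L * math.tan(math.radians(beta_deg) / 2.0)          # Eq. (4)
    W = (Wc / X) * w_px                                      # Eq. (1)
    H = (2.0 * Hc / Y) * h_px                                # Eq. (3)
    Zo = (s1 / s0 - 1.0) * L                                 # Eq. (6)
    return W, H, Zo

# Example with assumed values: a 100 x 60 pixel obstacle region at L = 300 mm.
print(obstacle_dimensions(w_px=100, h_px=60, s0=1.0, s1=1.2))
```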
Figure 4. Vanishing point and vanishing line in the image, with the image measurements S0 and S1 and the obstacle depth Z0 used for the depth estimate.

Figure 5. Flow of the image processing: image acquisition, shade image, threshold, and binarization image.

Figure 6. Case of one-surface detection: the obstacle apexes P1 and P2 in the acquired image.

(1) Case of one surface detection
When the acquired image includes only the front surface, as shown in Fig. 6, the width w and the height h of the obstacle front surface are given by
w = x2 − x1 (7)
h = y2 − y1 (8)
where the image point (x1, y1) is for the apex P1 and similarly (x2, y2) is for the apex P2.
(2) Case of two surface detection
When the acquired image includes the front and top surfaces as shown in Fig. 7, the width
w and the height h of the obstacle front surface are given by
w = x3 − x1 (9)
h = y1 − y4 (10)
Figure 7. Case of two-surface detection: the obstacle apexes P1–P6 in the acquired image (two mirror-image configurations).
w = x4 − x1 (11)
h = y1 − y4 (12)
using the image coordinates of apexes P1 and P4. In this case, the height of the rear surface is the distance between the points P2 and P3, and the width of the rear surface is the distance between the points P2 and P5.
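The apex-difference computations of Eqs. (7) to (12) can be collected into a single small routine. The sketch below is illustrative only; the apex dictionary layout and the example coordinates are assumptions.

```python
def front_surface_size(apexes, num_surfaces):
    """Width w and height h of the obstacle front surface from detected apexes.

    apexes       : dict mapping apex names ('P1', 'P2', ...) to (x, y) image coordinates
    num_surfaces : number of obstacle surfaces detected in the image
    """
    if num_surfaces == 1:
        w = apexes['P2'][0] - apexes['P1'][0]    # Eq. (7)
        h = apexes['P2'][1] - apexes['P1'][1]    # Eq. (8)
    elif num_surfaces == 2:
        w = apexes['P3'][0] - apexes['P1'][0]    # Eq. (9)
        h = apexes['P1'][1] - apexes['P4'][1]    # Eq. (10)
    else:
        w = apexes['P4'][0] - apexes['P1'][0]    # Eq. (11)
        h = apexes['P1'][1] - apexes['P4'][1]    # Eq. (12)
    return w, h

# Example (assumed coordinates): two surfaces detected.
print(front_surface_size({'P1': (40, 180), 'P3': (140, 180), 'P4': (40, 100)}, 2))
```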
Figure 8. Obstacle apexes P1–P6 in the acquired image (two mirror-image configurations; image axes x, y, with the image width marked 0, X, 2X).

Figure 9. Obstacle apexes P1–P6 in the acquired image when the obstacle is observed from the left side and from the right side (image axes x, y, with the image width marked 0, X, 2X).
h = y1 − y6 (14)
In this case, the vanishing point V is the point at which the straight line passing through the points P1 and P3 intersects the straight line passing through the points P2 and P6. In this situation, the width of the rear surface of the obstacle is defined as the distance between the point P2 and the intersection point at which the line through P2, parallel to the line passing through the points P4 and P6, intersects the straight line passing through the points V and P4. The height of the rear surface is the distance between the points P2 and P3.
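The construction described above reduces to intersecting straight lines defined by pairs of image points. The following sketch shows one possible reading of it in Python; the helper names and the exact interpretation of the parallel line through P2 are assumptions, not the authors' implementation.

```python
import math

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4 (2D points)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None                              # lines are (nearly) parallel
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def rear_surface_width(P):
    """Rear-surface width from the apex dictionary P, following the construction above."""
    V = line_intersection(P['P1'], P['P3'], P['P2'], P['P6'])    # vanishing point
    if V is None:
        return None
    # Second point defining the line through P2 parallel to the line P4-P6.
    Q = (P['P2'][0] + P['P6'][0] - P['P4'][0],
         P['P2'][1] + P['P6'][1] - P['P4'][1])
    R = line_intersection(P['P2'], Q, V, P['P4'])                # intersection with line V-P4
    if R is None:
        return None
    return math.hypot(R[0] - P['P2'][0], R[1] - P['P2'][1])
```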
4. Design of Actions
Primitive actions with different levels of abstraction have been designed and implemented so that the behavior of the robot can be formulated as a combination of these actions. In general, an action set can be described in the following form,
A = {Ai | i = 1, 2, …, n}    (15)
where Ai denotes the i-th action and n denotes the number of actions. The action Ai consists of a series of parameters that move the robot, such as the translational distances Δxi, Δyi, and Δzi in each direction and, for each leg j,

Δzij = [Δuzij  Δdzij]

where Δuzij denotes the upward distance of the j-th leg when it changes from a support leg to a swing leg, and Δdzij denotes the downward distance of the j-th leg when it changes from a swing leg to a support leg.
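As an illustration, one possible way to represent the parameters of an action Ai for a four-legged robot is sketched below; the class layout and field names are hypothetical and are not the authors' data structure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Action:
    """Parameters of one primitive action A_i (distances in mm; illustrative layout)."""
    name: str
    dx: float = 0.0      # translational distance in the x direction
    dy: float = 0.0      # translational distance in the y direction
    dz: float = 0.0      # translational distance in the z direction
    # (upward, downward) z displacements of each of the four legs: Δuz_ij and Δdz_ij
    leg_dz: List[Tuple[float, float]] = field(default_factory=lambda: [(0.0, 0.0)] * 4)

# Example: a hypothetical striding action that lifts every swing leg higher than usual.
striding = Action(name="A2", dy=200.0, leg_dz=[(120.0, 120.0)] * 4)
```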
The following subsections describe the core actions, which enable the robot to avoid obstacles under different circumstances.
A1 = {c1} (18)
5. Action Selection
Autonomous intelligent systems are characterized by the fact that, in a given situation, they select one of a set of equivalent action alternatives as appropriate (Habib, 2003b). Hence, it is important to develop a navigation strategy with an efficient action selection mechanism. Currently, the authors have implemented a rule-based logical flow to support the selection of a suitable action according to the perceived relation between the robot and the detected obstacle, where L is the distance between the robot and the obstacle, H is the obstacle height, and Zo is the obstacle depth.
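The original rule-based listing is not reproduced here; the sketch below only illustrates how such a rule base could be organized around the measured quantities L, H, and Zo. The thresholds and the mapping to the actions A1 to A4 are assumptions that would have to be tuned to the robot.

```python
def select_action(L, H, Zo,
                  L_near=400.0, H_stride=60.0, H_climb=150.0, Zo_stride=100.0):
    """Pick an obstacle avoidance action from the distance L, height H and depth Zo [mm].

    The thresholds are illustrative assumptions, not the values used by the authors.
    """
    if L is None or L > L_near:
        return "A1"   # default action: no obstacle close enough, keep walking
    if H <= H_stride and Zo <= Zo_stride:
        return "A2"   # striding over a low, shallow obstacle
    if H <= H_climb:
        return "A3"   # climbing over a higher obstacle
    return "A4"       # detour around an obstacle that is too high to climb

# Example: an obstacle 50 mm high and 80 mm deep, 300 mm ahead -> striding.
print(select_action(L=300.0, H=50.0, Zo=80.0))
```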
Figure 10. Experimental result of the striding action: trajectories of the tips of legs 1–4 and of the center of gravity in the x–y plane [mm], from the start to the goal, with the obstacle position indicated.
Figure 11. Tip of the 1st leg in the experimental result of the striding action: z position [mm] versus y position [mm], from the start to the goal, with the obstacle position indicated.
6. Experimental Results
Experiments have been conducted to show that the designed set of action modules enables the robot to recognize and avoid obstacles in real-time under different situations. The gait selected for the robot during the experiments was an intermittent crawl gait. In addition, a unit cycle has been used to express the total time required to perform each action: a unit cycle represents the time required for moving each of the four legs of the robot once, according to the pattern of the selected gait. The total number of cycles, however, depends on the environment and the types of obstacles present. The following subsections highlight the experimental results and achievements.
In the striding experiment, the robot selected the following set of actions,
A = {A1 A1 A1 A1 A1 A2 A1 A1 A1 A1}
The results obtained in this experiment illustrate the ability of the robot to perform the striding action successfully. Figure 10 shows that the tips of the left-side legs of the robot, i.e., the 1st and 3rd legs, did not have any contact with the obstacle during the avoidance. In addition, Figure 11 shows the z positions of the tip of the 1st leg. The time performance for executing the set of actions above, as illustrated by Figure 11, is as follows:
Action A1 is performed in 1 cycle, and A1 appears 9 times in this behavior;
Action A2 is performed in 6 cycles.
Thus, the total number of cycles is 5 + 6 + 4 = 15 cycles.
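The cycle counts reported in this and the following experiments can be reproduced by summing the per-action cycle costs over the executed sequence, as in the short sketch below; the cost table uses the numbers quoted in the text.

```python
def total_cycles(sequence, cycles_per_action):
    """Total number of gait cycles needed to execute a sequence of actions."""
    return sum(cycles_per_action[a] for a in sequence)

# Striding experiment: five A1 actions, one A2 action, then four more A1 actions.
sequence = ["A1"] * 5 + ["A2"] + ["A1"] * 4
print(total_cycles(sequence, {"A1": 1, "A2": 6}))   # -> 5 + 6 + 4 = 15 cycles
```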
In the climbing-over experiment, the robot selected the following set of actions,
A = {A1 A1 A1 A1 A3 A1 A1}
The robot performed the climbing-over action successfully. The experimental results are illustrated in Figure 12, which also highlights that the tips of the left-side legs of the robot did not have any contact with the obstacle at any time during the swing phase. Figure 13 shows the z positions of the tip of the 3rd leg. The results show a contact point between the obstacle and the tip of the robot leg during the support phase while climbing over. The time performance for executing the set of actions above, as illustrated by Figure 13, is as follows:
Action A1 is performed in 1 cycle, and A1 appears 6 times in this behavior;
Action A3 is performed in 10 cycles.
Thus, the total number of cycles is 4 + 10 + 2 = 16 cycles.
Figure 12. Experimental result of the climbing-over action: trajectories of the tips of legs 1–4 and of the center of gravity in the x–y plane [mm], from the start to the goal, with the obstacle position indicated.
Figure 13. Tip of the 3rd leg during the climbing-over action: z position [mm] versus y position [mm], from the start to the goal, with the obstacle position indicated.
Figure 14. Experimental result of the detour action: trajectories of the tips of legs 1–4 and of the center of gravity in the x–y plane [mm], from the start to the goal, with the obstacle position indicated.
Figure 15. Experimental result of the combined detour and striding behavior: trajectories of the tips of legs 1–4 and of the center of gravity in the y direction [mm], from the start to the goal, with the obstacles indicated.
In the detour experiment, the robot selected the following set of actions,
A = {A1 A1 A1 A1 A1 A4 A1 A1 A1}
Figure 16. Tip of the 2nd leg during the detour and striding behavior: z position [mm] versus y position [mm], from the start to the goal, with the obstacle position indicated.
The experimental result of the detour action is shown in Fig. 14. The results show that none of the robot's leg tips had any contact with the obstacle during the avoidance. The time performance for executing the set of actions stated above is as follows:
Action A1 is performed in 1 cycle, and A1 appears 8 times in this behavior;
Action A4 is performed in 15 cycles.
Thus, the total number of cycles is 5 + 15 + 3 = 23 cycles.
In the experiment combining detour and striding actions, the robot selected the following set of actions,
A = {A1 A1 A4 A1 A1 A1 A1 A1 A2 A1 A1}
During this behavior, the robot approaches the first obstacle with action A1. Then, the robot initiates the avoidance of the first obstacle using action A4. After clearing the first obstacle, while approaching the second obstacle with a number of A1 actions, the robot
selects to avoid it by activating action A2. The time performance for executing the set of actions above, as illustrated by Figure 16, is as follows:
Action A1 is performed in 1 cycle, and A1 appears 9 times in this behavior;
Action A4 is performed in 16 cycles; and
Action A2 is performed in 7 cycles.
Thus, the total number of cycles is 2 + 16 + 5 + 7 + 2 = 32 cycles.
Figure 17. Experimental result of the combined detour and climbing-over behavior: trajectories of the tips of legs 1–4 and of the center of gravity in the y direction [mm], from the start to the goal, with the obstacles indicated.
In the experiment combining detour and climbing-over actions, the time performance for executing the selected set of actions is as follows:
Action A1 is performed in 1 cycle, and A1 appears 12 times in this behavior;
Action A4 is performed in 19 cycles; and
Action A3 is performed in 9 cycles.
Thus, the total number of cycles is 4 + 19 + 5 + 9 + 3 = 40 cycles.
Figure 18. Tip of the 1st leg in the experimental result of the detour and climbing-over actions: z position [mm] versus y position [mm], from the start to the goal, with the obstacle position indicated.
7. Conclusions
This chapter presented a robust approach to the design of a set of behavioral actions and the use of combinations of these actions to formulate different high-level behaviors for quadruped robots. This enables the robot to select a suitable behavior in real-time to avoid obstacles, based on sensory information from visual and ultrasonic sensors. The developed approach was successfully tested and shown to facilitate navigation in real environments.
8. References
Arikawa, K. & Hirose, S. (1996). Development of Quadruped Walking Robot TITAN-VIII,
Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS'96), Vol. 1, pp. 208-214, 1996.
Chen, X., Watanabe, K., Kiguchi, K., Izumi, K. (2002a). Path Tracking Based on Closed-Loop
Control for a Quadruped Robot in a Cluttered Environment. Transactions of ASME,
Vol. 24, pp. 272-280, June 2002.
Chen, X., Watanabe, K., Kiguchi, K., Izumi, K. (2002b). An ART-Based Fuzzy Controller for
the Adaptive Navigation of a Quadruped Robot. IEEE/ASME Transactions on
Mechatronics, Vol. 7, No. 3, pp. 318-328, Sept. 2002.
Cruse, H., Bartling, Ch., Cymbalyuk, G., Dean, J. and Dreifert, M. (1994). A neural net
controller for a six-legged walking system. Proc. of International Conference from
Perception to Action, pp. 55-65, 1994.
Ferrell, C. (1994). Robust and Adaptive Locomotion of an Autonomous Hexapod. Proc. of
International Conference from Perception to Action, pp. 66-77, 1994.
Go, Y., Yin, X., Bowling, A. (2006). Navigability of Multi-Legged Robots. IEEE/ASME
Transactions on Mechatronics, Vol. 11, No. 1, pp. 1-8, 2006.
Habib, M. K. (2003a). URUK: an Autonomous Legged Mobile Robot Design and
Implementation. Proc. of International Conference on Mechatronics (ICOM2003), June
2003, UK, pp. 497-504.
Habib, M. K. (2003b). Behavior-Based Autonomous Robotic Systems: The Reliability of
Robot’s Decision and The Challenge of Action Selection Mechanisms. Proc. of
International Symposium on Artificial Life and Robotics (AROB' 2003), Jan. 2003, Oita,
Japan, pp.4-9.
Hirose, S., Masui, T., Kikuchi, H., Fukuda, Y., Umetani, Y. (1986). TITAN III: A Quadruped
Walking Vehicle. Proc. of International Conference on Robotics Research, pp. 325-331,
1986.
Hirose, S., Arikawa, K. (2000). Coupled and Decoupled Actuation of Robotic Mechanisms,
Proc. of the IEEE International Conference on Robotics and Automation (ICRA'2000), San Francisco, pp. 33-39,
2000.
Hirose, S. (2001). Super Mechano-System Project in Tokyo Institute of Technology, Proc. of
2001 Int. Workshop on Bio-Robotics and Teleoperation, pp. 7-12, 2001
Lewis, M. A. (2002). Detecting Surface Features during Locomotion using Optic Flow. Proc.
of the IEEE International Conference on Robotics and Automation (ICRA'2002), Washington DC,
pp. 305 – 310, 2002.
Nishikawa, N., Murakami, T., Ohnishi, K. (1998). An Approach to Stable Motion Control of
Biped Robot with Unknown Load by Torque Estimator. Proc. of International
Workshop on Advanced Motion Control, pp. 82-87, 1998.
Ohya, A., Kosaka, A., Kak, A. (1997). Vision-Based Navigation of Mobile Robot with
Obstacle Avoidance by Single Camera Vision and Ultrasonic Sensing, Proc. of the
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'97), Vol. 2,
pp. 704-711, 1997
Okada, K., Kagami, S., Inaba, M., Inoue, H. (1999). Vision-based Action Control of
Quadruped Legged Robot JROB-1, Proc. of 9th International Conference on Advanced
Robotics (ICAR'99), pp. 451-456, 1999
Okada, K., Inaba, M., Inoue, H. (2003). Integration of Real-time Binocular Stereo Vision and
Whole Body Information for Dynamic Walking Navigation of Humanoid Robot,
Proc. of International Conference on Multisensor Fusion and Integration for Intelligent
Systems (MFI'03), pp. 131-136, 2003
Ooka, A., Ogi, K., Takemoto, Y., Okamoto, K. and Yoshida (1986). Intelligent robot system II.
Proc. of International Conference on Robotics Research, pp. 341-347, 1986.
Pugh, D. R., Ribble, E. A., Vohnout, V. J., Bihari, E. E., Walliser, T. M., Patterson, M. R.,
Waldron, K. J. (1990). Technical Description of the Adaptive Suspension Vehicle.
International Journal of Robotics Research, Vol. 9, No. 2, pp. 24-42, 1990.
Raibert, M. H. (1986). Legged robots that balance. The MIT Press, 1986.
Tsukakoshi, H., Hirose, S., Yoneda, K. (1996). Maneuvering Operations of the Quadruped
Walking Robot on the Slope, Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and
Systems, Vol. 2, pp. 863-869, 1996
Waldron, K. J., Vohnout, V. J., Pery, A., McGhee, R. B. (1984) . Configuration of the Adaptive
Suspension Vehicle. International Journal of Robotics Research, Vol. 3, No. 2, pp. 37-48,
1984.
Yamaguchi, T., Watanabe, K., Izumi, K., Kiguchi, K. (2002a). Acquisition of An Obstacle
Avoiding Path in Quadruped Robots, Proc. of Joint 1st International Conference on
Soft Computing and Intelligent Systems and 4th International Symposium on
Advanced Intelligent Systems (SCIS & ISIS 2002), 2002.
Yamaguchi, T., Watanabe, K., Kiguchi, K., Izumi, K. (2002b). Obstacle Avoidance Strategy
for A 4-legged Robot by Getting-over and Striding, Proc. of the 4th Asian Control
Conference, pp. 1438-1443, 2002.