Unit 5 - Rehab Robotics & ECS W
REHABILITATION ROBOTICS
5.1 INTRODUCTION
The use of robots in rehabilitation is quite different from industrial applications where robots
normally operate in a structured environment with predefined tasks and are usually separated
from human operators. Many tasks in rehabilitation robots cannot be preprogrammed, for
example, pick up a newspaper or open a door. Furthermore, industrial robots are operated
with specially trained personnel with a certain interest in the technology, whereas
rehabilitation robots are usually used by people who have physical limitations. Rehabilitation
robots have more in common with service robots which integrate humans and robots in the
same task, requiring certain safety aspects and special attention to man–machine interfaces.
Therefore, more attention must be paid to the user’s requirements, as the user is a part of the
process in the execution of various tasks.
History
Most of the work in specific areas of rehabilitation robotics started in the mid-1970s. Earlier
works are as follows: the CASE manipulator (1960s), the Rancho Los Amigos manipulator, a
workstation-based system (Germany, mid-1970s), a robot arm mounted on a wheelchair (New
York, 1970s), fixed-site robots (1980s) such as DeVAR (Desktop Vocational Assistive Robot),
powered feeding devices such as the Winsford feeder, mobile assistive robots such as the MANUS
wheelchair-mounted manipulator (since the 1990s), and MoVAR (Mobile Vocational Assistive
Robot).
The first referenced rehabilitation manipulator was the CASE manipulator, built in the 1960s.
This was a powered orthosis with four degrees of freedom, which could move the user's
paralyzed arm. Another early powered orthosis was the Rancho Los Amigos manipulator,
with seven degrees of freedom.
Work in the more specific area of rehabilitation robotics started in the mid-1970s. One of the
earliest projects was the workstation-based system designed in Germany. It was a
five-degree-of-freedom manipulator placed in a specially adapted desktop environment,
using rotating shelf units.
General characteristics
A robot is a specialized machine tool with a degree of flexibility that distinguishes it from
fixed-purpose automation.
It is essentially a mechanical arm, bolted to the floor, a machine, the ceiling or in some
cases the wall, fitted with a mechanical hand and taught to do repetitive tasks in a controlled,
ordered environment.
It has the ability to move the mechanical arm to perform tasks, and it interfaces with the work
environment once a mechanical hand has been attached to the robot's tool mounting plate.
The use of robots in rehabilitation began in the 1960s, when a powered orthosis with four
degrees of freedom (DOF) was used to move the user's paralyzed arm. There have been more
efforts on investigating specific areas of rehabilitation robotics in recent years, and the field is
technically well advanced. However, rehabilitation robotics has been recognized overall as a
technology playground for universities and academia, and it is penetrating the market very
slowly and is still considered to be a “future technology.” The major problem is the lack of
evidence for usability and benefits of rehabilitation robots. Although some evaluations and
studies have been undertaken, the real benefits and disadvantages of systems in service need
to be analysed to better understand who the users are and what they actually need.
A robot consists of four basic components:
- Manipulator
- End effector (which is part of the manipulator)
- Power supply
- Controller
The manipulator, which is the robot’s arm, consists of segments joined together with axes
capable of motion in various directions allowing the robot to perform the task.
The end effector is a gripper, tool, special device, or fixture attached to the robot's arm
that actually performs the task.
The power supply provides and regulates the energy that is converted to motion by the robot's
actuators; it may be electric, pneumatic or hydraulic.
1. Pneumatic power (low-pressure air) is used generally for low weight carrying robots.
2. Hydraulic power transmission (high-pressure oil) is usually used for medium to high
force or weight applications, or where smoother motion control can be achieved than
with pneumatics. Consideration should be given to potential hazards of fires from
leaks if petroleum-based oils are used.
3. Electrically powered robots are the most prevalent in industry. Either AC or DC
electrical power is used to supply energy to electromechanical motor-driven actuating
mechanisms and their respective control systems. Motion control is much better, and
in an emergency an electrically powered robot can be stopped or powered down more
safely and faster than those with either pneumatic or hydraulic power.
The controller initiates, terminates and coordinates the motion sequences of a robot. Also,
it accepts the necessary inputs to the robot and provides the outputs to interface with the
outside world. Either auxiliary computers or embedded microprocessors are used for
practically all control of industrial robots today. These perform all of the required
computational functions as well as interface with and control associated sensors, grippers,
tooling, and other associated peripheral equipment. The control system performs the
necessary sequencing and memory functions for on-line sensing, branching, and integration
of other equipment. Programming of the controllers can be done on-line or at remote off-line
control stations with electronic data transfer of programs by cassette, floppy disc, or
telephone modem.
ROBOTIC MANIPULATOR
A robotic manipulator is a mechanical unit that provides movement similar to that of the
human arm. Its primary function is to provide the specific motions that will enable the tooling
at the end of the arm to do the required work. The applications were originally for dealing
with radioactive or biohazardous material, and they were used in inaccessible places. In more
recent developments they have been used in applications such as robotically-assisted surgery
and in space.
Robotic manipulators can be divided into two sections, each with a different function:
Arm and Body - The arm and body of a robot are used to move and position parts or
tools within a work envelope. They are formed from three joints connected by large
links.
Wrist - The wrist is used to orient the parts or tools at the work location. It consists of
two or three compact joints.
The parts of the manipulator are shown in Fig. 2. Robot manipulators are created from a
sequence of link and joint combinations. The links are the rigid members connecting the
joints, or axes. The axes are the movable components of the robotic manipulator that cause
relative motion between adjoining links. The mechanical joints used to construct the manipulator
consist of five principal types. Two of the joints are linear, in which the relative motion
between adjacent links is non-rotational, and three are rotary types, in which the relative
motion involves rotation between links.
The individual joint motions associated with the arm, body and wrist are referred to as degrees
of freedom. Each axis is equal to one degree of freedom. Typically, robots are equipped with
4-6 degrees of freedom. The wrist can reach a point in space with a specific orientation by any
one of three motions: up-and-down motion (pitch), side-to-side motion (yaw) and rotating
motion (roll). The points at which the manipulator bends, slides or rotates are called joints or
position axes. Manipulation is done by mechanical devices, such as linkages, gears, actuators
and feedback devices. The x-axis travel moves the manipulator in and out. The y-axis
travel causes the manipulator to move side to side. The z-axis travel causes the manipulator
to move up and down. The mechanical design of the manipulator relates directly to its work
envelope and motion characteristics.
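The pitch, yaw and roll motions described above can be written as elementary rotation matrices. A minimal sketch in Python, assuming the common z-y-x composition order (this ordering is an assumption, not stated in the text):

```python
import numpy as np

def wrist_orientation(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Compose roll (about x), pitch (about y) and yaw (about z) into one
    3x3 rotation matrix, R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# A 90-degree yaw turns the tool's x-axis to point along the world y-axis.
R = wrist_orientation(0.0, 0.0, np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # ~[0, 1, 0]
```

Each wrist axis contributes one degree of freedom, so a full-orientation wrist needs all three rotations.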
The end effector is the device that is mechanically opened or closed. Depending on the type
of operation, conventional end effectors are equipped with various devices and tool
attachments like grippers, scoops, hooks, electromagnets and power tools. End effectors are
generally custom made to meet specific handling requirements. Mechanical grippers are most
commonly used and are equipped with two or more fingers.
Fig.2 Parts of a manipulator. The robot manipulator has a body, arm and wrist. Names match
those of the human arm.
Arms are typically defined by fourteen different parameters.
Number of Axes – Two axes are needed to reach any point in a plane. Three are required to
reach a point in space. Roll, pitch, and yaw control are required for full control of the end
manipulator.
Degrees of Freedom – The number of axes about which the robot can be directionally controlled.
A human arm has seven degrees of freedom; articulated arms typically have up to six.
Working Envelope – Region of space a robot can encompass.
Working Space – The region in space a robot can fully interact with.
Kinematics – Arrangement and types of joints (Cartesian, Cylindrical, Spherical, SCARA,
Articulated, Parallel)
Payload – Amount that can be lifted and carried
Speed – May be defined by individual or total angular or linear movement speed
Acceleration – Limits maximum speed over short distances. Acceleration is given in terms of
each degree of freedom or by axis.
Accuracy – Given as a best case with modifiers based upon movement speed and position
from optimal within the envelope.
Repeatability – More closely related to precision than accuracy. Robots with high
repeatability but low accuracy often need only to be recalibrated.
Motion Control – For certain applications, arms may only need to move to certain points in
the working space. They may also need to interact with all possible points.
Power Source – Electric motors or hydraulics are typically used, though new methods are
emerging and being tested.
Drive – Motors may be hooked directly to segments for direct drive. They may also be
attached via gears or in a harmonic drive system
Compliance – Measure of the distance or angle a robot joint will move under a force.
Gantry or Cartesian or linear – These robots have linear joints and are mounted
overhead, allowing movement across a horizontal plane. They are usually large
systems that perform pick-and-place applications. They are also called Cartesian and
rectilinear robots.
Cylindrical – The robot's axes form a cylindrical coordinate system. These robots are formed
by linear joints that connect to a rotary base joint. They are used for assembly operations.
Polar – Its axes form a polar coordinate system. The base joint of a polar robot allows
for twisting and the joints are a combination of rotary and linear types. The work
space created by this configuration is spherical.
Jointed-Arm/Articulated - This is the most popular industrial robotic configuration.
This has three rotational axes connecting the three rigid links and a base. The arm
connects with a twisting joint, and the links within it are connected with rotary joints.
It is also called an articulated robot. It closely resembles the human arm.
SCARA (Selective Compliant Articulated Robot Arm) - Used for pick and place
work, application of sealant, assembly operations and handling machine tools. It's a
robot which has two parallel rotary joints to provide compliance in a plane.
Parallel - One use is a mobile platform handling cockpit flight simulators. It's a robot
whose arms have concurrent prismatic or rotary joints.
Fig. SCARA robot
Method of Control
Robots are classified by control method into servo and non-servo robots. The earliest robots
were non-servo robots. Non-servo robots do not have feedback capability, and their axes
are controlled through a system of mechanical stops and limit switches. Non-servo robots
are often referred to as "end point," "pick and place" or "limited sequence" robots. These
robots have limited flexibility in terms of program capacity and positioning capability. They
are essentially open-loop devices whose movement is predetermined by mechanical stops,
and they are primarily useful for materials transfer. Characteristics of non-servo robots include:
relatively high speed (possible due to the small size of the manipulator), low cost, and
simplicity of operation, programming and maintenance.
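The open-loop, limited-sequence behaviour described above can be sketched as a fixed step list driven purely by limit-switch confirmations; the step names below are hypothetical, chosen only to illustrate a pick-and-place cycle:

```python
# Hypothetical "limited sequence" (non-servo) pick-and-place cycle: each
# axis simply drives until its mechanical stop / limit switch trips; there
# is no position feedback anywhere along the travel.
SEQUENCE = ["extend_arm", "close_gripper", "retract_arm", "rotate_base",
            "extend_arm", "open_gripper", "retract_arm", "rotate_base"]

def run_cycle(drive_until_stop):
    """drive_until_stop(step) moves one axis to its hard stop and returns
    True once the limit switch confirms the end of travel."""
    for step in SEQUENCE:
        if not drive_until_stop(step):
            raise RuntimeError(f"limit switch never tripped during {step}")

log = []
run_cycle(lambda step: log.append(step) or True)  # simulated switches
print(log == SEQUENCE)  # True: the fixed sequence ran open-loop
```

Because the stop positions are mechanical, reprogramming such a robot means physically repositioning the stops, which is exactly the inflexibility the text describes.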
Servo-controlled robots use feedback to provide precise starts and stops when performing tasks.
Servo robots are controlled through the use of sensors that continually monitor the robot's
axes and associated components for position and velocity. This feedback is compared to
pre-taught information which has been programmed and stored in the robot's memory.
Servo-controlled robots are further classified according to the method the controller uses to
guide the end effector; three types of path are generated: point-to-point, controlled path and
continuous path. Characteristics of servo robots include: smooth motions with control of
speed; maximum flexibility, provided by the ability to program the axes of the manipulator to
any position within the limits of travel; and higher cost.
Point-to-point - The simplest type of robot in this class is the point-to-point robot. With point-to-
point programming, the robot moves from one discrete point to another within its working
envelope. During point-to-point operation the robot moves to a position that is numerically
defined and stops there. The end effector performs the desired task while the robot is halted.
When the task is completed, the robot moves to the next point and the cycle is repeated. In
point-to-point systems the robot's path and speed during the transition from point to point are
not of particular importance, so these robots have axis position counters for control of the
final robot position. Such robots are taught a series of points, and these points are stored and
played back. Point-to-point robots are severely limited in their range of operations.
Common applications include component insertion, spot welding, hole drilling, machine
loading and unloading, and assembly operations.
Continuous-path - In a continuous-path robot, the tool performs its task while the robot is in
motion, as in arc welding, where the welding pistol is driven along the programmed path. All
axes of a continuous-path robot move simultaneously, each with a different speed.
The speeds are coordinated by the computer so that the required path is followed. The
robot's path is controlled by storing a large number of spatial points in close succession in the
robot's memory during the teach sequence. During teaching, while the robot is being moved,
the coordinates in space of each axis are continually monitored and placed into the control
system's computer memory. These are the most advanced robots and require the most
sophisticated computer controllers and software. Continuous-path robots are used for arc
welding, spray painting, cleaning of metal articles, complex assembly processes,
surveillance, etc.
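The coordination of axis speeds described above can be sketched numerically: each joint's speed is scaled so that every joint finishes its travel at the same moment. The joint values and speed limit below are illustrative only:

```python
def coordinated_speeds(start, goal, max_speed):
    """Scale each joint's speed so all joints finish simultaneously, with
    the longest-travelling joint running at max_speed (units/s)."""
    deltas = [g - s for s, g in zip(start, goal)]
    t = max(abs(d) for d in deltas) / max_speed   # common travel time
    return [d / t for d in deltas]

# Joint 0 must travel 90 deg, joint 1 only 30 deg; both finish in 3 s.
v = coordinated_speeds([0, 0], [90, 30], max_speed=30.0)
print(v)  # [30.0, 10.0]
```

Real continuous-path controllers interpolate many such segments between closely spaced taught points, which is why they need the most computing power of the three path types.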
All types of robots are used in diagnosis, therapy and patient care, from manually guided
diagnostics to complex surgical robot workstations. In surgery, robots are used for holding
instruments, tele-surgical functions, navigation, positioning, and specific operations such as
cutting bone.
Although the needs of many individuals with disabilities can be satisfied with traditional
mobility aids (e.g., canes, walkers, manual wheelchairs, power wheelchairs, scooters, etc.),
there exists a segment of the disabled community who find it difficult or impossible to use
traditional mobility aids independently. This population includes, but is not limited to,
individuals with low vision, visual field neglect, spasticity, tremors, or cognitive deficits.
Individuals in this population often lack independent mobility and are dependent on a
caregiver to push them in a manual wheelchair.
To accommodate this population, researchers have used technologies originally developed for
mobile robots to create intelligent mobility aids (IMAs). These devices typically consist of
either a traditional mobility aid to which a computer and a collection of sensors have been
added or a mobile robot base to which a seat and/or handlebars have been attached. IMAs
have been designed based on a variety of traditional mobility aids and provide navigation
assistance to the user in a number of different ways, e.g., assuring collision-free travel, aiding
the performance of specific tasks, and autonomously transporting the user between locations.
A useful way of classifying smart wheelchairs is based on the degree to which the
components of the smart wheelchair are integrated with the underlying mobility device. The
majority of smart wheelchairs that have been developed to date have been tightly integrated
with the underlying power wheelchair, requiring significant modifications to function
properly. A smaller number of smart wheelchairs have been designed as "add-on" units that
can be attached and removed from the underlying power wheelchair.
An information flow diagram for a smart wheelchair is shown in fig. Critical components of a
smart wheelchair include sensors for identifying features of the environment, input methods
for the user to provide commands to the system, methods for the system to provide feedback
to the user, and a control algorithm that coordinates input from the user, the sensors, and the
powered wheelchair to determine the system's behaviour.
Fig. Information flow in a smart wheelchair: the user issues commands through an input
method, the control algorithm combines these with sensor data about the environment, drives
the wheelchair, and returns feedback to the user.
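The control algorithm's job of combining user commands with sensor data can be sketched as a simple shared-control rule; the thresholds and command format below are illustrative, not taken from any particular smart wheelchair:

```python
def shared_control(user_cmd, min_range, stop_dist=0.5, slow_dist=1.5):
    """Blend the driver's command with range-sensor data: stop inside
    stop_dist (m), scale speed down linearly inside slow_dist, otherwise
    pass the command through. user_cmd is (speed, steering)."""
    speed, steer = user_cmd
    if min_range < stop_dist:
        return (0.0, steer)                  # veto: obstacle too close
    if min_range < slow_dist:
        scale = (min_range - stop_dist) / (slow_dist - stop_dist)
        return (speed * scale, steer)        # soft slow-down zone
    return (speed, steer)                    # open space: obey the user

print(shared_control((1.0, 0.2), min_range=0.3))  # (0.0, 0.2)
print(shared_control((1.0, 0.2), min_range=1.0))  # (0.5, 0.2)
```

This kind of rule keeps the user in the control loop, which the text identifies as a critical requirement, while still guaranteeing collision-free travel.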
5.3.2 Sensors
To avoid obstacles, IMAs need sensors to perceive their surroundings. By far, the most
frequently used sensor is the ultrasonic acoustic rangefinder (i.e., sonar). Sonar sensors are
very accurate when the sound wave emitted by the sensor strikes an object head on. As the
angle of incidence increases, however, the likelihood that the sound wave will not reflect
back towards the sensor increases. This effect is more pronounced if the object is smooth or
sound absorbent. Sonar sensors are also susceptible to “cross talk,” in which the signal
generated by one sensor produces an echo that is received by a different sensor.
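The ranging principle behind sonar is a time-of-flight calculation: the distance is half the round-trip echo time multiplied by the speed of sound. A minimal sketch:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def sonar_range(echo_time_s: float) -> float:
    """Distance to the obstacle: half the round-trip time-of-flight
    times the speed of sound."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

# An echo arriving after 5.83 ms corresponds to an obstacle ~1 m away.
print(round(sonar_range(0.00583), 2))  # ~1.0
```

Cross-talk corrupts this calculation because the receiving sensor times an echo that belongs to another sensor's pulse, so the computed distance is meaningless.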
Another frequently used sensing modality is the infrared rangefinder (i.e., IR). IR sensors
emit light, rather than sound, and can therefore be fooled by dark (light absorbent) material
rather than sound absorbent material. IR sensors also have difficulty with transparent or
refractive surfaces. Despite their limitations, however, sonar and IR sensors are often used
because they are small, inexpensive, and well understood.
Neither sonar nor IR sensors are particularly well suited to identifying drop-offs, such as
stairs, curbs, or potholes. It is not uncommon for floors to be dark and smooth, meaning that
both a sonar and an IR sensor would need to be facing almost straight down towards the
ground in order to receive an echo. In this case, the IMA would not have enough warning
time to stop.
More accurate obstacle and drop-off detection is possible with laser rangefinders, which
provide a 180° scan within a two-dimensional plane of the distance to obstacles in the
environment. Examples of IMAs that use a laser rangefinder include PAM-AID, Rolland, and
MAid. Unfortunately, laser rangefinders are expensive, large, and consume a lot of power,
which make it difficult to mount enough rangefinders on an IMA to provide complete
coverage.
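A planar laser scan of the kind described can be converted into Cartesian obstacle points for mapping or avoidance; a minimal sketch, assuming equally spaced beams across the field of view:

```python
import math

def scan_to_points(ranges, fov_deg=180.0):
    """Convert a planar laser scan (equally spaced beams spanning fov_deg,
    distances in metres) into (x, y) points in the sensor frame, with the
    x-axis along the centre beam."""
    n = len(ranges)
    step = math.radians(fov_deg) / (n - 1)
    start = -math.radians(fov_deg) / 2
    return [(r * math.cos(start + i * step), r * math.sin(start + i * step))
            for i, r in enumerate(ranges)]

# Three beams at -90, 0 and +90 degrees, each returning 1 m:
pts = scan_to_points([1.0, 1.0, 1.0])
print([(round(x, 6), round(y, 6)) for x, y in pts])
```

A real scanner returns hundreds of beams per sweep; the control algorithm then looks for points inside the wheelchair's projected path.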
Another option is a “laser striper,” which consists of a laser emitter coupled with a CCD
camera. The image of the laser stripe returned by the camera can be used to calculate distance
to obstacles and drop-offs based on discontinuities in the stripe. The laser striper hardware is
less expensive than a laser rangefinder, but can return false readings when the stripe falls on
glass or a dark surface. To date, this system has not been used within an IMA.
Perhaps the most promising sensing modality is machine vision. Cameras are much smaller
than laser rangefinders and, thus, much easier to mount in multiple locations on an IMA.
Cameras can also provide much greater sensor coverage. The cost of machine vision
hardware has fallen significantly—what used to require special cameras and digitizing boards
(frame grabbers) can now be accomplished with a $20 USB Web cam — and machine vision
software continues to improve, making successful implementation of an IMA based on
computer vision increasingly likely. Some IMAs already use computer vision for landmark
detection (e.g., Rolland, MAid, and CPWNS) and as a means of head- and eye-tracking for
wheelchair control.
Sensor               | Description                        | Advantages                  | Limitations
Stereovision camera  | Captures color or black-and-white  | Low power; low cost; can be | Cannot detect textureless objects;
                     | images and depth images of the     | used for high-level scene   | cannot perform in poorly lit conditions;
                     | environment                        | understanding               | challenges posed by reflective and
                     |                                    |                             | transparent surfaces
Bump (tactile sensor)| Can detect if contact is made by   | Low power; low cost         | Requires contact (safety issues)
                     | an object                          |                             |
Feedback Modalities: Smart wheelchairs can provide feedback to the driver through various
modalities and interfaces. For example, speakers can be used to provide audio feedback,
while small LEDs can be used to offer visual feedback (such as flashing red light if the driver
is close to an object). Haptic feedback can be provided by using small vibrations to warn the
driver about obstacles. Touch-screen displays can be used both to gather information from the
driver (e.g., desired destination) and to display information (e.g., scheduled activities, way-
finding assistance, virtual environments).
Despite a long history of research in this area, there are very few smart wheelchairs currently
on the market. Two North American companies, AppliedAI and ActivMedia, sell smart
wheelchair prototypes for use by researchers, but neither system is intended for use outside of
a research lab. The CALL Centre smart wheelchair is sold in the UK and Europe by Smile
Rehab, Ltd. (Berkshire, UK), as the "Smart Wheelchair". These chairs are adapted,
computer-controlled power wheelchairs which can be driven by a number of methods such as switches,
joysticks, laptop computers, and voice-output. The mechanical, electronic and software
design are modular to simplify the addition of new functions, reduce the cost of
individualized systems and create a modeless system. Since there are no modes and
behaviours are combined transparently to the user, an explicit subsystem called the Observer
was set up to report to the user what the system is doing. The Observer responds and reports
its perceptions to the user via a speech synthesizer or input device.
The software runs on multiple 80C552 processors communicating via an I2C serial link
monitoring the sensors and user commands. Objects or groups of objects form modules which
encapsulate specific functional tasks. It is multitasking with each object defined as a separate
task. The architecture of behaviours each performing a specific functional task is similar to
Brooks’ Subsumption Architecture.
The Reasoning and Learning Laboratory, Quebec, is working on the SmartWheeler project. The goal
of the SmartWheeler project is to increase the autonomy and safety of individuals with severe
mobility impairments by developing a robotic wheelchair that is adapted to their needs. The
project tackles a range of challenging issues, focusing in particular on tasks pertaining to
human-robot interaction, and on robust control of the intelligent wheelchair. The platform
also serves as a test-bed for validating novel concepts and algorithms for automated decision
making for socially assistive robots. The touch screen is used to give
commands to the wheelchair either via a map of the environment, or through a button based
interface. The laser range finders measure the distance to obstacles around the wheelchair, by
emitting invisible light in a horizontal plane and measuring the time it takes to reflect from
objects. The on-board joystick is a traditional method of control for a person sitting in the
wheelchair. The wireless joystick allows for remotely controlling the wheelchair from a
distance.
Fig. SmartWheeler
5.3.3 Smart Wheeled Walkers
There are several IMAs based on wheeled walkers (i.e., rollators) that are currently being
developed. The goal of these devices is to provide the basic support of a traditional rollator
coupled with the obstacle-avoiding capability of a mobile robot. Ideally, these devices
function as a normal rollator most of the time, but provide navigational and avoidance
assistance whenever necessary. An IMA that is currently making the transition from research
project to commercial product is the PAM-AID. PAM-AID stands for Personal Adaptive
Mobility Aid for the frail and elderly visually impaired. The PAM-AID, which is marketed as
the Guido (Figure ), consists of a mobile robot base to which sonar sensors, a laser
rangefinder, and a pair of handles (oriented like bicycle handles) have been added. The
PAM-AID was developed to assist elderly individuals who have both mobility and visual
impairments, and has two different control modes. In the manual mode, the user has complete
control over the walker. Voice messages describing landmarks and obstacles are given to the
user. In the automatic mode, the device uses the sensor information along with the user’s
input to negotiate a safe path around obstacles. The central processing unit controls motors
that can direct the front wheels of the walker away from obstacles.
5.3.4 Summary of Robotic Mobility Aids
There are several barriers that must be overcome before IMAs can become widely used. A
significant technical issue is the cost–accuracy trade-off that must be made with existing
sensors. Until there is an inexpensive sensor that can detect obstacles and drop-offs over a
wide range of operating conditions and surface materials, liability concerns will limit IMAs
to indoor environments. To date, only a few IMAs have made efforts at operating outdoors,
and most IMAs focus their efforts entirely on indoor environments.
Another technical issue is the lack of a standard communication protocol for wheelchair input
devices (e.g., joysticks and pneumatic switches) and wheelchair motor controllers. There
have been several efforts to develop a standard protocol, for example, multiple masters
multiple slave (M3S), but none has been adopted by the industry. A standard protocol would
greatly simplify the task of interfacing smart wheelchair technology with the underlying
wheelchair.
5.4 ROBOTIC MANIPULATION AID
Robotic manipulation aids provide people with disabilities the tools to perform activities of
daily living (ADL) and vocational support tasks that would otherwise require assistance from
others. Such a manipulation aid is usually under the control of its operator via a joystick,
keypad, voice, or other input device.
- should be adaptable, flexible, cost-effective, interactive and able to fit the cognitive,
perceptual and motor skills of the human operator. It should operate in largely
unstructured environments, requiring effective use of tactile sensation.
- should be usable; usability includes learnability, efficiency in use, memorability and
lack of errors in operation.
- should have interfaces specially designed for disabled people, to enable them to
control the system by the most natural and least tiring means.
- should have manipulative capability sufficiently good to make it an alternative to
some classes of human assistance.
- should be reliable enough that the user and attendants will be encouraged to use it.
A robotic aid should have certain basic features. It must have one or more electromechanical
manipulators (arms) that can be moved in the environment and are capable of bringing the
end effector (hand) to any position and orientation within the prescribed working volume.
The end effector must have some tactile sensors that provide the user with the functional
information about object location and grasp quality. The arm and hand are controlled by one
or more computers to perform complex tasks upon receiving specific commands. The user
needs one or more input interfaces for complete system control. To have comprehensive
awareness of the robot’s action and knowledge, the user must have one or more feedback
interfaces (e.g. displays, voice output). The user must be in the control loop of the system.
The first class of interfaces are modified "usual" devices, such as keyboards, trackballs and
joysticks. These devices are adapted to be more ergonomic, to fit the person's disability.
The second class of interfaces uses sensed signals from the user to predict and
synthesise an adequate control command. These signals can be the user's eye movements, head
gestures and body movements, and also EMG and EEG signals. In this kind of interface, a sensor
is the input device, and the signal picked up by the sensor is processed in an adequate manner to
extract the control command.
Robotic manipulation aids can be applied primarily in
Three conditions must be met by a robotic aid before it can be an economically feasible
partial substitute for human care. Its manipulative capability must be sufficiently good to make it an
alternative to some classes of human assistance. It should be reliable enough that the user and
attendants will be encouraged to use it. Savings derived from using the robotic aid must pay
for its initial and maintenance costs.
ARM (MANUS) is a wheelchair-mounted manipulator: a robot arm with seven degrees of
freedom, mounted on the electric wheelchair of severely disabled people who have
lost the use of their arms. It can be controlled by means of a
keyboard or a joystick, either via a direct interface or via the M3S bus. Two direct control
modes are implemented: Cartesian control and joint control. Only Cartesian control is used
by the user and joint control serves only for maintenance purposes. Semi-autonomous
motions are implemented for folding and unfolding the manipulator. The user must supply an
active input signal (like pressing a button) in order to activate the pre-programmed motions.
As soon as the user releases this button, the manipulator stops moving. This functionality is
implemented for safety reasons. The dimensions of the manipulator are reduced by placing
the motors, gearboxes and angle encoders in the shaft. A complicated mechanical
construction of toothed wheels, belts and hollow axes transfers the motions towards the end of
the manipulator, resulting in a slender construction. However, due to this construction
mechanical friction is very high, putting high demand on the joint controllers. Optical angle
encoders are used for controlling the position of the joints. Two control loops are used: a PI
speed loop (integrating action necessary for friction compensation) and a proportional
position loop. Besides that, an extra feed forward controller is implemented for compensating
the friction component.
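The cascade described for the MANUS joints (an outer proportional position loop, an inner PI speed loop, and a feedforward term for friction) can be sketched as follows; the gains and friction value are illustrative, not the actual ARM parameters:

```python
class JointController:
    """Cascade controller sketch for one MANUS-style joint: the outer
    proportional position loop sets a speed demand, the inner PI speed
    loop tracks it (the integral action overcomes friction), and a
    feedforward term pre-compensates Coulomb friction.
    All gains and the friction level are illustrative assumptions."""

    def __init__(self, kp_pos=5.0, kp_vel=2.0, ki_vel=8.0,
                 friction=0.3, dt=0.001):
        self.kp_pos, self.kp_vel, self.ki_vel = kp_pos, kp_vel, ki_vel
        self.friction, self.dt = friction, dt
        self.integral = 0.0

    def update(self, pos_ref, pos, vel):
        vel_ref = self.kp_pos * (pos_ref - pos)      # outer P position loop
        err = vel_ref - vel
        self.integral += err * self.dt               # PI speed loop
        u = self.kp_vel * err + self.ki_vel * self.integral
        # friction feedforward: push in the direction of intended motion
        u += self.friction * (1 if vel_ref > 0 else -1 if vel_ref < 0 else 0)
        return u                                     # actuator command

ctrl = JointController()
print(ctrl.update(pos_ref=1.0, pos=0.0, vel=0.0) > 0)  # True: drives toward target
```

The split into two loops matters here because the high mechanical friction of the belt-and-shaft construction would otherwise cause large steady-state position errors.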
ARM is frequently used at home for supporting activities of daily living. The following
activities can be performed:
- fetching objects
- picking up and placing books, manuals and papers at the table or in shelves
- eating and drinking
- preparing food using a microwave oven
- pouring a glass of water
Fig: ARM – making a phone call, having a drink
My Spoon
My Spoon, by SECOM Co., Ltd., Japan, is a five-DOF manipulator arm with a 1-DOF end
effector (spoon and fork) to assist eating. It can be operated in manual or semi-automatic
mode via a joystick. Maximum flexibility and control is obtained by fully controlling the
spoon with the joystick: by moving the joystick in all four directions (up, down, left, right),
any food item within the included tray can be eaten in any desired order.
Fig. My Spoon
ISAC (Intelligent Soft Arm Control)
ISAC is a service robot system for the physically disabled, used for feeding users.
It has two SoftArm manipulators, a stereo color camera head, and an anthropomorphic
four-fingered hand. ISAC's hands are equipped with multi-fingered grippers that allow the
robot to pick up a variety of objects. To pick up a spoon, for instance, ISAC employs
sensitive touch sensors that help it place the spoon between a thumb and three fingers. ISAC
is designed as a multi-agent system where a separate agent is devoted to each functional area.
For instance, one agent deals with arm movement, while another is devoted to interacting
with humans. Using DataBase Associative Memory (DBAM), ISAC has the ability to store
and structure the knowledge it acquires. To mimic long-term memory, DBAM uses a
spreading activation network to form associations between database records. To efficiently
structure its memories, ISAC uses a Sensory EgoSphere (SES) that processes incoming
perceptual data according to spatial and temporal significance.
HARIS
HARIS (fig. ) is a robotic arm and human interface used by disabled people to move and fetch
objects. It consists of three arm segments and a hand. The arm has three joints and 8 degrees
of freedom. The hand has five fingers, 178 tactile sensors and 17 degrees of freedom. Each
sensor is connected to an 8-bit microcomputer chip installed within the hand unit, which
recognizes an object by detecting a pattern of contacts with sensory switches. Each finger is
able to detect slippage. The robotic system also includes a scene-understanding system, 3D-
vision system, a real-time motion-scheduling system, an arm-control system and a knowledge
architecture that allows it to capture and use information about its environment. To
accomplish this, the designers tried to model human ways of storing information,
communicating and scheduling actions. The designer has integrated this arm with
stereovision cameras that allow the robot to perceive several real-world objects such as cups
and trays. The robot can discern the color of objects, their position and whether cups are
empty or full. With the help of a natural language processing program, the robot can carry out
commands such as "Get the blue cup!"
Fig. Haris
Traditional human-assisted therapy for mobility impairments has been augmented with
therapeutic robots designed to administer different modes of therapy to the impaired limb.
These robots are haptic interfaces, meaning that they allow a patient to interact with a
computer by displacing or exerting force on the robot and by feeling forces exerted on them
by the robot (force feedback). Patients also usually receive visual information about their
performance from a computer monitor or another display. Robotic therapy for the upper limb
can focus on movements of the shoulder, elbow, wrist, and/or hand, while robotic therapy for
the lower limb concentrates on restoration of normal gait.
MIT-MANUS (Figure) is the most widely tested robot under therapeutic conditions. It is a 2-
DOF robotic arm that consists of a five-bar linkage SCARA (selective compliance assembly
robot arm). A stroke patient moves a cursor on a computer screen to a target by moving the
end point of the robotic arm in a horizontal plane. The robot is designed so that the patient
experiences minimum impedance when moving the end point of the arm. The force provided
by two brushless motors can be used to help the patient to complete the target reach if
necessary. This mode of operation is known as an active assisted mode. An extension to the
MIT-MANUS allows about 19 in. of vertical movement in addition to movement in the
horizontal plane. During trials of MIT-MANUS, 56 stroke patients received about 4 hours per
week of reaching practice with the robot in addition to a traditional human-assisted
rehabilitation program. After four weeks, robot-treated patients showed greater gains in
muscle control and muscle strength in the shoulder and elbow compared to control patients
who received standard rehabilitation with 1 to 2 hours per week of exposure to the robot.
Fig. MIT-MANUS
The “Mirror-Image Motion Enabler” (MIME) is another haptic interface designed for
rehabilitation of the elbow and shoulder. The MIME system uses a 6-DOF industrial robot,
the PUMA-560 robot arm. The forearm of the impaired limb rests in a splint that prevents
movement of the wrist and hand; the splint is coupled to the robot arm via a six-axis force
sensor. Using the MIME system, a patient can make reaching movements to targets in one of
four modes. In the passive mode, the patient relaxes while the robot moves the arm, and the
active assisted mode is similar to that described earlier for the MIT-MANUS. In the active
constrained mode, the patient moves in the target direction against a viscous (velocity-
dependent) resistive force. The system uses springlike forces normal to the target direction to
encourage the patient to move only in the desired direction. The fourth mode, the bilateral
mode, is designed for hemiparetic patients or stroke patients who have one impaired limb and
one normal limb. In this mode, a position sensor is attached to the unimpaired limb, and the
system measures the movements that the patient makes with the unimpaired limb. The robot
is then used to guide the movements of the impaired arm so that its movements mirror those
of the normal arm.
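The mirroring step of MIME's bilateral mode can be illustrated with a minimal sketch. The coordinate convention (x lateral, with the body midline at x = 0) and the numbers are illustrative assumptions, not MIME's actual kinematics:

```python
# Minimal sketch of bilateral-mode mirroring: the robot drives the
# impaired arm to the mirror image of the measured unimpaired-arm
# position, reflected across the midsagittal plane. The convention
# that x is the lateral axis with the midline at x = 0 is assumed.

def mirror_target(unimpaired_pos):
    """Map a measured unimpaired-limb position (x, y, z) to the
    mirror-image target for the impaired limb."""
    x, y, z = unimpaired_pos
    return (-x, y, z)   # reflect only the lateral component

# A hand 0.3 m to the right of the midline maps to a target
# 0.3 m to the left, at the same height and reach.
target = mirror_target((0.3, 0.45, 0.2))
```

A real controller would apply this mapping continuously, streaming mirrored setpoints to the robot as the unimpaired limb moves.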
Fig. MIME
ARMin
ARMin (Fig. 3) is a neurorehabilitation robot system used for arm and hand rehabilitation
therapy, especially for stroke, SCI and other neurological pathologies. ARMin is an arm robot
with 6 active degrees of freedom: 3 in the shoulder joint, 1 in the elbow and 2 in the wrist
joint. It allows wrist flexion/extension, pronation/supination, elbow flexion and extension,
and spatial shoulder movements. It has 6 motors and is equipped with position and force
sensors. The patient's hand is placed in an orthotic shell. The system is designed primarily
for rehabilitation of incomplete tetraplegic and stroke patients.
Fig. 3 ARMin
HEXORR: Hand EXOskeleton Rehabilitation Robot
Hemiparetic individuals have great difficulty controlling wrist and hand movements.
These impairments can hinder one's ability to perform many typical activities of daily life.
HEXORR (Fig. 4) is a rehabilitation robot used for hand therapy. This robot helps patients
increase range of motion, grip strength and overall fine motor control of the hand. Many
therapy modes will be investigated to determine an optimal rehabilitation strategy. Potential
therapy methods include EMG control and force control. To enhance patient motivation and
participation, virtual reality therapy games will be incorporated into therapy sessions. Not
only can this robot provide therapy, it also serves as an assessment tool. Force and motion
sensors track the progress of each patient's recovery throughout the therapy session. The
robot thus not only increases the benefits of hand therapy, but also collects and analyzes
quantitative data to gain further insight into this debilitating impairment.
Fig. 4 HEXORR
LOKOMAT – Gait restoration rehabilitation robot
The Lokomat system (Hocoma AG, Volketswil, Switzerland) is the most developed
commercially available robot-assisted therapy system for lower limbs (fig. 5). It is a bilateral
robotic orthosis that incorporates computers, a treadmill, and a body-weight support system
to control the leg movements in the sagittal plane. The Lokomat’s hip and knee joints are
actuated by linear back-drivable actuators integrated into an exoskeletal structure. A passive
foot lifter assists ankle dorsiflexion during the swing phase. The legs of the patient, which are
fixed to the exoskeleton by straps, are moved according to a position control strategy with
predefined hip and knee joint trajectories. The built-in position and force sensors can help
assess the physiological and neurological status of patients, providing objective assessment
in a repeatable manner and with minimal effort from therapists. One of the
limitations of the Lokomat is that it focuses only on the leg movements and restricts pelvic
rotation, pelvic obliquity, and horizontal translation of the pelvis, which play an important
role in normal gait.
Fig. Lokomat
Comparison Between Conventional and Robotic Therapy
Robotic therapy may improve the quality of upper limb rehabilitation both by addressing
current problems in therapy, such as the limited amount of time patients can participate in
rehabilitation, and by making new types of therapy possible. For stroke patients, previous
research has shown that intensive therapy for an impaired limb can improve the amount and
quality of use of the impaired limb in daily life, and patients can make improvements even
years after the stroke has occurred. Augmenting the limited number of human therapists with
robotic devices can allow more patients to receive sufficient therapy to reach optimal
outcomes. The use of robots in therapy could reduce the potential for therapist injury that
exists when therapists manually assist patients in weight-bearing activities such as walking.
Robotic devices could also enable rehabilitation to occur in the home with remote supervision
by a human therapist.
In addition, robots can facilitate new types of therapy not feasible for a human therapist to
provide. Robots offer a unique opportunity to minutely measure a patient’s movement,
allowing therapists to better evaluate an impairment and track changes in performance during
recovery. Robots can also provide precise force perturbation profiles to help patients relearn
specific target movements.
Degree of impairment: The direct link between a disabled person and his environment
offered by the environmental control system must be expanded to accommodate those
individuals with little or no controllable leg or hand function. Quadriplegics, for example,
must be interfaced with the system to enable them to benefit from it.
Modularity: Different kinds of input to control and operate the system and outputs to the
environment need to be flexible enough to add or subtract as usage dictates.
Cost: Medical care costs are usually very high for severely disabled people in relation to their
income. Moreover, the rehabilitation product marketplace is one of low volume and high unit
cost. Most systems have to be customized for the capabilities and limitations of a specific
user, possibly doubling the price.
The first environmental control system, known as Possum (Patient Operated Selector
Mechanism), was produced in the 1950s for survivors of the polio epidemic. The early
systems, which were developed at Stoke Mandeville Hospital, allowed access to typewriters
and basic equipment. Use then spread to patients with other diseases, such as cerebral palsy.
The next 30 years brought some improvements in the design and range of devices that could
be operated with environmental control systems, and these improvements paralleled the
developments in household appliances. These control systems, however, were cumbersome.
Advances in technology, particularly within microelectronics, have since led to increasingly
sophisticated control systems, which now have the potential to operate communication aids
and wheelchairs as well as household equipment. Many environmental control systems
incorporate a remote control unit similar to that used with television sets, which is
unobtrusive and easy to mount on a wheelchair. Larger display units may still be required for
people with visual impairment and those with learning or perceptual difficulties who need to
use icons rather than words to operate the system. Additionally, some units provide auditory
cues to supplement images. ECUs can be operated in various ways, from simple hand or foot
switches to more complex chin controls, suck-puff controls, and even controls governed by
eye movements.
The most common equipment that can be controlled by environmental control systems
includes doors (lock and release), telephones, alarms, room lights, televisions, radios,
curtains, electrically controlled beds (for different positions), page turners, mains 13-amp
power points, computers, and door intercoms. Environmental controls can have a major
impact on the lives of severely
disabled people, not only by enhancing their independence but also by reducing the stress and
workload of their caretakers.
Some people—such as those with motor neuron disease—will usually be given priority at an
early stage because of their poor prognosis. People with Parkinson's disease may not benefit
from technical aids in the early stages but may find them useful later. Children can also be
provided with control systems. For children with advancing disease, such as Duchenne's
muscular dystrophy, equipment is installed to mitigate some of the functional effects of
disease progression. People with stable conditions—such as cerebral palsy—can use
environmental control systems to access their home environment more effectively. This
encourages an independent lifestyle and promotes a sense of achievement and self-reliance.
Control systems have even been installed in some university halls of residence to enable
disabled students to take advantage of higher education.
ECUs can control many types of devices using several different signals: IR (infrared), RF
(radio frequency), X-10 (over electrical wiring), and ultrasound.
1. X10 Technology
X-10 is a networking system that allows the environmental control unit to talk to other
devices such as lights and appliances. Since it uses the existing house electrical wiring,
it is very easy to set up and install. No additional networking cables need to be run, and the
walls of the house don't have to be torn up. Items that are going to be controlled on the X-10
network simply need to be plugged into existing electrical outlets. In some cases, special
X-10 switches replace the standard electrical switches, but the job is very minor. Each device
on the X-10 network is given a unique identifying address. That ensures the environmental
control unit sends the signal to only one specific device. Lights and small appliances are
typically controlled using X-10. Sometimes hospital beds, thermostats, and door openers are
also controlled via X-10. The possibilities are quite broad.
X10 is a communications protocol, similar to network protocols such as TCP/IP. X10 works
across home power lines and is extremely low-bandwidth. It is not necessary to run new
wiring within your home to control a device; the technology uses the home's existing AC wiring.
X10 devices send about one command per second, and the commands are as simple as
"Device A1: turn on." These commands require less than 1/1000th of the bandwidth of a dial-
up connection. Like a broadcast network, every command is sent through every wire in your
house; it's up to each individual device to decide whether it needs to respond to a particular
command. With X10, devices that can be plugged into the wall can communicate with each
other and with the computer. The simplest way to use X10 is to turn a lamp on and off from
an X10 remote. To do this, configure an X10 transmitter (such as an X10 remote or wall
switch) and an X10 receiver (such as a lamp module) to the same X10 address. Plug a lamp
into the X10 receiver and then turn the lamp on. The X10 remote can now be used to turn the
lamp on and off.
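The broadcast-and-address behaviour described above can be modelled in a few lines. This is a toy simulation of X10 addressing semantics, not a real power-line transmitter; the class and function names are hypothetical:

```python
# Toy model of X10 addressing: each device is configured with a
# house code (A-P) and unit code (1-16). Every command is broadcast
# to every device, and only the device whose address matches reacts.

class X10Device:
    def __init__(self, house, unit):
        self.house, self.unit = house, unit
        self.on = False

    def receive(self, house, unit, command):
        # Broadcast semantics: each device decides for itself
        # whether the command is addressed to it.
        if (house, unit) == (self.house, self.unit):
            self.on = (command == "ON")

def broadcast(devices, house, unit, command):
    for d in devices:              # the command reaches every outlet
        d.receive(house, unit, command)

lamp = X10Device("A", 1)           # lamp module set to address A1
fan = X10Device("A", 2)            # fan module set to address A2
broadcast([lamp, fan], "A", 1, "ON")   # "Device A1: turn on"
```

After the broadcast only the lamp turns on; the fan sees the same command on the shared wiring but ignores it because the address does not match.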
2. Infrared (IR) Technology
Components of ECU
1. Input devices (the user interface that accepts input from the user),
2. Control unit (sends signals to control the electronic devices),
3. Output devices (the devices to be controlled)
Fig: Block diagram of an ECU: a control unit routing signals to output devices (telephone, television, lights, nurse call).
The input device is the part of the ECU that the user interacts with to effect a change. The
signal from the input device travels to the processor which determines what action the user
wanted to perform on the output devices and then sends the corresponding signal to that
device. Other than regular TV remote control style input devices, there are alternative input
devices called switches. These are used by persons with disabilities who are unable to use
typical multi-button controllers. There are two types of switches: single switches and dual
switches. A single switch has only a single action or button. A dual switch has two input
actions, or two buttons. For instance, a sip-and-puff switch gives the user two types of inputs
(dual-switch), sipping and puffing on a straw, which could correlate to moving a menu up or
down. A single switch in the same menu example, by contrast, would only be able to advance the
menu in one direction. A switch or controller can be used to either directly interact with an
electronic device or it can be used to communicate with the processor, which will then
communicate with the specified electronic device. Similarly, a switch can be installed on a
device that will allow a user to turn it on or off with the installed single or dual switch.
1. Switch control - the user activates an accessible switch to control the environmental
control unit. This is done through menu scanning: the menu items are presented to the
physically disabled user one at a time, and the user times his or her selection to when the
desired feature is presented. There can be submenus as well to allow more features specific
to a particular menu item. The menus are presented either visually on a screen or through a
speaker so the user can
hear the choice. The first time the user activates the accessible switch, the unit will scan
through the available choices in menu format such as "lights, television, telephone, ..." When
the user sees on the display screen or hears the menu choice, the switch is activated again to
cause the environmental control unit to perform that action or in turn present another menu of
choices to the user.
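The scanning procedure above can be sketched as a short simulation. The menu items and the single-switch selection model are illustrative:

```python
# Sketch of single-switch menu scanning: the ECU steps through the
# menu items one at a time (wrapping around), and the user's switch
# press selects whichever item is highlighted at that moment.
# The menu contents are illustrative.

MENU = ["lights", "television", "telephone", "bed"]

def scan_select(press_at_step):
    """Simulate scanning up to the step at which the user presses
    the switch; return the item highlighted at that moment."""
    for step in range(press_at_step + 1):
        highlighted = MENU[step % len(MENU)]
        # In a real ECU each item is shown on screen or spoken aloud
        # here, with a dwell time between steps so the user can react.
    return highlighted

# Pressing the switch on the third scan step selects "telephone".
choice = scan_select(2)
```

Selecting an item like "television" would then open a submenu (channel, volume, on/off) scanned in exactly the same way, which is why single-switch access is reliable but slow.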
The Sip/Puff switch is ideal for people who have limited or no motor capability to operate
switch activated devices, including computers and environmental control systems. Sip-and-
Puff Switches are activated by the user’s breath. For example, a “puff” might generate the
equivalent of a keystroke or the click of a mouse. A sustained “sip” might be the equivalent
of holding a key down. With an on-screen keyboard, the user can puff to select letters and
navigate the screen by sustained sipping.
Switch control is far more reliable than voice control. Its main drawback is that the menu
items are presented one at a time, so the process of picking an action is slow.
2. Voice control - the user speaks the commands to the environmental control unit "Turn on
bedroom light." Voice is a much faster method of controlling an ECU.
Each manufacturer typically has models available for switch, voice, or switch and voice
control. The advantage to a unit that does both is that a user can use the faster voice
recognition method on most days but when the person is sick or their voice changes
throughout the day, the switch can be used.
The control unit gives the user options of what actions can be performed by using a switch.
When using switches, the ECU will display a list of available options to the user usually
through a menu-based display. One type of ECU is called a scanning ECU which
automatically cycles through the available options in the menu. Scanning ECUs are typically
used when the input device is a single switch because the user can only push a single button
therefore the scanning ECU cycles through the options, giving the user enough time between
options to select the command. There are also ECUs that work with dual switch inputs that do
not scan but cycle based on one of the two input methods of the switch. For example,
consider a system that uses a single-switch input device working with the Mini-Relax ECU
(Figure 2). The Mini-Relax ECU, used for controlling a television, has six options: On/Off,
Channel Up, Channel Down, Volume Up, Volume Down, and X-10 (the X-10 option is a
feature that was added to this ECU later). The X-10 feature allows the device to act as a
universal controller for X-10, similar to how it acts as a universal controller for TVs. The
Mini-Relax cycles through the options by illuminating an LED to the left of each option on
the ECU. When the desired option's LED is lit, the user activates the switch and the Mini-
Relax sends a signal to the corresponding device.
The output devices are the elements of the environment that the user wishes to control, such
as televisions, radios, thermostats, etc. Some of these sorts of devices already have remote
control functionality built into them, and they are able to decode commands the remote or
other controls are sending them. If the device is controlled by IR such as a TV, the ECU can
send an IR signal that will tell it which action to take. For environmental elements, such as
fans, that were not designed to be controlled remotely, X-10 technology is used.
Figure 2: Mini-Relax with X-10 Environmental Control Unit, used to control televisions and
X-10 modules.
1. Stand-alone ECU
2. Computer-based ECU
A stand-alone ECU is one that has all required programming and processing built-in so that
it doesn’t need a computer. The Mini-Relax (Fig. 2) is an example of a stand-alone ECU
because the input device attaches directly to the Mini-Relax and based on user input the Mini-
Relax will send out an appropriate signal. The computer-based ECUs have software that
typically allow the users more freedom. The freedom that computer-based ECUs grant is the
ability to create complex functions that can be used to control multiple devices at the same
time from the same window or to make macros that perform pre-set functions.
Computer-based ECUs are typically much less expensive on their own but require additional
hardware to interface between the computer and the ECU modules. Some examples of
computer-based ECUs are ActiveHome and PowerHome. Both of these ECUs are software
but require a USB computer interface module to transmit the X-10 signal into the electrical
wiring to the network of devices being controlled. Stand-alone ECUs are considerably easier
to set up and do not rely on a computer to function, but they tend to be less cost-effective if
the user has a large network of different devices.
Angel FX ECU
Angel FX is an environmental control system that enables individuals afflicted with severe
disabilities to control Lighting, Heating & Air Conditioning, Communications, Audio /
Video Equipment, Bed Control, Nurse Call Systems, and many other devices within their
home, office, or other facility. Powerful communications features empower individuals who
are seeking to return to work.
The main unit is approximately 1 foot tall, and is designed to be placed on a desktop, or
similar surface. Remote control of the system from a wheelchair, or other location is possible
as an option. The unit offers both visual feedback and high-fidelity audible feedback to the
user, allowing easy system navigation. Menus consist of icons and text, and are customized to
the needs of the user when the system is first installed. Simple control scenarios as well as
complex control scenarios can be accommodated. Its intelligently designed features enhance
the abilities of the user, providing maximum independence.
Angel FX (Fig. 3) is a switch activated system, and can operate in either of two modes, Scan
Mode (which requires only a single switch input signal), or Direct Drive Mode (which
requires two switch input signals). A wide variety of input switch devices can be connected to
provide the interface to the user. A built-in Sip & Puff switch is also available as an option.
Fig. 3 Angel ECU
ActiveHome
- Request help from bed at night with a pleasant remote, plug-in chime
- Securely open front door or garage door from bed when a care attendant arrives without
giving out keys or codes
- Call, answer, name dial, digit dial, and more with landline phone or cell phone
- Stay comfortable controlling your thermostat, space heater, air-conditioner, ceiling fan,
heating pad, electric blanket, fans/ionizer, and more
- Advanced programmable motion control of front entry, garage, & interior lighting and
doors!
- Dim Lights when getting up in morning, going to bed, or watching TV
- Raise or lower your Power Adjustable Bed by voice control!
- Open or close windows, blinds, and drapes
evoassist
evoassist is an innovative product that turns an iPhone, iPod touch or iPad into a universal
environmental control device. Flexible and easy to use, either via touch screen or input
switches, evoassist can be configured to best suit your needs, whether physically disabled or
able bodied.
Key features: