Thanks to visit codestin.com
Credit goes to www.scribd.com

0% found this document useful (0 votes)
16 views9 pages

Computer 7

This paper presents a dynamic input mapping framework designed to enhance the usability of shared autonomy in assistive robotics by intuitively linking joystick movements to robot motions. Through user studies, the framework demonstrated reduced workload and improved usability compared to existing methods, particularly for individuals with disabilities. The approach builds on the accessible gaming community to ensure an intuitive control interface, showcasing its feasibility for users in daily living activities and creative tasks.

Uploaded by

bekendemissie
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
16 views9 pages

Computer 7

This paper presents a dynamic input mapping framework designed to enhance the usability of shared autonomy in assistive robotics by intuitively linking joystick movements to robot motions. Through user studies, the framework demonstrated reduced workload and improved usability compared to existing methods, particularly for individuals with disabilities. The approach builds on the accessible gaming community to ensure an intuitive control interface, showcasing its feasibility for users in daily living activities and creative tasks.

Uploaded by

bekendemissie
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 9

Giving Sense to Inputs: Toward an Accessible

Control Framework for Shared Autonomy


Shalutha Rajapakshe Jean-Marc Odobez Emmanuel Senft
Idiap Research Institute & EPFL Idiap Research Institute & EPFL Idiap Research Institute
Switzerland Switzerland Switzerland
[email protected] [email protected] [email protected]

Abstract—While shared autonomy offers significant poten-


arXiv:2501.16929v1 [cs.RO] 28 Jan 2025

tial for assistive robotics, key questions remain about how to


effectively map 2D control inputs to 6D robot motions. An
intuitive framework should allow users to input commands
effortlessly, with the robot responding as expected, without users
needing to anticipate the impact of their inputs. In this article,
we propose a dynamic input mapping framework that links
joystick movements to motions on control frames defined along a Disk at the
trajectory encoded with canal surfaces. We evaluate our method flip point

in a user study with 20 participants, demonstrating that our


input mapping framework reduces the workload and improves
usability compared to a baseline mapping with similar motion
encoding. To prepare for deployment in assistive scenarios, we
built on the development from the accessible gaming community Input
to select an accessible control interface. We then tested the system
in an exploratory study, where three wheelchair users controlled
the robot for both daily living activities and a creative painting
task, demonstrating its feasibility for users closer to our target Movement when pushed forward
Movement when pushed right
population.
Index Terms—Shared autonomy, human-robot interaction, as- Fig. 1: In this shared autonomy framework, after two demon-
sistive robotics, accessibility
strations, the motion is encoded in a canal surface. While
the robot autonomously navigates within the canal, users can
I. I NTRODUCTION
provide corrections with a gamepad, aligned to the correction
Assistive robotics can be a valuable tool for enhancing the axes on the canal’s cross sections to ensure intuitive mapping.
independence of individuals with disabilities. However, despite
significant technological advances in this field, the adoption of
baseline in a comparative study with three activities: pick-and-
assistive robots in environments inhabited by humans remains
place, painting, and laundry loading, showing greater usability
limited [1]. One key reason behind this lag in adoption is the
and lower workload than the baseline.
need for personalization of assistive robots. Each user has their
Finally, as our method is intended to be used by users with
own personal needs and preferences that need to be followed
disabilities, we took inspiration from the “accessible gaming”
to ensure acceptance. A promising approach to providing this
community to use an accessible gamepad designed for video
personalized assistance is shared autonomy (SA) [2]–[4], a
games and evaluated our approach in an exploratory study
control method that blends human and robot inputs. While
with three wheelchairs users who completed the same tasks
this method allows humans to maintain control of the robot’s
as in the comparative study, demonstrating the feasibility of
behavior, it can impose a high workload, as it typically relies
our approach for wheelchair users.
on 2D joysticks that users are familiar with (e.g., joysticks for
electric wheelchairs) to manage the 6 Cartesian dimensions Overall, our contributions are the following:
(3 for position and 3 for rotation) of the robots, which are • A dynamic mapping framework, developed through pilot
necessary to support various activities of daily living. studies, enabling an intuitive translation of user inputs
In this paper, we propose a novel input mapping frame- into the robot’s control space for aligned manipulation.
work for our already existing geometric SA paradigm, called • A comparative study with 20 participants comparing our
GeoSACS [5], that uses canal surfaces to encode robot motions method to the GeoSACS baseline, and demonstrating
[6]. We dedicated special effort to aligning user inputs from lower workload and higher usability of our system.
2D control interfaces with the current location in the canal to • An accessibility exploratory study with three wheelchair
ensure intuitive and accurate interpretation of user commands users, demonstrating the feasibility of this method for
(see Figure 1). We evaluated our approach against a GeoSACS users closer to our intended population.
II. R ELATED W ORK One challenge when controlling robots using the view frame
Our work is situated in the field of assistive robotics [7], is that, if the viewpoint is not aligned with the robot or the
which provides assistance through robotic agents to a diverse control frames of the method being used, users must put effort
range of people, including individuals with disabilities. This to predict how their inputs will affect the robot’s movements,
assistance enhances autonomy by enabling users to perform often leading to mentally demanding rotations [17], [24],
activities of daily living (ADLs) and become more independent [25]. Prior work has attempted to address user expectation
and expressive, while accommodating their unique needs and mismatches by leveraging additional devices, such as wearable
preferences. Within this domain, various robot control meth- technologies for extracting motion inputs [26], and haptic
ods exist, such as teleoperation (full manual control without sensors for feedback [27]. In contrast, Li et al. [28] presented
system assistance), shared autonomy (where the robot and a method that learns personalized human preferences offline to
user collaborate), and full autonomy (where the system has map user inputs to expected robot behaviors, without relying
complete control and the user has none) [8]. SA [2], [3], [9], on external sensors. We aim to develop a mapping framework
[10] has gained significant traction in recent years due to its that can operate online without the need for offline training,
balance of control: it allows users to retain authority while for practical reasons. Our system is designed to minimize the
enabling customization to their needs [11], all while offloading need for users to guess the impact of their inputs on the robot,
the majority of the task to the robot [4]. reducing the workload for controlling the robot.

A. Low-DoF to High-DoF Robot Control III. M OTIVATING E XAMPLE


One of the challenges of controlling robots in teleoperation To be useful, assistive robots need to be able to complete
and SA is the dimensionality gap [3], [10], [12], [13]: robots a large quantity of tasks, from helping their users to grab an
need to be controlled with six Cartesian dimensions, but item on the floor to helping with ADLs. Furthermore, such a
typical user interfaces such as joysticks already used for robot could also be used beyond chores, for example to help
electric wheelchairs only provide two degrees of freedom someone with severe mobility limitation to draw or paint. In
(DoF) control. Prior work exploring methods for reducing the all these situations, SA can help keep users in control while
dimensionality gap in robot teleoperation and SA has typically the robot reduces their workload. However, to be useful, two
relied on mode switching [14], [15] or predefined mappings main conditions need to be satisfied: (1) teaching new robot
[16]. These methods create open challenges, including the behaviors should be quick, and (2) the shared control paradigm
need for mental rotations [15], [17], the quality and quantity should be intuitive.
of data required to generate control spaces [3], and the lack In this paper, we use a geometric SA framework allowing
of autonomous behaviors [18], leading to increased workload. robots to learn behaviors from two demonstrations showing
Within SA frameworks, previous work has enabled humans the amplitude of the motion. For example, a caregiver could
to control robots through low-dimensional inputs [19], [20]. demonstrate a laundry loading behavior by manually moving
For example, Losey et al. [10] introduced a human-led SA the robot from a basket to the machine twice, once on the right
system that allows users to control a robot by navigating side and once on the left side. Then during behavior execution,
along captured 2D latent dimensions. Similarly, Hagenow et the robot executes a nominal behavior, corresponding roughly
al. presented a corrective SA method [4] in which users issue to the average trajectory, and the user can provide corrections
commands directly to three robot state variables using a 3- using their electric wheelchair joystick to address the vari-
DoF haptic device for robot controlling. GeoSACS [5] offers ability in the new environment and ensure task success, for
a geometric method for mapping 2D joystick inputs onto the example lifting the robot higher for longer garments. However,
disks of a canal [21], which represents the underlying structure to be efficient, the impact of these corrections should be
of a task, generated from just two demonstrations. While intuitive for the user, they should not have to guess what
GeoSACS present opportunities for more intuitive control, would happen if they move the joystick right or forward.
these canals can have intricate bends, causing inconsisten- Consequently, we need a consistent mapping between the user
cies in robot movements. Recognizing this, we build upon joystick and the robot corrective motions that would remain
GeoSACS to improve it and develop our mapping framework. transparent at each step in the task and regardless of the user
perspective.
B. Control Frames
Even with SA systems that reduce the dimensionality gap, IV. BACKGROUND WORK ON C ANAL S URFACES AND
intuitive control still requires user input mapping [22]. A key G EO SACS
design consideration in robot control is the frame of reference, We build our work on top of GeoSACS, which aims at
or control frame, from which users issue commands [23]. The tackling the dimensionality challenge in controlling high-DoF
literature identifies several types of control frames, such as the robots using low-DoF controllers. GeoSACS is based on canal
robot frame, view frame, and task frame, which can be used surfaces [6], a learning from demonstration approach capable
to control robots [17]. In this work, we develop our mapping of encoding robotic behaviors from only two demonstrations.
framework based on the user’s view frame with the intention These canal surfaces are composed of a series of 2D disks,
to enable effortless control of the robot. capable of representing various shapes tailored to different
tasks. Users can provide corrections to guide the robot’s (see Figure 2). When a user issues a directional command via
movement on these 2D disks while the robot navigates along the joystick, the system identifies the relevant disk and extracts
the canal, enabling effective 6D control using a 2D input. the corresponding correction axes (i.e., the correction x-axis
GeoSACS begins by processing two kinesthetic demon- xs and y-axis ys ). For near-horizontal disks, these correction
strations. This phase involves using dynamic time warping axes are projected onto the ground plane, and their alignments
(DTW) [29] to temporally align the demonstrated trajectories, with the joystick axes are calculated using the dot product
followed by a cubic spline-based step filter to further smooth values and their associated signs. For near-vertical disks, one
them. After processing, we generate a regular discretized correction axis is typically more vertical, aligned with the
curve, known as the directrix. At each point ds along the global z-axis, zG , while the other is typically more horizontal,
directrix, the radii of the disks, orthogonal to the tangent vector aligned with the ground plane. jy is aligned with the “vertical”
of the curve, are represented by the function r(s) ∈ R, with correction axis, so when the user pushes the joystick forward
s denoting the discrete state. After generating the canal, the (positive jy direction), the robot moves upward, and vice versa.
next step is to determine the correction axes on the canal’s The remaining correction axis is aligned with jx by projecting
disks. GeoSACS employs a global alignment approach using onto the ground plane and taking the dot product with jx .
spherical linear interpolation (Slerp) to calculate the correction
x-axis, xs on a disk. For the correction y-axis, ys , we use
a local alignment method, applying a windowing strategy to
refine the axis alignment. Within the generated canal and along
the correction axes, trajectory generation follows the ratio rule
[21]. While the robot autonomously navigates the canal, it
pauses its movement when a user issues a command and (a) Near-horizontal (b) Near-vertical disks (c) Near-vertical disks
adjusts along the correction axes on the disk orthogonal to disks facing the user facing sideways
its current direction. Once the user stops providing input, the
Fig. 2: Categories of disks found within a canal
robot resumes navigating the canal from the new point.
As shown in Figure 1, we argue that user inputs should be
aligned with the correction axes on the disks, as otherwise B. Pilot Study
misalignment can lead to control issues due to the varying To experiment with and gather feedback on our mapping
directions of the correction axes caused by the shape and bends method, we conducted a pilot study with three participants
in the generated canal. Additionally, the smoothness of the including three types of tasks: an object relocation task, a
generated canals in GeoSACS is insufficient, especially for laundry loading task, and a painting task (see Figure 3).
tasks like painting, which required special attention. The first two participants used only a single condition (initial
V. M ETHODOLOGY version of our approach) assessing and refining the method
for intuitiveness. We learned that for the near-vertical planes
The goal of our approach is to make control of the robot
facing sideways (see Figure 2c), users expected movement in
more intuitive. We propose to do it in three ways: (1) make the
one direction when the disk was on their left and the opposite
canals smoother to avoid natural backtracking when disks are
when on their right, with direction remaining consistent on
crossing, (2) increase the manipulability where users need it,
each side and only changing when switching sides. The
and (3) provide a more intuitive mapping between user inputs
third participant tested two conditions: the refined mapping
and joystick motions. Our code is available online1 .
framework based on feedback from the first two participants,
As robots are expected to interact in carpentered worlds (i.e.,
and the baseline condition, GeoSACS, allowing us to confirm
environments that have been engineered to have typically flat
the feasibility of a full study for the formal evaluation.
surfaces and right angles), we assume that users mostly need
The canals for the three task types were generated using
to interact with horizontal (e.g., table) and vertical (e.g., wall)
the pre-processing steps outlined in GeoSACS, with only
planes. As such, our method tries to increase manipulability in
two demonstrations. For the studies, we used the Lio robot,
such areas by ensuring that if the final canal sections are close
designed specifically for eldercare and home environments by
to a vertical or horizontal plane, they become aligned. Finally,
F&P Robotics [30], along with an Xbox gaming controller to
we initially assume that if a user pushes the joystick right,
capture user inputs.
the robot should move toward the “right” and if the joystick
is pushed forward, the robot should either go forward (if the C. Dynamic Input Mapping Framework
disk on the canal is near horizontal) or up (if the disk is near Building on insights from the pilot studies, we finalized
vertical). our input mapping strategy. The intuition is that for horizontal
A. Initial mapping disks, the “right” and “forward” joystick directions are aligned
with the robot motions; for vertical disks facing the user,
To align user inputs to corrections on a disk, we identify
“forward” is mapped to “up” for the robot, “right” is aligned;
two situations, near-horizontal disks and near vertical disks
and for sideways disks, “forward” is mapped to “up”, and
1 https://gitlab.idiap.ch/hrai/geosacs.git “right” will be mapped to “forward” if the disk is on the
Algorithm 1 Dynamic Input Mapping Based on this classification, we force the last part of the
1: Input: xs , ys , jx , jy , eT (s), zG , ds canal (set empirically to 20% in our experiments) to be
2: Output: Ax, Ay , Dx , Dy either vertical or horizontal. This guarantees that the robot
3:
(s)·zG
θ ← arccos |eeTT(s)||z moves through these regions along clean vertical or horizontal
G|
paths, simplifying user control and interpretability. Due to
4: if π3 < θ < 2π 3 then this forced alignment of the disks at the ends, intersections
5: disk orientation ← “near-horizontal” between adjacent disks may occur, that we minimize through
6: else optimization.
7: disk orientation ← “near-vertical”
2) Moving Out of Canals: During the pilot studies, par-
8: end if
ticipants also mentioned that it would be helpful if the robot
9: if disk orientation is “near-horizontal” then
could move slightly beyond its current range when needed.
10: px , py ← PROJECTION(xs , ys )
This issue was especially noticeable during the laundry task,
11: Ax , Dx ← (|jx · px | > |jx · py |) ? (xs , S IGN(jx · px )) :
where clothes would sometimes hang from the edge of the
(ys , S IGN(jx · py ))
laundry drum. When participants attempted to move the robot
12: Ay , Dy ← (|jy · px | > |jy · py |) ? (xs , S IGN(jy · px )) :
down to push the clothes further inside, it did not respond
(ys , S IGN(jy · py ))
as they expected. This limitation occurred because, during the
13: else if disk orientation is “near-vertical” then
demonstrations, it was difficult to demonstrate an exact margin
14: if |ys · zG | > |xs · zG | then
without risking a collision. To address this, we implemented
15: Ay ← ys , Ax ← xs , Dy ← S IGN(ys · zG )
a method that allows users to move partially outside the
16: else
generated canal. However, we limited this extension to prevent
17: Ay ← xs , Ax ← ys , Dy ← S IGN(xs · zG )
any safety issues that could arise from unrestricted movement
18: end if
beyond the canal. For safety purposes, after the correction is
19: px ← PROJECTION  (Ax)
p ·j applied, we gradually shrink the radius, ensuring that after a
20: θAx ← arccos |p x||jy | window of 10 disks (which in practice is less than 5cm), the
x y
21: if 5π π
6 > θAx > 6 then ▷ The disk is facing the user robot returns to moving along the canal’s boundary.
22: Dx ← SIGN(jx · px )
23: else ▷ The disk is facing sideways VI. C OMPARATIVE U SER S TUDY
24: Px , Py , Pz ← R⊤ · ds , R ← [jx jy (jx × jy )] Our first goal in this article is to evaluate the new mapping
25: Dx ← (Px < 0) ? SIGN(jy · px ) : −SIGN(jy · px ) system proposed here. To do so, we conducted a user study
26: end if with 20 abled participants recruited from the research institu-
27: end if tion, and mixing both technical and administrative staff. The
28: return Ax , Ay , Dx , Dy study aims to compare our method to a GeoSACS baseline.
However, it is important to note that since GeoSACS lacks a
method to refine the canal and ensure the disks are vertical as
user’s left side, and to “back” if the disk is on the right side. in our case, for the painting task, we had to manually adjust
For a more precise explanation, refer to Algorithm 1 with the the disk near the painting board to be perfectly vertical.
following notations: the disk where the correction is applied
A. Tasks
denoted as Cs , the correction axes on Cs as xs and ys , the
coordinates of the directrix point at the center of Cs in global Similarly to our pilot study, we used three different tasks
frame as ds , and the tangent vector to the directrix at that point to evaluate our method. Participants had up to eight minutes
as eT (s). Finally, the aligned correction axis and direction for to complete each task, or could stop when they considered
jx are denoted as Ax and Dx , and for jy , as Ay and Dy , with having finished the task.
a projection function indicating the projection to the ground. 1) Object Relocation Task: : In this task (see Figure 3 top
row), participants are asked to move three objects randomly
D. Other Improvements located from a table on one side of the robot to a location
1) Smoothing Canal Surfaces: To simplify manipulations marked on a printed sheet on a second table on the other
on horizontal and vertical planes at the extremities of the side of the robot. The motivation for this task is to evaluate
canal for pick-and-place or painting, we ensured that the the performance in general tabletop pick-and-place tasks that
tangent vectors near the ends would be orthogonal to either a could be encountered in daily life. Two demonstrations were
horizontal or vertical plane, and we perform an optimization conducted in a way that the canal is generated to link the two
to smoothen the transition between the aligned disks and the tables, allowing users to guide the robot in placing the objects
non-aligned ones. We start by classifying the canal end Es at their intended goal locations.
as either vertical or horizontal, by computing θ, the average 2) Painting Task: : In this task (see Figure 3 middle row),
angular distance between the mean direction of the last 10 participants are asked to pick up brushes with custom-designed
tangent vectors and the global z axis zG and then align it with holders on a table (pink, blue, and yellow) and use them to
the closest axis. paint as they wish on a paper sheet on the wall in front of the
Object Relocation Canal Data Collection consisted of four joystick buttons: one to start, one to change
the movement direction within the canal, one to open/close the
gripper, and one to stop. Additionally, the left 2D joystick of
the Xbox gamepad was used for providing corrections on the
disks. The controls remained the same for both methods, with
the only difference being the robot’s movements resulting from
Painting

the online mapping mechanism. The training was structured


as a tabletop pick-and-place task, where we placed a can on
Laundry Loading

one table and a paint brush with the custom-designed holder in


an empty container, and introduced the control mechanism to
the participants. During the training phase, the object location
markers for the object relocation task were covered with paper.
Fig. 3: Data collection from participants including the gener- Participants were given 8 minutes to freely explore the controls
ated canals for the three tasks. and observe their effect on the robot. Most participants picked
up the can and placed it on the other table, while some
attempted to grasp the paint brush from the holder and place
robot. This task is inspired from [31] supporting the idea that it in the can on the other table. Afterward, they proceeded to
assistive robots should also support creativity. As such, this complete the tasks in the order previously mentioned. After
task’s evaluation is primarily subjective. Due to the challenge finishing a set of tasks, participants answered a questionnaire
of picking up brushes, participants were instructed to grasp (see next section). This process, including training, tasks, and
the paint brush as best as they could, and the experimenter questionnaire, was followed for the second condition. Finally,
would manually check the brush and ensure it is properly the experiment concluded with a semi-structured interview,
grasped, allowing participants to focus on their painting. debriefing, and compensation.
To switch brushes, participants could drop them on a plate C. Variables
before picking another one. If needed, experimenters would 1) Independent Variables: Our study has two conditions:
refill the container and replace the brush inside, restoring (1) the GeoSACS SA baseline (with the painting canal verti-
it to the original position. The canal was created from two calized), and (2) our approach with additional input mapping
demonstrations to cover the three brush holders and the plate and canal smoothing.
on one side, and the vertical paper sheet on the other side. 2) Dependent Measures - Objective: For both the object
3) Laundry Loading Task: : In the last task, participants relocation and laundry loading tasks, we measured the time
were instructed to load five clothing pieces from a basket to taken to complete the task and the proportion of time spent
a laundry machine (see Figure 3 bottom row). The goal of on providing corrections relative to the total task time. For the
this task is to evaluate the system’s performance in real ADLs object relocation task, our performance metric was the number
with 3D interaction. The positions of the clothing items were of objects correctly positioned at the target location within the
randomized between experiments. Participants were instructed allotted time. For the laundry loading task, we implemented
to push any clothes hanging out of the laundry drum door after a point-based scoring system: participants received two points
their initial attempt to place them inside. The canal was created for each piece of clothing fully placed inside the laundry drum,
from two demonstrations to cover the laundry basket on the one point if part of the clothing was hanging from the laundry
ground and to go inside the laundry machine. drum door, and zero points if the clothing remained in the
B. Procedure basket or fell on the floor where it could no longer be picked
up. Thus, the maximum score a participant could achieve for
The study procedure was reviewed and approved by the
the five clothing pieces was 10 points.
ethics committee of our institution. We employed a within-
3) Dependent Measures - Subjective: At the end of each
subjects design, with the two methods counterbalanced to
condition, we asked participants to complete a 7-point Likert
minimize order effects, and the task order was fixed (object
scale using the USE questionnaire [32], focusing on the
relocation, painting, and laundry loading).
sections for ease of use, ease of learning, and satisfaction, for
At the beginning, participants were given a brief intro-
a total of 22 questions. Additionally, participants completed
duction to the study and the tasks involved, followed by
the NASA-TLX questionnaire [33] to assess the workload
obtaining their informed consent. Participants then completed
associated with each method. A semi-structured interview was
a demographic questionnaire, which included questions on a
conducted at the very end of the study to gather feedback on
5-point Likert scale about their prior experience with robots
usability, efficiency, perceived safety, workload, and overall
and gamepads, such as an Xbox joystick. All participants were
impressions of the method.
compensated CHF 37.5 for their participation in the study, and
the study lasted around one and half hours. D. Hypothesis
We then moved into the training phase, with the goal of Our evaluation consisted of the following two hypotheses
familiarizing participants with the gamepad controls. Our setup (with their associated predictions):
H1: Users will complete tasks efficiently using GeoSACS TABLE I: Completion and correction times for the two meth-
combined with the mapping framework compared to using ods in the object relocation and laundry loading tasks.
GeoSACS alone. GeoSACS Ours
Task Metric p-Val
P1.1: Users will complete object relocation task in less Mean SD Mean SD
time with our method.
Object Completion time 259.02 71.05 220.45 36.60 <0.05
P1.2: Users will complete object relocation task with relocation Correction time 69.29 52.94 54.70 30.22 0.058
fewer corrections using our method.
Laundry Completion time 427.13 69.70 385.84 75.50 0.053
P1.3: Users will position objects more accurately in the loading Correction time 121.43 44.05 107.57 42.64 0.26
object relocation task using our method.
P1.4: Users will achieve higher scores for placing
clothes inside the laundry machine with our method.
the analysis, as they spent excessive time pushing clothes
As we anticipated that not all participants would complete into the machine rather than following the instructions to
the laundry loading task within the allocated time, we excluded first place as many clothes as possible. This participant was
times related to the laundry task from our predictions. identified as an outlier using Tukey’s fences, which justified
H2: GeoSACS combined with the mapping framework will their removal from the final calculation. Together, these results
be more intuitive than GeoSACS alone. provide partial support for hypothesis H1.
P2.1: Users will report higher usability with our method. Regarding subjective measures, for the painting task, all
P2.2: Users will report lower workload with our method. participants preferred our method. Except for one, all partic-
E. Population ipants used all three colors to complete their paintings. With
GeoSACS, participants generally struggled to create anything
The comparative study was conducted with 20 abled par-
meaningful. In contrast, with our method, several participants
ticipants (twelve male, eight female) aged 25 to 48 (Mean =
were able to produce recognizable paintings, such as flowers
30.65, SD = 5.9268), primarily recruited from our research
with a sun in the background, hearts, smiley faces, and even
institute, with one participant recruited externally by word
letters (see Figure 4b). Figure 4a shows participants’ evalua-
of mouth. Our participants were composed of a mixture of
tions of ease of use, ease of learning, and satisfaction for both
technical staff and administrative staff. The mean values for
methods, as well as the ratings across the six dimensions of the
prior experience with robots and gamepads were observed
NASA-TLX. The results indicate that our method consistently
to be 2.05 ± 1.36 and 3.30 ± 0.95, respectively, based on
outperforms GeoSACS across all measured metrics, validating
responses collected using a 5-point Likert scale, where lower
our predictions P2.1 and P2.2, and ultimately supporting H2.
scores indicate less prior experience.
The results from the semi-structured interviews conducted
F. Results at the end of the experiments further reinforced our findings.
The results for the object relocation and laundry loading Every participant preferred our method for all tasks, except for
tasks are presented in Table I. We used a Wilcoxon signed-rank one participant who favored GeoSACS for the laundry loading
test to calculate the p-values, as a Shapiro-Wilk test indicated task. For the intuitiveness of the mapping, several participants
that the data did not follow a normal distribution, except for the remarked, “The second method [ours] was more intuitive”, “I
correction time in the laundry loading task, where the data was liked the first method [ours] very much”, “It is intuitive to use”,
normally distributed, and where we conducted a paired t-test. “It was great”, and “The first one [ours] felt natural way to
These results supported P1.1 but did not support P2.2, despite control”, further confirming the intuitiveness of our mapping
a trend toward significance. A possible explanation, observed framework. Regarding the ease of use of our method, a few
during the studies, is that some participants frequently issued participants commented, “It was easy to use”, ”You don’t even
corrections in both conditions instead of utilizing the robot’s have to think”, and “The first one [ours] was quite easy”.
autonomous routines effectively. Several participants specifically commented, “This is like
Moreover, all 20 participants successfully placed all three playing a game”, “It was easy to use, friendly, and fun”, and
objects using our method, compared to an average score of “It was really fun to use”. These sentiments, coupled with
2.55 (SD = 0.58) for GeoSACS alone, with a Wilcoxon test p- 12 participants selecting “strongly agree” for the question
value below 0.01 (Z = 0), supporting P1.3. The discrepancy in “It is fun to use” in the satisfaction section of the USE
performance stemmed from the trial-and-error approach used questionnaire, highlight that our method is not only effective
with GeoSACS, where, during object picking, the end effector but also enjoyable for users.
would occasionally disturb nearby objects. This made it harder Regarding suggestions, most participants recommended
to grasp these objects later, as some would roll into positions adding a pause button to stop the robot at any point and
where they could no longer be retrieved. then resume it by pressing the same button again. Another
For the laundry loading task, our method achieved a higher common suggestion was to provide control over the robot’s
average score of 9.26 (SD = 0.96) compared to 8.26 (SD execution speed. Several participants proposed the option to
= 0.95) for GeoSACS (Z = 2.262, p < 0.01 (Wilcoxon)), slow down in areas of high uncertainty, such as near tables,
supporting P1.4. One participant’s data was excluded from the laundry basket, or the painting board, which would allow
(a) Participant ratings for ease of use, ease of learning, and overall satisfaction (higher the better (b) Some meaningful paintings
with p-values calculated using paired t-test). Workload was assessed across the six dimensions of the done by participants using our
NASA-TLX scale (lower the better with p-values calculated using Wilcoxon test). [(*) denotes p < method within 8 minutes. They suc-
0.05, (**) denotes p < 0.01, and (***) denotes p < 0.001]. cessfully used all three colors.
Fig. 4: Subjective results from the comparative study conducted with 20 participants.

for more precise task completion. Lastly, a few participants could temporarily walk. One participant also brought their
suggested introducing a separate button to rotate the end partner to the study site. For local ethical reasons, we cannot
effector, enabling them to adjust the grip as needed, rather than report health data of participants. We recruited participants
being limited to the grasp orientation demonstrated initially. from the local community by distributing flyers to local groups
and therapists. Participants were aged 52 to 61 (Mean = 56.67,
VII. ACCESSIBILITY W ORK SD = 3.68) with two females and one male. The study lasted
We then explored how to adapt our method for users with around one hour and half and was compensated with CHF 50.
disabilities, the population for which assistive robots would
B. Procedure
make most sense. For example, we previously used a standard
Xbox gamepad in the comparative study, which may not be The procedure followed was similar to the comparative
well-suited for individuals with disabilities due to its small study, with the same consent process, training, and tasks in
buttons and joysticks limiting accessibility. We first looked the same order. However, we made the following key changes
for wheelchair inspired joysticks, but mostly found limited, for accessibility: (1) we only evaluated our method, (2) we did
costly, and older devices. We then explored the work done not impose any time limit, (3) the experimenter could provide
in the accessible gaming community [34], which has explored some occasional verbal guidelines and encouragements, and
more accessible interfaces for disabled gamers [35], and finally (4) we did not ask them to complete the full questionnaires as
found the recently released Sony Access Controller2 . This it can be tiring and complicated to do, instead we used these
controller is a specialized gamepad with customizable buttons questions to drive our semi-structured interview.
and a larger and easier to manipulate joystick closer to those C. Results and Observations
found in electric wheelchairs, and as such, we decided to use All the wheelchair participants were successful in the relo-
it for our future work. cation task, however for two participants, one object needed
VIII. E XPLORATORY STUDY WITH WHEELCHAIR USERS to be replaced in their initial position after being pushed
accidentally. The painting task proved a bit more challenging,
Based on the results of our comparative study and ac-
participants tended to give themselves short-term objectives
cessibility work, we conducted an exploratory study of our
(e.g., drawing a spiral or filling a rectangle) and achieved
approach with wheelchair users to evaluate it with a popula-
partial success. For the laundry tasks, participants scored 9,
tion closer to our target audience. This served as an initial
9, and 10 without the experimenter help (but the robot had to
evaluation before implementing further design improvements
be restarted once due to a collision and network issue).
in future, which will be guided by a participatory design During the study, one participant commented that the
approach aimed at bringing SA-based assistive robotics to real gamepad reminded them of a wheelchair joystick, which was
users who could benefit from it. the intention behind the selection. Another participant was
A. Population talking to the robot, providing it with encouragement and
instructions. However, a participant also reported challenges to
This study was conducted with 3 participants, all wheelchair
see perspectives, which impacted their use of the robot. During
users, two used manual wheelchairs with one using additional
the debrief session with the experimenter, they suggested that
electric assistance, and one used an electric wheelchair but
having access to the front camera could be useful.
2 Sony Access Controller: https://www.playstation.com/en-gb/accessories/ One participant stated that such a system could already be
access-controller/ useful, especially for laundry, as both loading the machine and
removing clothes from a drying rack are challenging. Others and provided us additional knowledge about the potential
mentioned that it could be useful to lift heavy objects from users benefiting most from the system as well as design
the ground or access things high up. However, all participants improvements.
reported that it would be more useful for people with severe B. Limitations and Future Work
mobility limitations: “with this type of things we could do
1) Limitation of the approach: Despite the proposed
a lot of things, especially for disabled people with [heavier
method allowing some flexibility to move outside the gen-
pathologies]”, another participant even mentioned that they
erated canal, the robot’s motions are still largely confined to
would recommend the study to a friend.
the canal structure. This limitation was particularly noticeable
Two participants mentioned that the system was easy to use,
when objects or clothes accidentally fell into areas outside
and while painting was reported as being more challenging,
the robot’s reach due to the canal’s constraints. Additionally,
they reported they enjoyed it, and one mentioning that it
participants sometimes struggled to maintain a clear view of
would take a few trials to get better at it. The last participant
the task, with their line of sight obstructed by the robot’s body.
had a few more challenges, partially due to the perspective
This issue was especially evident during the laundry and paint-
and a perceived high temporal load, when actions needed
ing tasks, where participants were seen shifting their heads to
to be synchronized: “when it worked for the first time, I
gain a better perspective, which was especially challenging for
was very happy, [...] but when it didn’t work, [I was] more
wheelchair users. Moreover, further refinements are necessary
lost”. They compared their experience with electric wheelchair,
for the system to handle tasks requiring precise manipulation.
stating that it would be useful to adapt the speed to have
The current system also lacks environmental awareness and
fine control when needed. Additionally, they suggested that
still relies on kinesthetic demonstrations.
for home deployments, it would be useful to have a cheat
For future work, we plan to integrate vision-based tech-
sheet with the different buttons.
niques to enable the generation of demonstration-free, lightly
An interesting point is that participants assigned agency
constrained canals, and communicate more information to the
to the robot, talking to it, using expressions like needing to
user, e.g., by displaying the robot view. Building on participant
“give it some time”, “it did what it wanted”, or “[when I was
feedback, we also aim to develop a velocity adjustment
rushing it] it felt like it was losing its memory”. Overall, every
mechanism and an end effector orientation control system,
participant mentioned it was fun and volunteered to participate
which will enhance the robot’s capabilities for more precise
in more studies.
manipulation tasks.
IX. D ISCUSSION 2) Limitation of the study: Our study also contained limita-
tions. First, it only compared our approach to a single baseline
The results from the comparative study indicate that our
and in a short-term interaction. We made this decision as
method holds significant promise for handling complex tasks.
other SA frameworks can be challenging to adapt to complex
Additionally, the paintings created by participants demonstrate
3D tasks as the one explored in this paper. Furthermore, the
the feasibility of our approach for creative and intricate tasks.
comparative study was limited to 20 participants, and we
Moreover, the initial exploratory study with wheelchair users
only recruited 3 wheelchair users in our exploratory study.
highlighted strong engagement and the potential of our method
To enhance the validity of our findings, we plan to involve a
for individuals with disabilities. However, further refinements
broader and more diverse population in our future work.
through participatory design and more rigorous evaluations are Additionally, we plan to conduct more in depth participatory
necessary to fully realize its potential. design with wheelchair users in the future to improve the
A. Insights accessibility, usability, and usefulness of our approach for
this population. Finally, the willingness of our wheelchair
Overall, participants found the method to be flexible and
participants to engage in future research activities is a positive
intuitive to control. During the study, they appeared relaxed
sign for the next steps of this research.
due to the robot’s autonomous behavior, allowing them to in-
tervene only when necessary. Many participants even engaged X. C ONCLUSION
in conversation with the robot, offering comments such as In this paper, we presented the design and evaluation of our
“Bravo Lio”, “No no no”, and “Oh, sorry Lio” which suggests input mapping mechanism for controlling high-DoF robots,
that a more interactive environment could be developed with aimed at enhancing usability of assistive robots. A comparative
feedback mechanisms, including verbal responses from the user study with 20 abled participants across a range of tasks
robot. Interestingly, a few participants initially expressed ap- demonstrated the efficiency, usability, and reduced workload
prehension when introduced to the tasks. However, by the end enabled by our method compared to a baseline. Additionally,
of the study, all participants reported feeling confident about we conducted an exploratory study with 3 wheelchair users,
the robot’s safety and their own. This feedback indicates that utilizing a specialized gamepad with larger buttons and joy-
our SA approach not only enabled convenient robot control, stick similar to those found on wheelchairs. The results from
but also increased user confidence to use such systems. this study, serving as a first step toward a participatory design
Our exploratory study with wheelchair users provided initial approach, suggest that our method shows great potential for
support toward the usability of our system by this population, assisting individuals with disabilities.
R EFERENCES [20] T. Carlson and J. del R. Millan, “Brain-controlled wheelchairs: A robotic
architecture,” IEEE Robotics & Automation Magazine, vol. 20, no. 1, pp.
[1] G. Bardaro, A. Antonini, and E. Motta, “Robots for elderly care in the 65–73, 2013.
home: A landscape analysis and co-design toolkit,” International Journal [21] S. R. Ahmadzadeh, R. Kaushik, and S. Chernova, “Trajectory learning
of Social Robotics, vol. 14, 04 2022. from demonstration with canal surfaces: A parameter-free approach,”
[2] M. Selvaggio, M. Cognetti, S. Nikolaidis, S. Ivaldi, and B. Siciliano, in 2016 IEEE-RAS 16th International Conference on Humanoid Robots
“Autonomy in physical human-robot interaction: A brief survey,” IEEE (Humanoids), 2016, pp. 544–549.
Robotics and Automation Letters, vol. 6, no. 4, pp. 7989–7996, 2021. [22] Y. Wang, P. Praveena, and M. Gleicher, “A design space of control
[3] Y. Cui, S. Karamcheti, R. Palleti, N. Shivakumar, P. Liang, coordinate systems in telemanipulation,” IEEE Access, vol. 12, pp.
and D. Sadigh, “No, to the right: Online language corrections 64 150–64 164, 2024.
for robotic manipulation via shared autonomy,” in Proceedings of [23] P. Praveena, L. Molina, Y. Wang, E. Senft, B. Mutlu, and M. Gleicher,
the 2023 ACM/IEEE International Conference on Human-Robot “Understanding control frames in multi-camera robot telemanipulation,”
Interaction, ser. HRI ’23. New York, NY, USA: Association 03 2022, pp. 432–440.
for Computing Machinery, 2023, p. 93–101. [Online]. Available: [24] B. P. DeJong, J. E. Colgate, and M. A. Peshkin,
https://doi.org/10.1145/3568162.3578623 Mental Transformations in Human-Robot Interaction. Dordrecht:
[4] M. Hagenow, E. Senft, R. Radwin, M. Gleicher, B. Mutlu, and M. Zinn, Springer Netherlands, 2011, pp. 35–51. [Online]. Available:
“Corrective shared autonomy for addressing task variability,” IEEE https://doi.org/10.1007/978-94-007-0582-1 3
Robotics and Automation Letters, vol. 6, no. 2, pp. 3720–3727, 2021. [25] D. Gopinath, M. N. Javaremi, and B. Argall, “Customized handling
[5] S. Rajapakshe, A. Dastenavar, M. Hagenow, J.-M. Odobez, and of unintended interface operation in assistive robots,” in 2021 IEEE
E. Senft, “Geosacs: Geometric shared autonomy via canal surfaces,” International Conference on Robotics and Automation (ICRA), 2021,
2024. [Online]. Available: https://arxiv.org/abs/2404.09584 pp. 10 406–10 412.
[6] D. Hilbert and S. Cohn-Vossen, “Geometry and the Imagination,” [26] R. Ribeiro, J. Ramos, D. Safadinho, and A. M. de Jesus Pereira,
Physics Today, vol. 6, no. 5, pp. 19–19, 05 1953. [Online]. Available: “Uav for everyone : An intuitive control alternative for drone racing
https://doi.org/10.1063/1.3061234 competitions,” in 2018 2nd International Conference on Technology and
[7] M. A. Goodrich and A. C. Schultz, “Human-robot interaction: A survey,” Innovation in Sports, Health and Wellbeing (TISHW), 2018, pp. 1–8.
2008. [27] I. Hussain, L. Meli, C. Pacchierotti, G. Salvietti, and D. Prattichizzo,
[8] J. M. Beer, A. D. Fisk, and W. A. Rogers, “Toward a framework for “Vibrotactile haptic feedback for intuitive control of robotic extra
levels of robot autonomy in human-robot interaction,” J. Hum.-Robot fingers,” in 2015 IEEE World Haptics Conference (WHC), 2015, pp.
Interact., vol. 3, no. 2, p. 74–99, jul 2014. [Online]. Available: 394–399.
https://doi.org/10.5898/JHRI.3.2.Beer [28] M. Li, D. P. Losey, J. Bohg, and D. Sadigh, “Learning user-preferred
[9] S. Javdani, H. Admoni, S. Pellegrinelli, S. Srinivasa, and J. Bagnell, mappings for intuitive robot control,” in 2020 IEEE/RSJ International
“Shared autonomy via hindsight optimization for teleoperation and Conference on Intelligent Robots and Systems (IROS), 2020, pp. 10 960–
teaming,” The International Journal of Robotics Research, vol. 37, 05 10 967.
2017. [29] T. Giorgino, “Computing and visualizing dynamic time warping
[10] D. P. Losey, K. Srinivasan, A. Mandlekar, A. Garg, and D. Sadigh, alignments in r: The dtw package,” Journal of Statistical Software,
“Controlling assistive robots with learned latent actions,” in 2020 IEEE vol. 31, no. 7, p. 1–24, 2009. [Online]. Available: https://www.jstatsoft.
International Conference on Robotics and Automation (ICRA), 2020, pp. org/index.php/jss/article/view/v031i07
378–384. [30] J. Mišeikis, P. Caroni, P. Duchamp, A. Gasser, R. Marko, N. Mišeikienė,
[11] V. R. K. Sumukha Udupa and C. C. Menassa, “Shared autonomy F. Zwilling, C. de Castelbajac, L. Eicher, M. Früh, and H. Früh,
in assistive mobile robots: a review,” Disability and Rehabilitation: “Lio-a personal robot assistant for human-robot interaction and care
Assistive Technology, vol. 18, no. 6, pp. 827–848, 2023, pMID: applications,” IEEE Robotics and Automation Letters, vol. 5, no. 4, pp.
34133906. [Online]. Available: https://doi.org/10.1080/17483107.2021. 5339–5346, 2020.
1928778 [31] I. Sheidlower, M. Murdock, E. Bethel, R. M. Aronson, and E. S. Short,
[12] Z. Li, F. Xie, Y. Ye, P. Li, and X. Liu, “A novel 6-dof force-sensed “Online behavior modification for expressive user control of rl-trained
human-robot interface for an intuitive teleoperation,” Chinese Journal robots,” in Proceedings of the 2024 ACM/IEEE International Conference
of Mechanical Engineering, vol. 35, p. 138, 12 2022. on Human-Robot Interaction, 2024, pp. 639–648.
[13] M. Przystupa, K. Johnstonbaugh, Z. Zhang, L. Petrich, M. Dehghan, [32] A. Lund, “Measuring usability with the use questionnaire,” Usability
F. Haghverd, and M. Jagersand, “Learning state conditioned linear and User Experience Newsletter of the STC Usability SIG, vol. 8, 01
mappings for low-dimensional control of robotic manipulators,” in 2023 2001.
IEEE International Conference on Robotics and Automation (ICRA), [33] S. G. Hart and L. E. Staveland, “Development of nasa-tlx (task load
2023, pp. 857–863. index): Results of empirical and theoretical research,” in Human Mental
[14] L. Herlant, R. Holladay, and S. Srinivasa, “Assistive teleoperation of Workload, ser. Advances in Psychology, P. A. Hancock and N. Meshkati,
robot arms via automatic time-optimal mode switching,” vol. 2016, 03 Eds. North-Holland, 1988, vol. 52, pp. 139–183. [Online]. Available:
2016. https://www.sciencedirect.com/science/article/pii/S0166411508623869
[15] C. P. Quintero, O. Ramirez, and M. Jägersand, “Vibi: Assistive vision- [34] D. Grammenos, A. Savidis, and C. Stephanidis, “Designing universally
based interface for robot manipulation,” in 2015 IEEE International accessible games,” Computers in Entertainment (CIE), vol. 7, no. 1, pp.
Conference on Robotics and Automation (ICRA), 2015, pp. 4458–4463. 1–29, 2009.
[16] F. Abi-Farraj, C. Pacchierotti, and P. R. Giordano, “User evaluation of [35] D. Maggiorini, M. Granato, L. A. Ripamonti, M. Marras, and D. Gadia,
a haptic-enabled shared-control approach for robotic telemanipulation,” “Evolution of game controllers: Toward the support of gamers with
in 2018 IEEE/RSJ International Conference on Intelligent Robots and physical disabilities,” in Computer-Human Interaction Research and
Systems (IROS), 2018, pp. 1–9. Applications: First International Conference, CHIRA 2017, Funchal,
[17] L. M. Hiatt and R. Simmons, “Coordinate frames in robotic teleopera- Madeira, Portugal, October 31–November 2, 2017, Revised Selected
tion,” in 2006 IEEE/RSJ International Conference on Intelligent Robots Papers 1. Springer, 2019, pp. 66–89.
and Systems, 2006, pp. 1712–1719.
[18] A. Padmanabha, J. Gupta, C. Chen, J. Yang, V. Nguyen, D. J. Weber,
C. Majidi, and Z. Erickson, “Independence in the home: A wearable
interface for a person with quadriplegia to teleoperate a mobile
manipulator,” in Proceedings of the 2024 ACM/IEEE International
Conference on Human-Robot Interaction, ser. HRI ’24. New York,
NY, USA: Association for Computing Machinery, 2024, p. 542–551.
[Online]. Available: https://doi.org/10.1145/3610977.3634964
[19] B. D. Argall, “Autonomy in rehabilitation robotics: An intersection,”
Annual Review of Control, Robotics, and Autonomous Systems, vol. 1,
pp. 441–463, May 2018.

You might also like