This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TIM.2019.2930710, IEEE Transactions on Instrumentation and Measurement

IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
Abstract—We give a universal analytical solution to the hand-eye calibration problem AX = XB with known matrices A, B and unknown variable X, all in the special Euclidean group SE(3). The developed method relies on 4-dimensional Procrustes analysis. A unit-octonion representation is proposed for the first time to solve such a Procrustes problem, through which an optimal closed-form eigendecomposition solution is derived. By virtue of this solution, the uncertainty description of X, previously a sophisticated problem, can be obtained in a simpler manner. The proposed approach is then verified using simulations and real-world experiments on an industrial robotic arm. The results indicate that it achieves better accuracy, a better description of uncertainty, and consumes much less computation time.

Index Terms—Hand-eye Calibration, Homogeneous Transformation, Least Squares, Quaternions, Octonions

I. INTRODUCTION

The main hand-eye calibration problem studied in this paper aims to compute the unknown relative homogeneous transformation X between a robotic gripper and an attached camera, whose poses are denoted as A and B respectively, such that AX = XB. Hand-eye calibration can be solved via general solutions to the AX = XB problem or by minimizing direct models established using reprojection errors [1]. However, the hand-eye problem AX = XB is not restricted to manipulator-camera calibration. Rather, it has been applied to multiple sensor calibration problems including magnetic/inertial ones [2], camera/magnetic ones [3] and other general models [4]. That is to say, the solution of AX = XB is more general and has broader applications than methods based on reprojection-error minimization.

The early study of the hand-eye calibration problem dates back to the 1980s, when researchers tried to determine the gripper-camera transformation for accurate robotic perception and reconstruction. Over the past 30 years, a large variety of algorithms have been developed to solve the hand-eye problem AX = XB. Generally speaking, they can be categorized into two groups. The first group consists of algorithms that calculate the rotation in a first step and then compute the translation in a second step, while in the second group, algorithms compute the rotation and translation simultaneously. Many methods belong to the first group, which we call separated methods, including representatives based on rotation logarithms like Tsai et al. [5], Shiu et al. [6], Park et al. [7], Horaud et al. [8] and a quaternion-based one from Chou et al. [9]. The simultaneous methods form the second group, with representatives of

1) Analytical solutions: the quaternion-based method by Lu et al. [10], the dual-quaternion based one by Daniilidis [11], the Sylvester-equation based one by Andreff et al. [12], and the dual-tensor based one by Condurache et al. [13].
2) Numerical solutions: gradient/Newton methods by Gwak et al. [14], the linear-matrix-inequality (LMI) based one by Heller et al. [15], the alternative-linear-programming based one by Zhao [16], and pseudo-inverse based ones by Zhang et al. [3], [17].

Each kind of algorithm has its own pros and cons. The separated ones cannot produce good results in cases where translation measurements are more accurate than rotation measurements. The simultaneous ones can achieve better optimization performance but may consume a large amount of time when using numerical iterations. Some algorithms also suffer from their own ill-posed conditions in the presence of extreme datasets [18]. What is more, the uncertainty description of X in the hand-eye problem AX = XB, being an important but difficult problem, troubled researchers until the first public general iterative solution by Nguyen et al. in 2018 [19]. An intuitive overview of these algorithms in order of publication time can be found in Table I.

Till now, hand-eye calibration has accelerated the development of the robotics community owing to its various uses in sensor calibration and motion sensing [20], [21]. Although it has been quite a long time since the first proposal of hand-eye calibration, research around it is still very active. There remains the problem that no algorithm can simultaneously estimate the X in AX = XB while

Manuscript received April 24, 2019; revised June 3, 2019 and June 18, 2019; accepted July 11, 2019. This research was supported by Shenzhen Science, Technology and Innovation Commission (SZSTI) JCYJ20160401100022706, in part by National Natural Science Foundation of China under grants No. U1713211 and 41604025. The Associate Editor coordinating the review process was XXX. (Corresponding author: Ming Liu)
J. Wu, Y. Sun and M. Liu are with the Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong, China (e-mail: jin wu [email protected]; [email protected]; [email protected]).
M. Wang is with the Department of Electronic and Computer Engineering, Western University, London, Ontario, Canada (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier XXX
0018-9456 (c) 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
TABLE I
COMPARISONS BETWEEN RELATED METHODS
Methods Type Parameterization or Basic Tools Computation Speed Accuracy Has Uncertainty Description?
Tsai et al. 1989 [5] Separated, Analytical Rotation Logarithms High Medium No
Shiu et al. 1989 [6] Separated, Analytical Rotation Logarithms Low Low No
Park et al. 1994 [7] Separated, Analytical Rotation Logarithms, SVD Medium Medium No
Horaud 1995 [8] Separated, Analytical Rotation Logarithms, Eigen-decomposition High Medium No
Chou et al. 1991 [9] Separated, Analytical Quaternion, SVD High Medium No
Daniilidis 1999 [11] Simultaneous, Analytical Dual Quaternion, SVD Medium Medium No
Andreff et al. 2001 [12] Simultaneous, Analytical Sylvester Equation, Kronecker Product High Medium No
Lu et al. 2002 [10] Simultaneous, Analytical Quaternion, SVD Low Medium No
Gwak et al. 2003 [14] Simultaneous, Optimization Gradient/Newton Method Very Low High No
Heller et al. 2014 [15] Simultaneous, Optimization Quaternion, Dual Quaternion, LMI Very Low High No
Condurache et al. 2016 [13] Simultaneous, Analytical Dual Tensor, SVD or QR Decomposition Medium Medium No
Zhang et al. 2017 [17] Simultaneous, Optimization Dual Quaternion, Pseudo Inverse Medium Medium No
Zhao 2018 [16] Simultaneous, Optimization Dual Quaternion, Alternative Linear Programming Very Low High No
Nguyen et al. 2018 [19] Separated, Optimization Rotation Iteration Very Low High Yes
preserving highly accurate uncertainty descriptions and consuming extremely low computation time. These difficulties are rather practical since, in the hand-eye problem AX = XB, the rotation and translation parts are tightly coupled with high nonlinearity, which motivated Nguyen et al. to derive the first-order approximation of the error covariance propagation. It is also this nonlinearity that makes the numerical iterations much slower.

To overcome the current algorithmic shortcomings, in this paper we study a new 4-dimensional (4D) Procrustes analysis tool for the representation of homogeneous transformations. Understanding manifolds has become a popular way for modern interior analysis of various data flows [22]. The geometric descriptions of these manifolds have always been vital and are usually addressed with Procrustes analysis, which extracts the rigid, affine or non-rigid geometric mappings between datasets [23], [24]. Early research on Procrustes analysis has been conducted since the 1930s [25], [26], [27], and later generalized solutions have been applied to spacecraft attitude determination [28], [29], image registration [30], [31], laser scan matching using iterative closest points (ICP) [32], [33], etc. Motivated by these technological advances, this paper has the following contributions:

1) We show some analytical results on the 4D Procrustes analysis in Section III and apply them to the solution of the hand-eye calibration problem detailed in Section II.
2) Since all variables are directly propagated into the final results, the solving process is quite simple and computationally efficient.
3) Also, as the proposed solution is in the form of the spectral decomposition of a 4×4 matrix, the closed-form probabilistic information is given precisely and flexibly for the first time using some recent results in automatic control.

Finally, via simulations and real-world robotic experiments in Section IV, the proposed method is shown to offer better accuracy, computational load and uncertainty description. Detailed comparisons are also shown to reveal the sensitivity of the proposed method subject to input noise and different parameter values.

II. PROBLEM FORMULATION

We start this section by defining some important notations, mostly inherited from [34]. The n-dimensional real Euclidean space is represented by $\mathbb{R}^n$, which further generates the matrix space $\mathbb{R}^{m \times n}$ containing all real matrices with m rows and n columns. All n-dimensional rotation matrices belong to the special orthogonal group $SO(n) := \{R \in \mathbb{R}^{n \times n} \mid R^T R = I, \det(R) = 1\}$, where I denotes the identity matrix of proper size. The special Euclidean group is composed of a rotation matrix R and a translation vector t such that

$$SE(n) := \left\{ T = \begin{pmatrix} R & t \\ \mathbf{0} & 1 \end{pmatrix} \,\middle|\, R \in SO(n),\ t \in \mathbb{R}^n \right\} \quad (1)$$

with 0 denoting the zero matrix of adequate dimensions. The Euclidean norm of a given square matrix X is defined as $\|X\| = \sqrt{\mathrm{tr}(X^T X)}$, where tr denotes the matrix trace. The vectorization of an arbitrary matrix X is denoted vec(X), and ⊗ represents the Kronecker product between two matrices. For a given arbitrary matrix X, $X^\dagger$ is its Moore-Penrose generalized inverse. Any rotation R on SO(3) has its corresponding logarithm given by

$$\log(R) = \frac{\varphi}{2\sin\varphi}\left(R - R^T\right) \quad (2)$$

in which $1 + 2\cos\varphi = \mathrm{tr}(R)$. Given a 3D vector $x = (x_1, x_2, x_3)^T$, its associated skew-symmetric matrix is

$$[x]_\times = \begin{pmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{pmatrix} \quad (3)$$

satisfying $x \times y = [x]_\times y = -[y]_\times x$, where y is also an arbitrary 3D vector. The inverse map from the skew-symmetric matrix to the 3D vector is denoted as $[x]_\times^\wedge = x$.

Now let us describe the main problem in this paper. Given two measurement sets

$$\mathcal{A} = \{A_i \mid A_i \in SE(3),\ i = 1, 2, \cdots, N\}, \quad \mathcal{B} = \{B_i \mid B_i \in SE(3),\ i = 1, 2, \cdots, N\} \quad (4)$$

consider the hand-eye calibration least-squares problem:

$$\arg\min_{X \in SE(3)} J = \sum_{i=1}^{N} \|A_i X - X B_i\|^2 \quad (5)$$
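As a concrete illustration of this notation (a sketch of ours, not part of the paper), the skew-symmetric map (3), the SE(3) assembly (1) and the cost J of (5) can be written in a few lines of Python:

```python
import numpy as np

def skew(x):
    """Skew-symmetric matrix [x]_x of a 3-vector, eq. (3)."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def se3(R, t):
    """Assemble a homogeneous transformation T in SE(3), eq. (1)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def hand_eye_cost(As, Bs, X):
    """Least-squares cost J of eq. (5): sum_i ||A_i X - X B_i||^2 (Frobenius)."""
    return sum(np.linalg.norm(A @ X - X @ B, 'fro') ** 2
               for A, B in zip(As, Bs))
```

For noise-free poses with $B_i = X^{-1} A_i X$, the cost evaluates to zero up to round-off, which is a convenient sanity check on any solver for (5).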
where $A_i$ and $B_i$ are realized from poses in two successive measurements (also see Fig. 1 in [11])

$$A_i = T_{A_{i+1}} T_{A_i}^{-1}, \qquad B_i = T_{B_{i+1}}^{-1} T_{B_i} \quad (6)$$

with $T_{A_i}$ being the i-th camera pose with respect to the standard objects in the world frame, and

$$T_{B_i} = T_{B_i,3}\, T_{B_i,2}\, T_{B_i,1} \quad (7)$$

being the gripper poses with respect to the robotic base, in which $T_{B_i,1}, T_{B_i,2}, T_{B_i,3}$ are transformations between the joints of the robotic arm. The relationship between these homogeneous transformations is shown in Fig. 1. The task in the remainder of this paper is to give a closed-form solution of X considering rotation and translation simultaneously and, moreover, to derive the uncertainty description of X.

Let us write A, B as

$$A = \begin{pmatrix} R_A & t_A \\ \mathbf{0} & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} R_B & t_B \\ \mathbf{0} & 1 \end{pmatrix} \quad (8)$$

Then one easily obtains

$$R_A R_X = R_X R_B, \qquad R_A t_X + t_A = R_X t_B + t_X \quad (9)$$

The method by Park et al. [7] first computes $R_X$ from the first equation of (9) and then solves $t_X$ by inserting $R_X$ into the second sub-equation. Park's step for computing $R_X$ is tantamount to the following optimization

$$\arg\min_{R_X \in SO(3)} \sum_{i=1}^{N} \|R_X a_i - b_i\|^2 \quad (10)$$

with

$$a_i = [\log(R_{A_i})]^\wedge, \qquad b_i = [\log(R_{B_i})]^\wedge \quad (11)$$

Note that (10) is in fact a rigid 3D registration problem which can be solved instantly with singular value decomposition (SVD) or eigendecomposition [29], [32]. However, the solution of Park et al. does not take the translation into account for $R_X$, while the accuracy of $R_X$ is actually affected by $t_X$. Therefore, there are other methods that try to compute $R_X$ and $t_X$ simultaneously [11], [12]. While these methods fix the remaining problem of Park et al., they may not achieve the global minimum, as the optimization

$$\arg\min_{R_X \in SO(3),\ t_X \in \mathbb{R}^3} \sum_{i=1}^{N} \left( \|R_X a_i - b_i\|^2 + \|R_X t_{B_i} + t_X - R_{A_i} t_X - t_{A_i}\|^2 \right) \quad (12)$$

is not always convex. Hence, iterative numerical methods have been proposed to achieve globally optimal estimates of $R_X$ and $t_X$, including the solvers in [3], [14], [16], [17]. In the following section, we show a new analytical perspective on the hand-eye calibration problem AX = XB using the proposed 4D Procrustes analysis.

III. 4D PROCRUSTES ANALYSIS

The results in this section are proposed for the first time for solving specific 4D Procrustes analysis problems. The developed approach is therefore named the 4DPA method for simplicity in later sections.

A. Some New Analytical Results

Problem 1: Let $\{\mathcal{U}\} = \{u_i \in \mathbb{R}^4\}$, $\{\mathcal{V}\} = \{v_i \in \mathbb{R}^4\}$, where $i = 1, 2, \cdots, N$, $N \geq 3$, be two point sets in which the correspondences are well matched such that $u_i$ corresponds exactly to $v_i$. Find the 4D rotation R and translation vector t such that

$$\arg\min_{R \in SO(4),\ t \in \mathbb{R}^4} \sum_{i=1}^{N} \|R u_i + t - v_i\|^2 \quad (13)$$

Fig. 1. The relationship between various homogeneous transformations for gripper-camera hand-eye calibration.
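To make the separated pipeline concrete, the following minimal sketch (ours, not the authors' implementation) solves the rotation registration of (10)-(11) with an SVD and then recovers the translation from the linear part of (9). Note the registration direction used here: the first equation of (9) gives $\log(R_A) = R_X \log(R_B) R_X^T$, i.e. $a_i = R_X b_i$, so the sketch registers the $b_i$ onto the $a_i$.

```python
import numpy as np

def so3_log_vec(R):
    """Axis-angle vector [log(R)]^ from eq. (2)."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    phi = np.arccos(c)
    if phi < 1e-12:
        return np.zeros(3)
    W = phi / (2.0 * np.sin(phi)) * (R - R.T)  # skew-symmetric log(R)
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def separated_hand_eye(As, Bs):
    """Park-style separated solution: rotation first, then translation."""
    a = np.array([so3_log_vec(A[:3, :3]) for A in As])
    b = np.array([so3_log_vec(B[:3, :3]) for B in Bs])
    # Orthogonal Procrustes: min sum ||R_X b_i - a_i||^2, via SVD of sum a_i b_i^T.
    U, _, Vt = np.linalg.svd(a.T @ b)
    R_X = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    # Linear part of (9): (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked least squares.
    M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    rhs = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X = np.linalg.lstsq(M, rhs, rcond=None)[0]
    return R_X, t_X
```

With noise-free data and at least two motions about distinct rotation axes, the stacked system has full rank and both $R_X$ and $t_X$ are recovered exactly up to round-off.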
$$K = \begin{pmatrix}
H_{11}+H_{22}+H_{33}+H_{44} & H_{12}-H_{21}-H_{34}+H_{43} & H_{13}+H_{24}-H_{31}-H_{42} & H_{14}-H_{23}+H_{32}-H_{41} \\
H_{12}-H_{21}+H_{34}-H_{43} & H_{33}-H_{22}-H_{11}+H_{44} & H_{14}-H_{23}-H_{32}+H_{41} & -H_{13}-H_{24}-H_{31}-H_{42} \\
H_{13}-H_{24}-H_{31}+H_{42} & -H_{14}-H_{23}-H_{32}-H_{41} & H_{22}-H_{11}-H_{33}+H_{44} & H_{12}+H_{21}-H_{34}-H_{43} \\
H_{14}+H_{23}-H_{32}-H_{41} & H_{13}-H_{24}+H_{31}-H_{42} & -H_{12}-H_{21}-H_{34}-H_{43} & H_{22}-H_{11}+H_{33}-H_{44}
\end{pmatrix} \quad (19)$$
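As a quick machine check of (19) (an illustrative sketch of ours, not from the paper), K can be assembled entry by entry. Every entry is linear in H, and for H = I the construction reduces to diag(4, 0, 0, 0):

```python
import numpy as np

def build_K(H):
    """Assemble the 4x4 matrix K of eq. (19) from a 4x4 matrix H.
    h(i, j) gives 1-based access matching the paper's H_ij indices."""
    h = lambda i, j: H[i - 1, j - 1]
    return np.array([
        [h(1,1)+h(2,2)+h(3,3)+h(4,4),  h(1,2)-h(2,1)-h(3,4)+h(4,3),
         h(1,3)+h(2,4)-h(3,1)-h(4,2),  h(1,4)-h(2,3)+h(3,2)-h(4,1)],
        [h(1,2)-h(2,1)+h(3,4)-h(4,3),  h(3,3)-h(2,2)-h(1,1)+h(4,4),
         h(1,4)-h(2,3)-h(3,2)+h(4,1), -h(1,3)-h(2,4)-h(3,1)-h(4,2)],
        [h(1,3)-h(2,4)-h(3,1)+h(4,2), -h(1,4)-h(2,3)-h(3,2)-h(4,1),
         h(2,2)-h(1,1)-h(3,3)+h(4,4),  h(1,2)+h(2,1)-h(3,4)-h(4,3)],
        [h(1,4)+h(2,3)-h(3,2)-h(4,1),  h(1,3)-h(2,4)+h(3,1)-h(4,2),
        -h(1,2)-h(2,1)-h(3,4)-h(4,3),  h(2,2)-h(1,1)+h(3,3)-h(4,4)],
    ])
```

Linearity in H also means the construction commutes with sums, which is convenient when K is accumulated over measurement pairs.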
Then the interpolation can be solved using the solution to Problem 1 by letting $H = [wE_i + (1 - w)E_{i+1}]^T$. After the interpolation, a new interpolated set $\{\tilde{\mathcal{E}}\}$ can be established that corresponds well to $\{\mathcal{Z}\}$, and R can be solved via the solution to Problem 2.

B. Uncertainty Descriptions

In this sub-section, we use $\hat{x}$ to represent the true value of the noise-disturbed vector x. The expectation is denoted by $\langle \cdots \rangle$ [29], [45]. All errors in this sub-section are assumed to be zero-mean, as can be found in [29]. As all the solutions provided in the last sub-section are in spectral-decomposition form, the errors of the derived quaternions can be given by perturbation theory [46]. In a recent error analysis for attitude determination from vector observations by Chang et al. [47], the first-order error of the estimated deterministic quaternion q is

$$\delta q = \left[ q^T \otimes (\lambda_{\max} I - M)^\dagger \right] \delta m \quad (39)$$

provided that $m = \mathrm{vec}(M)$, with $\lambda_{\max}$ being the maximum eigenvalue of the real symmetric matrix M:

$$M q = \lambda_{\max}\, q \quad (40)$$

The above quaternion error is presented under the assumption that δq is multiplicative such that

$$\hat{q} = \delta q \odot q \quad (41)$$

where ⊙ denotes the quaternion product. The following contents discuss the covariance expressions for this type of quaternion error.

Using (39), we have the following quaternion error covariance

$$\Sigma_{\delta q} = \langle \delta q\, \delta q^T \rangle = \left[ q^T \otimes (\lambda_{\max} I - M)^\dagger \right] \langle \delta m\, \delta m^T \rangle \left[ q^T \otimes (\lambda_{\max} I - M)^\dagger \right]^T = \left[ q^T \otimes (\lambda_{\max} I - M)^\dagger \right] \Sigma_{\delta m} \left[ q^T \otimes (\lambda_{\max} I - M)^\dagger \right]^T \quad (42)$$

in which

$$\Sigma_{\delta m} = \frac{\partial m}{\partial b} \Sigma_b \left( \frac{\partial m}{\partial b} \right)^T \quad (43)$$

where b denotes all input variables contributing to the final form of M. Let us take the solution to Problem 2 as an example. For $q_L$, we have

$$F_{11}\, q_L = \lambda_{F_{11},\min}\, q_L \quad (44)$$

which yields

$$M = -F_{11}, \quad \lambda_{\max} = -\lambda_{F_{11},\min}, \quad m = -\mathrm{vec}(F_{11}), \quad \frac{\partial m}{\partial b} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{4} \sum_{k=1}^{4} \frac{\partial\, \mathrm{vec}\!\left( J_{jk,i} J_{jk,i}^T \right)}{\partial b}, \quad b = \left( \mathrm{vec}(E_i)^T, \mathrm{vec}(Z_i)^T \right)^T \quad (45)$$

where we assume that b for every pair $\{E_i, Z_i\}$ has the same probabilistic distribution. The computation of $\partial (J_{jk,i} J_{jk,i}^T)/\partial b$ can be intuitively conducted using the analytical forms of the matrices in Appendix A, and this part of the work is left to the audience of this paper. The covariance of $q_R$ can therefore be computed by replacing $F_{11}$ with $F_{22}$ in (45). The cross-covariance between $q_L$ and $q_R$ can also be given as follows

$$\Sigma_{\delta q_L \delta q_R} = \langle \delta q_L\, \delta q_R^T \rangle = \left\langle \left[ q_L^T \otimes (F_{11} - \lambda_{F_{11},\min} I)^\dagger \right] \delta m_L\, \delta m_R^T \left[ q_R^T \otimes (F_{22} - \lambda_{F_{22},\min} I)^\dagger \right]^T \right\rangle = \left[ q_L^T \otimes (F_{11} - \lambda_{F_{11},\min} I)^\dagger \right] \Sigma_{\delta m_L \delta m_R} \left[ q_R^T \otimes (F_{22} - \lambda_{F_{22},\min} I)^\dagger \right]^T \quad (46)$$

in which $m_L = \mathrm{vec}(F_{11})$, $m_R = \mathrm{vec}(F_{22})$, and $\Sigma_{\delta m_L \delta m_R}$ is given by

$$\Sigma_{\delta m_L \delta m_R} = \frac{\partial m_L}{\partial b} \Sigma_b \left( \frac{\partial m_R}{\partial b} \right)^T \quad (47)$$

Eventually, the covariance of the octonion σ will be

$$\Sigma_\sigma = \begin{pmatrix} \Sigma_{\delta q_L} & \Sigma_{\delta q_L \delta q_R} \\ \Sigma_{\delta q_R \delta q_L} & \Sigma_{\delta q_R} \end{pmatrix} \quad (48)$$

C. Solving AX = XB from an SO(4) Perspective

The SO(4) parameterization of SE(3) is presented by Thomas in [37]:

$$T = \begin{pmatrix} R & t \\ \mathbf{0} & 1 \end{pmatrix} \xleftrightarrow[F_1^{-1}]{F_1} R_{T,SO(4)} = \begin{pmatrix} R & \varepsilon t \\ \varepsilon t^T R & 1 \end{pmatrix} \quad (49)$$

in which ε denotes the dual unit satisfying ε² = 0. The right part of (49) is on SO(4), and a practical method for approximating the corresponding homogeneous transformation is to choose a very tiny number ε = 1/d, where d ≫ 1 is a positive scaling factor, to generate a real-number approximation of $R_{T,SO(4)}$:

$$R_{T,SO(4)} \approx \begin{pmatrix} R & \frac{1}{d} t \\ \frac{1}{d} t^T R & 1 \end{pmatrix} \quad (50)$$

It is also noted that the mapping in (49) is not unique. For instance, the following mapping also satisfies $R_{T,SO(4)}^T R_{T,SO(4)} = R_{T,SO(4)} R_{T,SO(4)}^T = I$ when d ≫ 1:

$$T = \begin{pmatrix} R & t \\ \mathbf{0} & 1 \end{pmatrix} \xleftrightarrow[F_2^{-1}]{F_2} R_{T,SO(4)} = \begin{pmatrix} R & \varepsilon t \\ -\varepsilon t^T R & 1 \end{pmatrix} \quad (51)$$

The convenience of such a mapping from SE(3) to SO(4) is that some nonlinear equations on SE(3) can be turned into linear ones on SO(4). Choosing a scaling factor d makes an approximation of the homogeneous transformation on SO(4). Then the conventional hand-eye calibration problem AX = XB can be shifted from

$$\arg\min_{X \in SE(3)} J = \sum_{i=1}^{N} \|A_i X - X B_i\|^2$$

to

$$\arg\min_{R_{X,SO(4)} \in SO(4)} J = \sum_{i=1}^{N} \left\| R_{A_i,SO(4)} R_{X,SO(4)} - R_{X,SO(4)} R_{B_i,SO(4)} \right\|^2 \quad (52)$$
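A minimal numerical sketch of the real-number approximation in (50)-(51) follows (the helper name and default d are ours). With sign = −1 the F₂-style image is orthogonal up to O(1/d²), and R and t are recovered directly from its blocks:

```python
import numpy as np

def se3_to_so4(T, d=1e4, sign=-1.0):
    """Approximate SO(4) image of T in SE(3), eqs. (50)-(51), with
    epsilon = 1/d; sign = -1 follows the F2 mapping (51),
    sign = +1 the F1 mapping (49)."""
    R, t = T[:3, :3], T[:3, 3]
    S = np.eye(4)
    S[:3, :3] = R
    S[:3, 3] = t / d                # epsilon * t
    S[3, :3] = sign * (t @ R) / d   # +/- epsilon * t^T R
    return S
```

Since the deviation of $S^T S$ from the identity is of order $\|t\|^2/d^2$, choosing d large relative to the translation magnitudes (as done with d = 10⁴ in Section IV) keeps the image numerically close to SO(4).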
IV. EXPERIMENTAL RESULTS

A. Experiments on a Robotic Arm

The first category of experiments is conducted for the gripper-camera hand-eye calibration depicted in Fig. 1. The dataset is generated using a UR10 robotic arm and an Intel Realsense D435i camera attached firmly to the end-effector (gripper) of the robotic arm (see Fig. 2).

Fig. 2. The gripper-camera hand-eye calibration experiment.

The UR10 robotic arm gives accurate outputs of the homogeneous transformations of its various joints relative to its base. The D435i camera contains color, depth and fisheye sub-cameras along with an inertial measurement unit. In this sub-section, the transformation $T_{B_i}$ of the end-effector of the robotic arm is computed using the transformations from all joints via (7). We only pick the color images from the D435i to obtain the transformation of the camera with respect to the 12 × 9 chessboard. Note that in Fig. 1, the standard objects can be arbitrary ones with certain pre-known models, e.g. a point-cloud model or a computer-aided-design (CAD) model. The D435i is factory-calibrated for its intrinsic parameters, and we construct the following projection model for the utilized camera

$$l_{\mathrm{cam},j} = \begin{pmatrix} l_{\mathrm{cam},1,j} \\ l_{\mathrm{cam},2,j} \end{pmatrix}, \qquad \begin{pmatrix} l_{\mathrm{cam},1,j} \\ l_{\mathrm{cam},2,j} \\ 1 \end{pmatrix} = O \begin{pmatrix} L_{\mathrm{cam},1,j} / L_{\mathrm{cam},3,j} \\ L_{\mathrm{cam},2,j} / L_{\mathrm{cam},3,j} \\ 1 \end{pmatrix} \quad (60)$$

where $l_{\mathrm{cam},j}$ denotes the j-th measured feature point (corner) of the chessboard in the camera imaging frame; O is the matrix accounting for the intrinsic parameters of the camera; $L_{\mathrm{cam},j} = (L_{\mathrm{cam},1,j}, L_{\mathrm{cam},2,j}, L_{\mathrm{cam},3,j})^T$ is the projected j-th feature point in the camera ego-motion frame. To obtain the i-th pose between the camera and the chessboard, we can relate the standard point coordinates of the chessboard $L_{\mathrm{chess},j}$, $j = 1, 2, \cdots$, in the world frame from a certain model with those in the camera frame by

$$L_{\mathrm{chess},j} = T_{A_i} L_{\mathrm{cam},j} \quad (61)$$

By minimizing the projection errors from (61), $T_{A_i}$ is obtained with nonlinear optimization techniques, e.g. the Perspective-n-Point algorithm [48], [49]. In our experiment, the scale-invariant feature transform (SIFT) is invoked for the extraction of the corner points of the chessboard [50]. We use several datasets captured from our platform to produce comparisons with representatives including the classical methods of Tsai et al. [5], Chou et al. [9], Park et al. [7], Daniilidis [11], Andreff et al. [12] and the recent ones of Heller et al. [15], Zhang et al. [17], Zhao [16]. The error of the hand-eye calibration is defined as follows

$$\mathrm{Error} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \|A_i X - X B_i\|^2} \quad (62)$$

where $A_i$ and $B_i$ are detailed in (6). All the timing statistics, computation and visualization are carried out on a MacBook Pro 2017 with an i7 3.5 GHz CPU and MATLAB r2018a. All the algorithms are implemented using the least coding resources. We employ YALMIP to solve the
TABLE II
COMPARISONS WITH CLASSICAL METHODS FOR GRIPPER-CAMERA HAND-EYE CALIBRATION.
Cases Tsai 1989 [5] Chou 1991 [9] Park 1994 [7] Daniilidis 1999 [11] Andreff 2001 [12] Proposed 4DPA 2019
Error Time (s) Error Time (s) Error Time (s) Error Time (s) Error Time (s) Error Time (s)
1 (224) 1.2046 × 10−02 4.0506 × 10−02 6.8255 × 10−03 1.0190 × 10−02 6.8254 × 10−03 3.5959 × 10−02 6.7082 × 10−03 3.8308 × 10−02 7.2650 × 10−03 5.7292 × 10−03 6.7857 × 10−03 4.4819 × 10−03
2 (253) 1.0243 × 10−02 4.2952 × 10−02 5.8650 × 10−03 1.1760 × 10−02 5.8650 × 10−03 3.9486 × 10−02 5.7704 × 10−03 4.1492 × 10−02 5.8754 × 10−03 6.8972 × 10−03 5.8290 × 10−03 4.8991 × 10−03
3 (298) 8.0653 × 10−03 4.9823 × 10−02 5.0514 × 10−03 1.3105 × 10−02 5.0517 × 10−03 4.6559 × 10−02 4.9084 × 10−03 4.8626 × 10−02 5.4648 × 10−03 7.2326 × 10−03 4.9803 × 10−03 5.5470 × 10−03
4 (342) 6.8136 × 10−03 5.3854 × 10−02 4.7192 × 10−03 1.4585 × 10−02 4.7254 × 10−03 5.6970 × 10−02 4.0379 × 10−03 5.3721 × 10−02 4.8374 × 10−03 6.7880 × 10−03 4.0207 × 10−03 5.5404 × 10−03
5 (392) 5.5242 × 10−03 6.3039 × 10−02 3.4047 × 10−03 1.7253 × 10−02 3.4012 × 10−03 6.0697 × 10−02 3.3850 × 10−03 6.4505 × 10−02 4.0410 × 10−03 8.9159 × 10−03 3.1678 × 10−03 7.3957 × 10−03
6 (433) 4.8072 × 10−03 6.7117 × 10−02 2.9723 × 10−03 1.8354 × 10−02 2.9694 × 10−03 6.9967 × 10−02 2.8957 × 10−03 6.5703 × 10−02 3.6410 × 10−03 9.8837 × 10−03 2.7890 × 10−03 7.8954 × 10−03
7 (470) 4.3853 × 10−03 7.6869 × 10−02 2.7302 × 10−03 1.9827 × 10−02 2.7270 × 10−03 1.9827 × 10−02 2.6640 × 10−03 7.5553 × 10−02 3.3262 × 10−03 1.0373 × 10−02 2.5397 × 10−03 8.1632 × 10−03
8 (500) 4.0938 × 10−03 8.5545 × 10−02 2.5855 × 10−03 2.1506 × 10−02 2.5807 × 10−03 7.5722 × 10−02 2.4610 × 10−03 7.7225 × 10−02 3.0083 × 10−03 1.2008 × 10−02 2.3137 × 10−03 8.5382 × 10−03
TABLE III
COMPARISONS WITH RECENT METHODS FOR GRIPPER-CAMERA HAND-EYE CALIBRATION.
Cases Heller 2014 [15] Zhang 2017 [17] Zhao 2018 [16] Proposed 4DPA 2019
Error Time (s) Error Time (s) Error Time (s) Error Time (s)
1 (224) 7.9803 × 10−03 1.4586 × 10−02 7.6125 × 10−03 8.8201 × 10−03 7.1485 × 10−03 8.0861 × 10−02 6.7857 × 10−03 4.4819 × 10−03
2 (253) 6.7894 × 10−03 1.5661 × 10−02 6.7908 × 10−03 9.7096 × 10−03 6.1710 × 10−03 1.0787 × 10−01 5.8290 × 10−03 4.8991 × 10−03
3 (298) 5.6508 × 10−03 1.8173 × 10−02 6.0203 × 10−03 1.2561 × 10−02 5.3021 × 10−03 1.4893 × 10−01 4.9803 × 10−03 5.5470 × 10−03
4 (342) 4.7192 × 10−03 2.2755 × 10−02 5.2743 × 10−03 1.5332 × 10−02 4.4003 × 10−03 2.2291 × 10−01 4.0207 × 10−03 5.5404 × 10−03
5 (392) 3.8706 × 10−03 2.5399 × 10−02 4.3269 × 10−03 1.8800 × 10−02 3.6041 × 10−03 3.0347 × 10−01 3.1678 × 10−03 7.3957 × 10−03
6 (433) 3.3504 × 10−03 2.6373 × 10−02 3.9732 × 10−03 2.1613 × 10−02 3.1658 × 10−03 3.8768 × 10−01 2.7890 × 10−03 7.8954 × 10−03
7 (470) 3.0547 × 10−03 2.7973 × 10−02 3.3633 × 10−03 2.5498 × 10−02 2.8912 × 10−03 5.0153 × 10−01 2.5397 × 10−03 8.1632 × 10−03
8 (500) 2.8862 × 10−03 2.9879 × 10−02 3.0215 × 10−03 3.2041 × 10−02 2.6871 × 10−03 5.8727 × 10−01 2.3137 × 10−03 8.5382 × 10−03
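As a sketch of the measurement model and metric used above (ours; the intrinsic values in the demo are assumed, not the D435i's factory calibration), the projection (60) and the error metric (62) can be written as:

```python
import numpy as np

def project_point(O, L_cam):
    """Pinhole projection of eq. (60): normalize by the depth L_cam[2],
    then apply the intrinsic matrix O; returns the 2D image point."""
    l = O @ np.array([L_cam[0] / L_cam[2], L_cam[1] / L_cam[2], 1.0])
    return l[:2]

def calibration_error(As, Bs, X):
    """RMS hand-eye residual of eq. (62)."""
    r = [np.linalg.norm(A @ X - X @ B, 'fro') ** 2 for A, B in zip(As, Bs)]
    return np.sqrt(np.mean(r))
```

With O = diag-style intrinsics (fx, fy on the diagonal, principal point in the last column), a point on the optical axis projects to the principal point, and the error metric vanishes for an exact X.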
LMI dqhec optimization in the method of Heller et al. [15]. For Zhao's method [16], we invoke the fmincon function in MATLAB for the numerical solution.

The robotic arm is rigidly installed on the testing table and is operated smoothly and periodically to capture images of the chessboard from various directions. Using the mechanisms described above, we form the series $\{\mathcal{A}\}, \{\mathcal{B}\}$. We select d = 10^4 as the scaling factor for evaluation in this sub-section, as the translational components are all within [−2, 2] m, and in such a range the camera has an empirical positioning accuracy of about 0.05 ∼ 0.2 m. We choose the F₂ mapping in (51) for the conversion from SE(3) to SO(4), since in real applications it obtains much more accurate hand-eye calibration results than the F₁ mapping (49) presented in [37]. The scalar thresholds for the other numerical methods are all set to 1 × 10^−15 to guarantee accuracy. We conduct 8 groups of experiments using the experimental platform. The errors and computation times are processed 100 times for averaged performance evaluation and are provided in Tables II and III. The least errors are marked in green and the best computation times are tagged in blue. The statistics of the proposed method are marked in bold in the tables for emphasis. The digits after the case serial numbers indicate the sample counts for the studied case.

One can see that with growing sample counts, all the methods obtain more accurate estimates of X, while with larger quantities of measurements, the processing speeds of the algorithms become slower. However, among all methods, whether analytical or iterative, the proposed SO(4) method almost always gives the most accurate results within the least computation time. The reason is that the proposed 4DPA computes the rotation and translation in X simultaneously and optimizes the loss function J of the hand-eye calibration problem well. The proposed algorithm obtains better results than almost all other analytical and numerical ones, except for cases 1 ∼ 3 using the method of Daniilidis. This indicates that with few samples, the accuracy of the proposed 4DPA is lower than that of the method of Daniilidis, but still close. However, few samples imply relatively low confidence in calibration accuracy, and for cases with higher quantities of measurements the proposed 4DPA method is always the best. This shows that the designed 4D Procrustes analysis for the mapping from SE(3) to SO(4) is more efficient than other tools, e.g. the mappings based on the dual quaternion [11] and the Kronecker product [12]. Furthermore, our method uses the eigendecomposition as the solution, which is regarded as a robust tool for engineering problems. Our method can reduce the estimation error to about 3.06% ∼ 94.01% of the original statistics compared with classical algorithms, and 0.39% ∼ 86.11% compared with recent numerical ones. The proposed method is also free of pre-processing techniques like the quaternion conversion in other algorithms. All the matrix operations are simple and intuitive, which makes the computation very fast. Our method can lower the computation time to about 9.98% ∼ 70.68% of the original statistics compared with classical analytical algorithms, and 1.45% ∼ 28.58% compared with recent numerical ones. A synchronized sequence of camera-chessboard poses and [...] (see the Acknowledgement). Every researcher can freely download this dataset and evaluate the accuracy and computational efficiency. The advantages of the developed method on both precision and computation time will lead to very effective implementations of hand-eye calibration for industrial applications in the future.

B. The Error Sensitivity to the Noises and Different Parameters of d

In this sub-section, we study the sensitivity of the proposed method subject to input measurement noises. We define the noise-corrupted models of the rotations as

$$R_{A,i} = \hat{R}_{A,i} + \mathrm{Error}_{R_X} R_1, \qquad R_{B,i} = \hat{R}_{B,i} + \mathrm{Error}_{R_X} R_2 \quad (63)$$

where $R_1, R_2$ are random matrices whose columns are subject to a Gaussian distribution with covariance I, and $\mathrm{Error}_{R_X}$ is a scalar accounting for the rotation error. Likewise, the noise models of the translations can be given by

$$t_{A,i} = \hat{t}_{A,i} + \mathrm{Error}_{t_X} T_1, \qquad t_{B,i} = \hat{t}_{B,i} + \mathrm{Error}_{t_X} T_2 \quad (64)$$

with noise scale $\mathrm{Error}_{t_X}$ and $T_1, T_2$ noise vectors subject to a normal distribution, also with covariance I. The perturbed rotations are orthonormalized after the addition of the noises. Here, the Gaussian distribution is selected following the tradition in [19], since this distributional assumption covers most cases that we may encounter in real-world applications.

We take all the compared representatives from the last sub-section into this part, adding three more variants of the proposed method with different d of d = 10^3, d = 10^5 and d = 10^6. Then we can both see the comparisons with the representatives and observe the influence of the positive scaling factor d. Several simulations are conducted where we generate datasets of A, B with N = 1000, and the obtained results are averaged over 10 runs. We independently evaluate the effect of $\mathrm{Error}_{R_X}$ and $\mathrm{Error}_{t_X}$ imposed on Error. The relationship between $\mathrm{Error}_{R_X}$ and Error is depicted in Fig. 3, while that between $\mathrm{Error}_{t_X}$ and Error is presented in Fig. 4. These relationships are demonstrated in the form of log plots. We can see that with increasing errors in rotation and translation, the errors in the computed X all rise to a large extent. One can see in the magnified plot of Fig. 3 that the optimization methods achieve the best accuracy, but the proposed method can also obtain comparable estimates. It is shown that with various values of d, the performances of the proposed method differ quite a lot. Fig. 4 indicates that with d = 10^3, the evaluated errors on translation are the worst among all compared ones. However, with larger d, this situation is significantly improved, as the magnified view in Fig. 4 shows: when d = 10^5 and d = 10^6, the errors of the proposed method are quite close to the least ones. As the last sub-section showed that the proposed method has the fastest execution speed, it follows that for the studied cases the developed method can be regarded as a balancing one between the
end-effector poses is made open-source (see the links in the accuracy and computation speed.
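The rotation-perturbation scheme described above (additive Gaussian noise on the rotation entries, followed by re-orthonormalization) can be sketched as follows; this is a minimal NumPy illustration, not the authors' implementation, and the noise scale 0.01 is an arbitrary example value:

```python
import numpy as np

def random_rotation(rng):
    # Uniform random rotation via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))          # make the decomposition unique
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1                 # force det = +1 so q is in SO(3)
    return q

def orthonormalize(m):
    # Project a noisy matrix back onto SO(3) via SVD (nearest rotation).
    u, _, vt = np.linalg.svd(m)
    d = np.sign(np.linalg.det(u @ vt))
    return u @ np.diag([1.0, 1.0, d]) @ vt

rng = np.random.default_rng(0)
R = random_rotation(rng)
# Add elementwise Gaussian noise, then re-orthonormalize as in the text.
R_noisy = orthonormalize(R + 0.01 * rng.standard_normal((3, 3)))
```

The SVD projection keeps the perturbed matrix a valid rotation, so the simulated measurements remain on SE(3).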
$$
\delta R_{X,SO(4)}\,\delta R_{X,SO(4)}^{T}
=\begin{pmatrix}
\delta R_X & \delta t_X/d\\
-\left(\delta t_X^{T} R_X + t_X^{T}\delta R_X\right)/d & 0
\end{pmatrix}
\begin{pmatrix}
\delta R_X^{T} & -\left(R_X^{T}\delta t_X + \delta R_X^{T} t_X\right)/d\\
\delta t_X^{T}/d & 0
\end{pmatrix}
$$
$$
=\begin{pmatrix}
\delta R_X \delta R_X^{T} + \frac{1}{d^2}\,\delta t_X \delta t_X^{T} &
-\frac{1}{d}\left(\delta R_X R_X^{T}\delta t_X + \delta R_X \delta R_X^{T} t_X\right)\\
-\frac{1}{d}\left(\delta t_X^{T} R_X \delta R_X^{T} + t_X^{T}\delta R_X \delta R_X^{T}\right) &
\frac{1}{d^2}\left(t_X^{T} R_X R_X^{T}\delta t_X + t_X^{T} R_X \delta R_X^{T} t_X + t_X^{T}\delta R_X R_X^{T}\delta t_X + t_X^{T}\delta R_X \delta R_X^{T} t_X\right)
\end{pmatrix}
\tag{65}
$$

$$
\Sigma_{R_X,SO(4)} = \left\langle \delta R_{X,SO(4)}\,\delta R_{X,SO(4)}^{T} \right\rangle
=\begin{pmatrix}
\Sigma_{R_X} + \frac{1}{d^2}\Sigma_{t_X} &
-\frac{1}{d}\left(\langle \delta t_X \times \delta\theta_X \rangle + \Sigma_{R_X} t_X\right)\\
-\frac{1}{d}\left(\langle \delta t_X \times \delta\theta_X \rangle + \Sigma_{R_X} t_X\right)^{T} &
\frac{1}{d^2}\left(t_X^{T}\langle \delta t_X \times \delta\theta_X \rangle + t_X^{T}\Sigma_{R_X} t_X\right)
\end{pmatrix}
\tag{66}
$$
$$
\approx\begin{pmatrix}
\Sigma_{R_X} + \frac{1}{d^2}\Sigma_{t_X} & -\frac{1}{d}\Sigma_{R_X} t_X\\
-\frac{1}{d}\,t_X^{T}\Sigma_{R_X} & \frac{1}{d^2}\,t_X^{T}\Sigma_{R_X} t_X
\end{pmatrix}
$$

and $\delta R_{X,SO(4)}\,\delta R_{X,SO(4)}^{T}$ is simplified from (65) to (66) according to [51], by virtue of

$$
\dot R_X = -[\omega]_\times R_X,\qquad
\delta R_X = -[\delta\theta_X]_\times R_X,\qquad
\delta R_X R_X^{T} = -[\delta\theta_X]_\times.
\tag{68}
$$
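The simplification from (65) to (66) rests on the first-order relation $\delta R_X R_X^{T} = -[\delta\theta_X]_\times$, which can be checked numerically; the sketch below assumes the standard SO(3) exponential map (Rodrigues formula) and is only an illustration of the identity:

```python
import numpy as np

def skew(v):
    # [v]x: the skew-symmetric cross-product matrix of a 3-vector.
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def expm_so3(w):
    # Rodrigues formula for the SO(3) exponential map.
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3)
    k = skew(w / t)
    return np.eye(3) + np.sin(t) * k + (1.0 - np.cos(t)) * (k @ k)

rng = np.random.default_rng(1)
R = expm_so3(rng.standard_normal(3))          # nominal rotation
dtheta = 1e-6 * rng.standard_normal(3)        # small angular perturbation
dR = expm_so3(-dtheta) @ R - R                # perturbed minus nominal
# First-order identity: dR @ R.T ≈ -[dtheta]x (error is second order)
```

With a perturbation of order 1e-6, the residual of the identity is of order 1e-12, i.e. second order, as the linearization requires.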
Fig. 5. The 2D covariance projections of the solutions to hand-eye calibration using the proposed method and that of Nguyen et al. The dashed grey lines
indicate the mean bounds of the simulated statistics. The green dashed lines are from the solution of Nguyen et al. while the solid black ones are from our
proposed algorithm. The discrete points in blue reflect the simulated samples.
./examples/test_axxb_covariance.py). The input covariances are

$$
\Sigma_{R_A} = 10^{-10} I, \qquad \Sigma_{t_A} = 10^{-10} I
$$
$$
\Sigma_{R_B} = \begin{pmatrix}
4.15625 & -2.88693 & -0.60653\\
-2.88693 & 32.0952 & -0.14482\\
-0.60653 & -0.14482 & 1.43937
\end{pmatrix}\times 10^{-5}
\tag{70}
$$
$$
\Sigma_{t_B} = \begin{pmatrix}
19.52937 & 2.12627 & -1.06675\\
2.12627 & 4.44314426 & 0.38679\\
-1.06675 & 0.38679 & 2.13070
\end{pmatrix}\times 10^{-5}
$$

The simulation is carried out 10000 times, generating the randomly perturbed {A}, {B}, and in each set there are 60 measurements. Σ_b is computed according to the simulated statistics for {A}, {B}. The statistical covariance bounds of the estimated R_X and t_X are then logged. Using the method by Nguyen et al. and our proposed method, the 2D covariance projections are plotted in Fig. 5. One can see that both methods estimate the covariance correctly, while our method achieves very slightly smaller covariance bounds. This reflects that our proposed method has reached the accuracy of Nguyen et al. for uncertainty description. What needs to be pointed out is that the proposed method is fully analytical, rather than iterative like the method of Nguyen et al. As analytical methods are always much faster than iterative ones, this simulation indirectly reflects that the proposed method can both correctly estimate the transformation and determine the precise covariance information within a short computational period, which is beneficial to applications with high demands on real-time systems with rigorous scheduling logics and timing.

The platform contains a high-end 2D laser scanner, a Hokuyo UST-10LX, spun by a powerful Dynamixel MX-28T servo controlled through the serial ports by the onboard Nvidia TX1 computer with a graphics processing unit (GPU). It also consists of an Intel RealSense T265 fisheye camera with a resolution of 848x800 and a frame rate of 30 fps, along with an onboard factory-calibrated inertial measurement unit (IMU). The spin mechanism and the feedback of the internal encoder of the servo guarantee the seamless stitching of successive laser scans, producing highly accurate 3D scene reconstructions.
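The Monte Carlo validation above can be sketched as follows. This is a hedged illustration of the bookkeeping only: the per-run estimation errors are stood in by synthetic Gaussian draws, whereas in the actual experiment they would come from solving AX = XB on each perturbed measurement set:

```python
import numpy as np

def sample_covariance(samples):
    # Unbiased sample covariance of stacked error vectors, shape (runs, dim).
    X = np.asarray(samples)
    dev = X - X.mean(axis=0)
    return dev.T @ dev / (len(X) - 1)

rng = np.random.default_rng(2)
# Stand-in for 10000 per-run 6-D error vectors (3 rotation + 3 translation),
# drawn here from a known covariance purely for illustration.
true_cov = 1e-5 * np.eye(6)
runs = rng.multivariate_normal(np.zeros(6), true_cov, size=10000)
cov = sample_covariance(runs)
```

Projecting 2×2 sub-blocks of `cov` onto coordinate pairs gives the kind of 2D covariance ellipses plotted in Fig. 5.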
Fig. 6. The reconstructed scenes using the presented 3D laser scanner. They are later used for pose estimation of the laser scanner frame with ICP.
The single sensor or the combination of laser scanner and camera will be of great importance for scene measurement and reconstruction [52], [53], [54]. However, due to inevitable installation misalignments, the extrinsic calibration between the laser scanner and camera should be performed for reliable measurement accuracy. Several algorithms have recently been proposed to deal with the calibration issues of these sensors [55], [56], [57]. These methods in fact require some standard objects like large chessboards to obtain satisfactory results. We here extend our method to solve this extrinsic calibration without the need for any other standard reference systems. The sensor signal flowchart can be seen in Fig. 8.

as indicated in Fig. 6. As the camera and laser-scanner poses have output frequencies of 200 Hz and 1 Hz respectively, the synchronization between them is conducted by the continuous linear quaternion interpolation that we developed recently [43]. Then, using the properly synced T_{A_i} and T_{B_i}, we are able to form the proposed hand-eye calibration principle with the entry-point equation in (6). With the procedures shown in Algorithm 1, where d is set to d = 10^5 empirically, the extrinsic parameters, i.e. the rotation and translation between the laser scanner and fisheye camera, are calculated.
Fig. 8. The signal flowchart in the extrinsic calibration between the 3D laser
scanner and fisheye camera using the proposed algorithm.
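The synchronization step resamples the slow pose stream onto the fast one. The sketch below is not the exact interpolation of [43]; it is a minimal normalized linear quaternion interpolation (LERP with renormalization and sign alignment) that conveys the idea, with illustrative example quaternions and timestamps:

```python
import numpy as np

def quat_lerp(q0, q1, alpha):
    # Normalized linear quaternion interpolation. The sign flip keeps the
    # blend on the shorter arc, since q and -q encode the same rotation.
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    if np.dot(q0, q1) < 0.0:
        q1 = -q1
    q = (1.0 - alpha) * q0 + alpha * q1
    return q / np.linalg.norm(q)

# Resample a 1 Hz pose stream onto a 200 Hz camera timestamp (example values):
t0, t1, t_cam = 3.0, 4.0, 3.145
alpha = (t_cam - t0) / (t1 - t0)
q = quat_lerp([1, 0, 0, 0], [0.7071, 0.7071, 0, 0], alpha)
```

Renormalizing after the linear blend keeps the interpolated attitude a unit quaternion, so the synced T_{A_i}, T_{B_i} stay valid SE(3) poses.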
TABLE IV
TRAJECTORY ERRORS BEFORE AND AFTER THE EXTRINSIC CALIBRATION USING THE PROPOSED METHOD
Experiment Before: X(m) After: X(m) Before: Y(m) After: Y(m) Before: Z(m) After: Z(m)
1 1.997 0.725 1.803 0.476 0.828 0.379
2 2.278 1.080 1.722 0.997 1.322 0.763
3 1.463 0.605 1.825 0.583 1.199 0.691
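The improvement in Table IV can be read as a per-axis percentage reduction; for instance, for experiment 1 (values taken directly from the table):

```python
# Per-axis trajectory errors from Table IV, experiment 1 (metres).
before = {"X": 1.997, "Y": 1.803, "Z": 0.828}
after = {"X": 0.725, "Y": 0.476, "Z": 0.379}

# Percentage reduction of the trajectory error after calibration.
reduction = {k: 100.0 * (1.0 - after[k] / before[k]) for k in before}
# The X-axis error drops by roughly 64%, Y by roughly 74%, Z by roughly 54%.
```

All three experiments show reductions of this magnitude, confirming that the estimated extrinsics substantially improve the fused trajectory.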
V. CONCLUSION

This paper studies the classical hand-eye calibration problem AX = XB by exploiting a new generalized method on SO(4). The investigated 4D Procrustes analysis provides very useful closed-form results for hand-eye calibration. Within such a framework, the uncertainty descriptions of the obtained transformations can be easily computed. It is verified that the proposed method achieves better accuracy and much less computation time than representative methods on real-world datasets. The proposed uncertainty descriptions for the 4 × 4 matrices are also universal to other similar problems like spacecraft attitude determination [29] and 3D registration [32]. We also notice that the Procrustes analysis on SO(n) may be of benefit for solving the generalized hand-eye problem AX = XB in which the matrices belong to SE(n); this is going to be discussed in our future works.

APPENDIX A
SOME CLOSED-FORM RESULTS

A. Analytical Forms of Some Fundamental Matrices

Taking $c_1 = P_1(\sigma)\sigma$ as an example, one can explicitly write out

$$
Q_4(\sigma) = \frac{1}{\sqrt{2}}
\begin{pmatrix}
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\
r & -s & -p & -q & -c & -d & a & -b\\
-q & p & -s & -r & b & -a & -d & -c\\
p & q & r & -s & a & b & c & -d
\end{pmatrix}
$$

The results of $J_{jk,i}$ can then be computed using symbolic computation tools, e.g. MATLAB and Mathematica:

$$
J_{11,i} = \begin{pmatrix}
e_{11}-z_{11} & e_{12}+z_{21} & e_{13}+z_{31} & e_{14}+z_{41}\\
e_{12}+z_{21} & z_{11}-e_{11} & e_{14}-z_{41} & z_{31}-e_{13}\\
e_{13}+z_{31} & z_{41}-e_{14} & z_{11}-e_{11} & e_{12}-z_{21}\\
e_{14}+z_{41} & e_{13}-z_{31} & z_{21}-e_{12} & z_{11}-e_{11}
\end{pmatrix}
$$
$$
J_{12,i} = \begin{pmatrix}
e_{21}-z_{21} & e_{22}-z_{11} & e_{23}+z_{41} & e_{24}-z_{31}\\
e_{22}-z_{11} & z_{21}-e_{21} & e_{24}+z_{31} & z_{41}-e_{23}\\
e_{23}-z_{41} & z_{31}-e_{24} & -e_{21}-z_{21} & e_{22}-z_{11}\\
e_{24}+z_{31} & e_{23}+z_{41} & z_{11}-e_{22} & -e_{21}-z_{21}
\end{pmatrix}
$$
$$
J_{13,i} = \begin{pmatrix}
e_{31}-z_{31} & e_{32}-z_{41} & e_{33}-z_{11} & e_{34}+z_{21}\\
e_{32}+z_{41} & -e_{31}-z_{31} & e_{34}+z_{21} & z_{11}-e_{33}\\
e_{33}-z_{11} & z_{21}-e_{34} & z_{31}-e_{31} & e_{32}+z_{41}\\
e_{34}-z_{21} & e_{33}-z_{11} & z_{41}-e_{32} & -e_{31}-z_{31}
\end{pmatrix}
$$
$$
J_{14,i} = \begin{pmatrix}
e_{41}-z_{41} & e_{42}+z_{31} & e_{43}-z_{21} & e_{44}-z_{11}\\
e_{42}-z_{31} & -e_{41}-z_{41} & e_{44}-z_{11} & z_{21}-e_{43}\\
e_{43}+z_{21} & z_{11}-e_{44} & -e_{41}-z_{41} & e_{42}+z_{31}\\
e_{44}-z_{11} & e_{43}+z_{21} & z_{31}-e_{42} & z_{41}-e_{41}
\end{pmatrix}
$$
[9] J. C. Chou and M. Kamel, "Finding the Position and Orientation of a Sensor on A Robot Manipulator Using Quaternions," Int. J. Robot. Research, vol. 10, no. 3, pp. 240–254, 1991.
[10] Y.-C. Lu and J. Chou, "Eight-space Quaternion Approach for Robotic Hand-eye Calibration," IEEE ICSMC 2002, pp. 3316–3321, 2002.
[11] K. Daniilidis, "Hand-eye Calibration Using Dual Quaternions," Int. J. Robot. Research, vol. 18, no. 3, pp. 286–298, 1999.
[12] N. Andreff, R. Horaud, and B. Espiau, "Robot Hand-eye Calibration Using Structure-from-Motion," Int. J. Robot. Research, vol. 20, no. 3, pp. 228–248, 2001.
[13] D. Condurache and A. Burlacu, "Orthogonal Dual Tensor Method for Solving the AX = XB Sensor Calibration Problem," Mech. Machine Theory, vol. 104, pp. 382–404, 2016.
[14] S. Gwak, J. Kim, and F. C. Park, "Numerical Optimization on the Euclidean Group with Applications to Camera Calibration," IEEE Trans. Robot. Autom., vol. 19, no. 1, pp. 65–74, 2003.
[15] J. Heller, D. Henrion, and T. Pajdla, "Hand-eye and Robot-World Calibration by Global Polynomial Optimization," IEEE ICRA 2014, pp. 3157–3164, 2014.
[16] Z. Zhao, "Simultaneous Robot-World and Hand-eye Calibration by the Alternative Linear Programming," Pattern Recogn. Lett., 2018. DOI: 10.1016/j.patrec.2018.08.023
[17] Z. Zhang, L. Zhang, and G. Z. Yang, "A Computationally Efficient Method for Hand-eye Calibration," Int. J. Comput. Assist. Radio. Surg., vol. 12, no. 10, pp. 1775–1787, 2017.
[18] H. Song, Z. Du, W. Wang, and L. Sun, "Singularity Analysis for the Existing Closed-Form Solutions of the Hand-eye Calibration," IEEE Access, vol. 6, pp. 75407–75421, 2018.
[19] H. Nguyen and Q.-C. Pham, "On the Covariance of X in AX = XB," IEEE Trans. Robot., vol. 34, no. 6, pp. 1651–1658, 2018.
[20] Y. Tian, W. R. Hamel, and J. Tan, "Accurate Human Navigation Using Wearable Monocular Visual and Inertial Sensors," IEEE Trans. Instrum. Meas., vol. 63, no. 1, pp. 203–213, 2014.
[21] A. Alamri, M. Eid, R. Iglesias, S. Shirmohammadi, and A. E. Saddik, "Haptic Virtual Rehabilitation Exercises for Post-stroke Diagnosis," IEEE Trans. Instrum. Meas., vol. 57, no. 9, pp. 1–10, 2007.
[22] S. Liwicki, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, "Euler Principal Component Analysis," Int. J. Comput. Vision, vol. 101, no. 3, pp. 498–518, 2013.
[23] A. Bartoli, D. Pizarro, and M. Loog, "Stratified Generalized Procrustes Analysis," Int. J. Comput. Vision, vol. 101, no. 2, pp. 227–253, 2013.
[24] L. Igual, X. Perez-Sala, S. Escalera, C. Angulo, and F. De La Torre, "Continuous Generalized Procrustes Analysis," Pattern Recogn., vol. 47, no. 2, pp. 659–671, 2014.
[25] C. I. Mosier, "Determining A Simple Structure When Loadings for Certain Tests Are Known," Psychometrika, vol. 4, no. 2, pp. 149–162, 1939.
[26] B. F. Green, "The Orthogonal Approximation of An Oblique Structure in Factor Analysis," Psychometrika, vol. 17, no. 4, pp. 429–440, 1952.
[27] M. W. Browne, "On Oblique Procrustes Rotation," Psychometrika, vol. 32, no. 2, pp. 125–132, 1967.
[28] G. Wahba, "A Least Squares Estimate of Satellite Attitude," SIAM Rev., vol. 7, no. 3, p. 409, 1965.
[29] M. D. Shuster and S. D. Oh, "Three-axis Attitude Determination from Vector Observations," AIAA J. Guid. Control Dyn., vol. 4, no. 1, pp. 70–77, 1981.
[30] R. Horaud, F. Forbes, M. Yguel, G. Dewaele, and J. Zhang, "Rigid and Articulated Point Registration with Expectation Conditional Maximization," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 3, pp. 587–602, 2011.
[31] N. Duta, A. K. Jain, and M. P. Dubuisson-Jolly, "Automatic Construction of 2D Shape Models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 5, pp. 433–446, 2001.
[32] K. S. Arun, T. S. Huang, and S. D. Blostein, "Least-Squares Fitting of Two 3-D Point Sets," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-9, no. 5, pp. 698–700, 1987.
[33] P. J. Besl and N. D. McKay, "A Method for Registration of 3-D Shapes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 239–256, 1992.
[34] M. Wang and A. Tayebi, "Hybrid Pose and Velocity-bias Estimation on SE(3) Using Inertial and Landmark Measurements," IEEE Trans. Autom. Control, 2018. DOI: 10.1109/TAC.2018.2879766
[35] J. Markdahl and X. Hu, "Exact Solutions to A Class of Feedback Systems on SO(n)," Automatica, vol. 63, pp. 138–147, 2016.
[36] J. D. Biggs and H. Henninger, "Motion Planning on a Class of 6-D Lie Groups via A Covering Map," IEEE Trans. Autom. Control, 2018. DOI: 10.1109/TAC.2018.2885241
[37] F. Thomas, "Approaching Dual Quaternions from Matrix Algebra," IEEE Trans. Robot., vol. 30, no. 5, pp. 1037–1048, 2014.
[38] W. S. Massey, "Cross Products of Vectors in Higher Dimensional Euclidean Spaces," Amer. Math. Monthly, vol. 90, no. 10, pp. 697–701, 1983.
[39] J. Wu, Z. Zhou, B. Gao, R. Li, Y. Cheng, and H. Fourati, "Fast Linear Quaternion Attitude Estimator Using Vector Observations," IEEE Trans. Auto. Sci. Eng., vol. 15, no. 1, pp. 307–319, 2018.
[40] J. Wu, M. Liu, Z. Zhou, and R. Li, "Fast Symbolic 3D Registration Solution," IEEE Trans. Auto. Sci. Eng., 2019. arXiv: 1805.08703
[41] I. Y. Bar-Itzhack, "New Method for Extracting the Quaternion from a Rotation Matrix," AIAA J. Guid. Control Dyn., vol. 23, no. 6, pp. 1085–1087, 2000.
[42] A. H. J. D. Ruiter and J. Richard, "On the Solution of Wahba's Problem on SO(n)," J. Astronautical Sci., pp. 734–763, Dec. 2014.
[43] J. Wu, "Optimal Continuous Unit Quaternions from Rotation Matrices," AIAA J. Guid. Control Dyn., vol. 42, no. 4, pp. 919–922, 2019.
[44] J. Wu, M. Liu, and Y. Qi, "Computationally Efficient Robust Algorithm for Generalized Sensor Calibration Problem AR = RB," IEEE Sensors J., 2019. DOI: 10.13140/RG.2.2.17632.74240
[45] P. Lourenço, B. J. Guerreiro, P. Batista, P. Oliveira, and C. Silvestre, "Uncertainty Characterization of The Orthogonal Procrustes Problem with Arbitrary Covariance Matrices," Pattern Recogn., vol. 61, pp. 210–220, 2017.
[46] R. L. Dailey, "Eigenvector Derivatives with Repeated Eigenvalues," AIAA J., vol. 27, no. 4, pp. 486–491, 1989.
[47] G. Chang, T. Xu, and Q. Wang, "Error Analysis of Davenport's q-method," Automatica, vol. 75, pp. 217–220, 2017.
[48] X.-S. Gao, X.-R. Hou, J. Tang, and H.-F. Cheng, "Complete Solution Classification for the Perspective-Three-Point Problem," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 8, pp. 930–943, 2003.
[49] T. Hamel and C. Samson, "Riccati Observers for the Nonstationary PnP Problem," IEEE Trans. Autom. Control, vol. 63, no. 3, pp. 726–741, 2018.
[50] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," Int. J. Comput. Vision, vol. 60, no. 2, pp. 91–110, 2004.
[51] R. Mahony, T. Hamel, and J. M. Pflimlin, "Nonlinear Complementary Filters on the Special Orthogonal Group," IEEE Trans. Autom. Control, vol. 53, no. 5, pp. 1203–1218, 2008.
[52] P. Payeur and C. Chen, "Registration of Range Measurements with Compact Surface Representation," IEEE Trans. Instrum. Meas., vol. 52, no. 5, pp. 1627–1634, 2003.
[53] L. Wei, C. Cappelle, and Y. Ruichek, "Camera/Laser/GPS Fusion Method for Vehicle Positioning under Extended NIS-Based Sensor Validation," IEEE Trans. Instrum. Meas., vol. 62, no. 11, pp. 3110–3122, 2013.
[54] A. Wan, J. Xu, D. Miao, and K. Chen, "An Accurate Point-Based Rigid Registration Method for Laser Tracker Relocation," IEEE Trans. Instrum. Meas., vol. 66, no. 2, pp. 254–262, 2017.
[55] Y. Zhuang, N. Jiang, H. Hu, and F. Yan, "3-D-Laser-Based Scene Measurement and Place Recognition for Mobile Robots in Dynamic Indoor Environments," IEEE Trans. Instrum. Meas., vol. 62, no. 2, pp. 438–450, 2013.
[56] Z. Hu, Y. Li, N. Li, and B. Zhao, "Extrinsic Calibration of 2-D Laser Rangefinder and Camera From Single Shot Based on Minimal Solution," IEEE Trans. Instrum. Meas., vol. 65, no. 4, pp. 915–929, 2016.
[57] S. Xie, D. Yang, K. Jiang, and Y. Zhong, "Pixels and 3-D Points Alignment Method for the Fusion of Camera and LiDAR Data," IEEE Trans. Instrum. Meas., 2018. DOI: 10.1109/TIM.2018.2879705
[58] Y. Wu, X. Hu, D. Hu, T. Li, and J. Lian, "Strapdown Inertial Navigation System Algorithms Based on Dual Quaternions," IEEE Trans. Aerosp. Elec. Syst., vol. 41, no. 1, pp. 110–132, 2005.
[59] N. Enayati, E. De Momi, and G. Ferrigno, "A Quaternion-Based Unscented Kalman Filter for Robust Optical/Inertial Motion Tracking in Computer-Assisted Surgery," IEEE Trans. Instrum. Meas., vol. 64, no. 8, pp. 2291–2301, 2015.
[60] J. Zhang, M. Kaess, and S. Singh, "A Real-time Method for Depth Enhanced Visual Odometry," Auto. Robots, vol. 41, no. 1, pp. 31–43, 2017.
Jin Wu was born in May 1994 in Zhenjiang, China. He received the B.S. degree from the University of Electronic Science and Technology of China, Chengdu, China. He has been a research assistant in the Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, since 2018. His research interests include robot navigation, multi-sensor fusion, automatic control and mechatronics. He is a co-author of over 30 technical papers in representative journals and conference proceedings of IEEE, AIAA, IET, etc. Mr. Wu received the outstanding reviewer award for the ASIAN JOURNAL OF CONTROL. One of his papers published in the IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING was selected as an ESI Highly Cited Paper by ISI Web of Science during 2017 to 2018. He is a member of IEEE.

Ming Liu received the B.A. degree in automation from Tongji University, Shanghai, China, in 2005, and the Ph.D. degree from the Department of Mechanical and Process Engineering, ETH Zürich, Zürich, Switzerland, in 2013, supervised by Prof. Roland Siegwart. During his master's study at Tongji University, he stayed one year at the Erlangen-Nürnberg University and Fraunhofer Institute IISB, Erlangen, Germany, as a Master Visiting Scholar.
He is currently with the Electronic and Computer Engineering, Computer Science and Engineering Department, Robotics Institute, The Hong Kong University of Science and Technology, Hong Kong, as an Assistant Professor. He is also a founding member of Shanghai Swing Automation Ltd., Co. He is coordinating and involved in NSF Projects and National 863-Hi-Tech-Plan Projects in China. His research interests include dynamic environment modeling, deep learning for robotics, 3-D mapping, machine learning, and visual control.
Dr. Liu was a recipient of the European Micro Aerial Vehicle Competition (EMAV'09) second place and two awards from the International Aerial Robot Competition (IARC'14) as a team member, the Best Student Paper Award as first author for MFI 2012 (IEEE International Conference on Multisensor Fusion and Information Integration), the Best Paper Award in Information for the IEEE International Conference on Information and Automation (ICIA 2013) as first author, the Best Paper Award Finalist as co-author, the Best RoboCup Paper Award for IROS 2013 (IEEE/RSJ International Conference on Intelligent Robots and Systems), the Best Conference Paper Award for IEEE-CYBER 2015, the Best Student Paper Finalist for RCAR 2015 (IEEE International Conference on Real-time Computing and Robotics), the Best Student Paper Finalist for ROBIO 2015, the Best Student Paper Award for IEEE-ICAR 2017, the Best Paper in Automation Award for IEEE-ICIA 2017, the innovation contest Chunhui Cup Winning Award twice, in 2012 and 2013, and the Wu Wenjun AI Award in 2016. He was the Program Chair of IEEE RCAR 2016 and the Program Chair of the International Robotics Conference in Foshan 2017. He was the Conference Chair of ICVS 2017. He has published many popular papers in top robotics journals including the IEEE TRANSACTIONS ON ROBOTICS, the INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH and the IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING. Dr. Liu is currently an Associate Editor for the IEEE ROBOTICS AND AUTOMATION LETTERS. He is a Senior Member of IEEE.

Yuxiang Sun received the bachelor's degree from the Hefei University of Technology, Hefei, China, in 2009, the master's degree from the University of Science and Technology of China, Hefei, in 2012, and the Ph.D. degree from The Chinese University of Hong Kong, Hong Kong, in 2017. He is currently a Research Associate with the Robotics Institute, Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong. His current research interests include mobile robots, autonomous vehicles, deep learning, SLAM and navigation, and motion detection.
Dr. Sun is a recipient of the Best Student Paper Finalist Award at the IEEE ROBIO 2015.