
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TIM.2019.2930710, IEEE Transactions on Instrumentation and Measurement.

IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT

Hand-eye Calibration: 4D Procrustes Analysis Approach

Jin Wu, Member, IEEE, Yuxiang Sun, Miaomiao Wang, Student Member, IEEE, and Ming Liu, Senior Member, IEEE

Abstract—We give a universal analytical solution to the hand-eye calibration problem AX = XB with known matrices A, B and unknown variable X, all in the special Euclidean group SE(3). The developed method relies on 4-dimensional Procrustes analysis. A unit-octonion representation is proposed for the first time to solve this Procrustes problem, through which an optimal closed-form eigen-decomposition solution is derived. By virtue of this solution, the uncertainty description of X, previously a sophisticated problem, can be obtained in a simpler manner. The proposed approach is then verified using simulations and real-world experiments on an industrial robotic arm. The results indicate that it achieves better accuracy, a better description of uncertainty, and much lower computation time.

Index Terms—Hand-eye Calibration, Homogeneous Transformation, Least Squares, Quaternions, Octonions

Manuscript received April 24, 2019; revised June 3, 2019 and June 18, 2019; accepted July 11, 2019. This research was supported by Shenzhen Science, Technology and Innovation Commission (SZSTI) JCYJ20160401100022706, in part by National Natural Science Foundation of China under the grants of No. U1713211 and 41604025. The Associate Editor coordinating the review process was XXX. (Corresponding author: Ming Liu)
J. Wu, Y. Sun and M. Liu are with the Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong, China (e-mail: jin wu [email protected]; [email protected]; [email protected]).
M. Wang is with the Department of Electronic and Computer Engineering, Western University, London, Ontario, Canada (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier XXX

I. INTRODUCTION

THE main hand-eye calibration problem studied in this paper aims to compute the unknown relative homogeneous transformation X between a robotic gripper and an attached camera, whose poses are denoted as A and B respectively, such that AX = XB. Hand-eye calibration can be solved via general solutions to the AX = XB problem or through minimizing direct models established using reprojection errors [1]. However, the hand-eye problem AX = XB is not restricted to manipulator-camera calibration. Rather, it has been applied to multiple sensor calibration problems including magnetic/inertial ones [2], camera/magnetic ones [3] and other general models [4]. That is to say, the solution of AX = XB is more general and has broader applications than methods based on reprojection-error minimization. The early study of the hand-eye calibration problem dates back to the 1980s, when researchers began trying to determine the gripper-camera transformation for accurate robotic perception and reconstruction. Over the past 30 years, a large variety of algorithms have been developed for the hand-eye problem AX = XB. Generally speaking, they can be categorized into two groups. The first group consists of algorithms that calculate the rotation in a first step and then compute the translation in a second step, while in the second group, algorithms compute the rotation and translation simultaneously. Many methods belong to the first group, which we call the separated ones, including rotation-logarithm based representatives like Tsai et al. [5], Shiu et al. [6], Park et al. [7], Horaud et al. [8], and the quaternion-based one from Chou et al. [9]. The simultaneous ones appear in the second group with related representatives of
1) Analytical solutions: the quaternion-based method by Lu et al. [10], the dual-quaternion based one by Daniilidis [11], the Sylvester-equation based one by Andreff et al. [12], and the dual-tensor based one by Condurache et al. [13].
2) Numerical solutions: gradient/Newton methods by Gwak et al. [14], the linear-matrix-inequality (LMI) based one by Heller et al. [15], the alternative-linear-programming based one by Zhao [16], and pseudo-inverse based ones by Zhang et al. [3], [17].
Each kind of algorithm has its own pros and cons. The separated ones cannot produce good enough results in cases where translation measurements are more accurate than rotation. The simultaneous ones can achieve better optimization performance but may consume a large amount of time when using numerical iterations. Some algorithms also suffer from their own ill-posed conditions in the presence of some extreme datasets [18]. What's more, the uncertainty description of X in the hand-eye problem AX = XB, being an important but difficult problem, troubled researchers until the first public general iterative solution by Nguyen et al. in 2018 [19]. An intuitive overview of these algorithms in the order of publication time can be found in Table I.
To date, hand-eye calibration has accelerated the development of the robotics community owing to its various usages in sensor calibration and motion sensing [20], [21]. Although it has been quite a long time since the first proposal of hand-eye calibration, research around it is still very active. There remains a problem: no algorithm can simultaneously estimate the X in AX = XB while

0018-9456 (c) 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

TABLE I
COMPARISONS BETWEEN RELATED METHODS

| Methods | Type | Parameterization or Basic Tools | Computation Speed | Accuracy | Has Uncertainty Description? |
|---|---|---|---|---|---|
| Tsai et al. 1989 [5] | Separated, Analytical | Rotation Logarithms | High | Medium | No |
| Shiu et al. 1989 [6] | Separated, Analytical | Rotation Logarithms | Low | Low | No |
| Park et al. 1994 [7] | Separated, Analytical | Rotation Logarithms, SVD | Medium | Medium | No |
| Horaud 1995 [8] | Separated, Analytical | Rotation Logarithms, Eigen-decomposition | High | Medium | No |
| Chou et al. 1991 [9] | Separated, Analytical | Quaternion, SVD | High | Medium | No |
| Daniilidis 1999 [11] | Simultaneous, Analytical | Dual Quaternion, SVD | Medium | Medium | No |
| Andreff et al. 2001 [12] | Simultaneous, Analytical | Sylvester Equation, Kronecker Product | High | Medium | No |
| Lu et al. 2002 [10] | Simultaneous, Analytical | Quaternion, SVD | Low | Medium | No |
| Gwak et al. 2003 [14] | Simultaneous, Optimization | Gradient/Newton Method | Very Low | High | No |
| Heller et al. 2014 [15] | Simultaneous, Optimization | Quaternion, Dual Quaternion, LMI | Very Low | High | No |
| Condurache et al. 2016 [13] | Simultaneous, Analytical | Dual Tensor, SVD or QR Decomposition | Medium | Medium | No |
| Zhang et al. 2017 [17] | Simultaneous, Optimization | Dual Quaternion, Pseudo Inverse | Medium | Medium | No |
| Zhao 2018 [16] | Simultaneous, Optimization | Dual Quaternion, Alternative Linear Programming | Very Low | High | No |
| Nguyen et al. 2018 [19] | Separated, Optimization | Rotation Iteration | Very Low | High | Yes |

preserving highly accurate uncertainty descriptions and consuming extremely low computation time. These difficulties are rather practical since, in the hand-eye problem AX = XB, the rotation and translation parts are tightly coupled with high nonlinearity, which motivated Nguyen et al. to derive the first-order approximation of the error covariance propagation. It is also this nonlinearity that makes the numerical iterations much slower.

To overcome the current algorithmic shortcomings, in this paper we study a new 4-dimensional (4D) Procrustes analysis tool for the representation of homogeneous transformations. Understanding manifolds has become a popular way for modern interior analysis of various data flows [22]. The geometric descriptions of these manifolds have always been vital and are usually addressed with the Procrustes analysis, which extracts the rigid, affine or non-rigid geometric mappings between datasets [23], [24]. Early research on Procrustes analysis has been conducted since the 1930s [25], [26], [27], and later generalized solutions have been applied to spacecraft attitude determination [28], [29], image registration [30], [31], laser scan matching using iterative closest points (ICP) [32], [33], etc. Motivated by these technological advances, this paper has the following contributions:
1) We show some analytical results on the 4D Procrustes analysis in Section III and apply them to the solution of the hand-eye calibration problem detailed in Section II.
2) Since all variables are directly propagated into the final results, the solving process is quite simple and computationally efficient.
3) Also, as the proposed solution is in the form of the spectrum decomposition of a 4 × 4 matrix, the closed-form probabilistic information is given precisely and flexibly for the first time using some recent results in automatic control.
Finally, via simulations and real-world robotic experiments in Section IV, the proposed method is shown to offer better potential accuracy, lower computational load and better uncertainty descriptions. Detailed comparisons are also presented to reveal the sensitivity of the proposed method subject to input noise and different parameter values.

II. PROBLEM FORMULATION

We start this section by defining some important notations used in this paper, mostly inherited from [34]. The n-dimensional real Euclidean space is represented by $\mathbb{R}^n$, which further generates the matrix space $\mathbb{R}^{m \times n}$ containing all real matrices with m rows and n columns. All n-dimensional rotation matrices belong to the special orthogonal group $SO(n) := \{R \in \mathbb{R}^{n \times n} \mid R^T R = I, \det(R) = 1\}$, where I denotes the identity matrix with proper size. The special Euclidean space is composed of a rotation matrix R and a translational vector t such that

$$SE(n) := \left\{ T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} \;\middle|\; R \in SO(n),\ t \in \mathbb{R}^n \right\} \quad (1)$$

with 0 denoting the zero matrix with adequate dimensions. The Euclidean norm of a given square matrix X is defined as $\|X\| = \sqrt{\mathrm{tr}(X^T X)}$, where tr denotes the matrix trace. The vectorization of an arbitrary matrix X is defined as vec(X), and ⊗ represents the Kronecker product between two matrices. For a given arbitrary matrix X, $X^\dagger$ is its Moore-Penrose generalized inverse. Any rotation R on SO(3) has its corresponding logarithm given by

$$\log(R) = \frac{\phi}{2 \sin \phi}\left(R - R^T\right) \quad (2)$$

in which $1 + 2\cos\phi = \mathrm{tr}(R)$. Given a 3D vector $x = (x_1, x_2, x_3)^T$, its associated skew-symmetric matrix is

$$[x]_\times = \begin{bmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{bmatrix} \quad (3)$$

satisfying $x \times y = [x]_\times y = -[y]_\times x$, where y is also an arbitrary 3D vector. The inverse map from the skew-symmetric matrix to the 3D vector is denoted as $[x]_\times^\wedge = x$.

Now let us describe the main problem in this paper. Given two measurement sets

$$\mathcal{A} = \{A_i \mid A_i \in SE(3),\ i = 1, 2, \cdots, N\}, \quad \mathcal{B} = \{B_i \mid B_i \in SE(3),\ i = 1, 2, \cdots, N\} \quad (4)$$

consider the hand-eye calibration least-squares problem:

$$\arg\min_{X \in SE(3)} J = \sum_{i=1}^{N} \|A_i X - X B_i\|^2 \quad (5)$$


where $A_i$ and $B_i$ are obtained from poses in two successive measurements (also see Fig. 1 in [11])

$$A_i = T_{A_{i+1}} T_{A_i}^{-1}, \quad B_i = T_{B_{i+1}} T_{B_i}^{-1} \quad (6)$$

with $T_{A_i}$ being the i-th camera pose with respect to the standard objects in the world frame, and

$$T_{B_i} = T_{B_{i,3}} T_{B_{i,2}} T_{B_{i,1}} \quad (7)$$

being the gripper poses with respect to the robotic base, in which $T_{B_{i,1}}, T_{B_{i,2}}, T_{B_{i,3}}$ are transformations between joints of the robotic arm. The relationship between these homogeneous transformations is illustrated in Fig. 1. The task in the remainder of this paper is to give a closed-form solution of X considering rotation and translation simultaneously and, moreover, to derive the uncertainty description of X.

Let us write A, B as

$$A = \begin{bmatrix} R_A & t_A \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} R_B & t_B \\ 0 & 1 \end{bmatrix} \quad (8)$$

Then one easily obtains

$$R_A R_X = R_X R_B, \quad R_A t_X + t_A = R_X t_B + t_X \quad (9)$$

The method by Park et al. [7] first computes $R_X$ from the first equation of (9) and then solves $t_X$ by inserting $R_X$ into the second sub-equation. Park's step for computing $R_X$ is tantamount to the following optimization

$$\arg\min_{R_X \in SO(3)} \sum_{i=1}^{N} \|R_X a_i - b_i\|^2 \quad (10)$$

with

$$a_i = [\log(R_{A_i})]^\wedge, \quad b_i = [\log(R_{B_i})]^\wedge \quad (11)$$

Note that (10) is in fact a rigid 3D registration problem which can be solved instantly with the singular value decomposition (SVD) or eigen-decomposition [29], [32]. However, the solution of Park et al. does not take the translation into account for $R_X$, while the accuracy of $R_X$ is actually affected by $t_X$. Therefore, there are other methods that compute $R_X$ and $t_X$ simultaneously [11], [12]. While these methods fix the remaining problem of Park et al., they may not achieve the global minimum, as the optimization

$$\arg\min_{R_X \in SO(3),\ t_X \in \mathbb{R}^3} \sum_{i=1}^{N} \left( \|R_X a_i - b_i\|^2 + \|R_X t_{B_i} + t_X - R_{A_i} t_X - t_{A_i}\|^2 \right) \quad (12)$$

is not always convex. Hence, iterative numerical methods have been proposed to achieve globally optimal estimates of $R_X$ and $t_X$, including the solvers in [3], [14], [16], [17]. In the following section, we show a new analytical perspective on the hand-eye calibration problem AX = XB using the proposed 4D Procrustes analysis.

III. 4D PROCRUSTES ANALYSIS

The results in this section are proposed for the first time for solving specific 4D Procrustes analysis problems. The developed approach is therefore named the 4DPA method for simplicity in later sections.

A. Some New Analytical Results

Problem 1: Let $\{\mathcal{U}\} = \{u_i \in \mathbb{R}^4\}$, $\{\mathcal{V}\} = \{v_i \in \mathbb{R}^4\}$, where $i = 1, 2, \cdots, N$, $N \geq 3$, be two point sets in which the correspondences are well matched such that $u_i$ corresponds exactly to $v_i$. Find the 4D rotation R and translation vector t such that

$$\arg\min_{R \in SO(4),\ t \in \mathbb{R}^4} \sum_{i=1}^{N} \|R u_i + t - v_i\|^2 \quad (13)$$

Fig. 1. The relationship between various homogeneous transformations for gripper-camera hand-eye calibration.
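To make the separated pipeline of (9)-(11) concrete, here is a small Python/NumPy sketch (ours, not the authors' code): the rotation is recovered from the logarithm vectors by an SVD-based orthogonal Procrustes step, and the translation then follows from stacking the second equation of (9) into a linear least-squares problem. Note that, consistent with $R_A R_X = R_X R_B$ in (9), the logarithm vectors satisfy $a_i = R_X b_i$, so this sketch aligns the $b_i$ onto the $a_i$:

```python
import numpy as np

def so3_log_vec(R):
    """Axis-angle vector of the rotation logarithm (2)."""
    phi = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if phi < 1e-12:
        return np.zeros(3)
    W = phi / (2.0 * np.sin(phi)) * (R - R.T)
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def separated_handeye(A_list, B_list):
    """Park-style separated solution (sketch): rotation first, translation second."""
    a = np.stack([so3_log_vec(A[:3, :3]) for A in A_list])  # camera motions
    b = np.stack([so3_log_vec(B[:3, :3]) for B in B_list])  # gripper motions
    H = b.T @ a                          # sum_i b_i a_i^T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R_X = Vt.T @ D @ U.T                 # minimizes sum ||R_X b_i - a_i||^2
    # Second equation of (9): (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked over i.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
    g = np.concatenate([R_X @ B[:3, 3] - A[:3, 3]
                        for A, B in zip(A_list, B_list)])
    t_X, *_ = np.linalg.lstsq(C, g, rcond=None)
    return R_X, t_X
```

At least two motions with non-parallel rotation axes are required for the stacked system to have full rank, which is the classical identifiability condition for hand-eye calibration.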


 
$$K = \begin{bmatrix} H_{11}+H_{22}+H_{33}+H_{44} & H_{12}-H_{21}-H_{34}+H_{43} & H_{13}+H_{24}-H_{31}-H_{42} & H_{14}-H_{23}+H_{32}-H_{41} \\ H_{12}-H_{21}+H_{34}-H_{43} & H_{33}-H_{22}-H_{11}+H_{44} & H_{14}-H_{23}-H_{32}+H_{41} & -H_{13}-H_{24}-H_{31}-H_{42} \\ H_{13}-H_{24}-H_{31}+H_{42} & -H_{14}-H_{23}-H_{32}-H_{41} & H_{22}-H_{11}-H_{33}+H_{44} & H_{12}+H_{21}-H_{34}-H_{43} \\ H_{14}+H_{23}-H_{32}-H_{41} & H_{13}-H_{24}+H_{31}-H_{42} & -H_{12}-H_{21}-H_{34}-H_{43} & H_{22}-H_{11}+H_{33}-H_{44} \end{bmatrix} \quad (19)$$

Solution: Problem 1 is actually a 4D registration problem that can be easily solved via the SVD:

$$R = V \,\mathrm{diag}[1, 1, 1, \det(UV)]\, U^T, \quad t = \bar{v} - R\bar{u} \quad (14)$$

with

$$U S V^T = H, \quad H = \sum_{i=1}^{N} (u_i - \bar{u})(v_i - \bar{v})^T, \quad \bar{u} = \frac{1}{N}\sum_{i=1}^{N} u_i, \quad \bar{v} = \frac{1}{N}\sum_{i=1}^{N} v_i \quad (15)$$

where $S = \mathrm{diag}(s_1, s_2, s_3, s_4)$ is the diagonal matrix containing all singular values of H. However, the SVD cannot reflect the interior geometry of SO(4), and such geometric information on special orthogonal groups is very helpful for further proofs [35], [36]. The 4D rotation can be characterized with two unit quaternions $q_L = (a, b, c, d)^T$ and $q_R = (p, q, r, s)^T$ by [37]

$$R = R_L(q_L) R_R(q_R) \quad (16)$$

with

$$R_L(q_L) = \begin{bmatrix} a & -b & -c & -d \\ b & a & -d & c \\ c & d & a & -b \\ d & -c & b & a \end{bmatrix}, \quad R_R(q_R) = \begin{bmatrix} p & -q & -r & -s \\ q & p & s & -r \\ r & -s & p & q \\ s & r & -q & p \end{bmatrix} \quad (17)$$

being the left and right matrices. Interestingly, such $R_L(q_L)$ and $R_R(q_R)$ are actually the matrix expressions for quaternion products from the left and right sides, respectively. Rising from 3D spaces, the 4D rotation is much more sophisticated because the 4D cross product is not as unique as that in the 3D case [38]. Therefore, methods previously relying on 3D skew-symmetric matrices are no longer extendable to 4D registration. The parameterization of $R \in SO(4)$ by (16) can also be unified with the unit octonion given by

$$\sigma = \frac{1}{\sqrt{2}}\left(q_L^T, q_R^T\right)^T \in \mathbb{R}^8 \quad (18)$$

Our task here is to derive the closed-form solution of such a σ and thereby compute R and t. Using the analytical form in (16), we can rewrite the rotation matrix R as $R = (c_1, c_2, c_3, c_4)$, with $c_1, c_2, c_3, c_4$ standing for the four columns of R, respectively. For each column, the algebraic factorization can be performed via

$$c_1 = P_1(\sigma)\sigma, \quad c_2 = P_2(\sigma)\sigma, \quad c_3 = P_3(\sigma)\sigma, \quad c_4 = P_4(\sigma)\sigma \quad (20)$$

where $P_1(\sigma), P_2(\sigma), P_3(\sigma), P_4(\sigma) \in \mathbb{R}^{4 \times 8}$ are given in Appendix A. These matrices, however, are subject to the following equalities

$$P_1(\sigma)P_1^T(\sigma) = P_2(\sigma)P_2^T(\sigma) = P_3(\sigma)P_3^T(\sigma) = P_4(\sigma)P_4^T(\sigma) = \frac{1}{2}\left(a^2+b^2+c^2+d^2+p^2+q^2+r^2+s^2\right) I = I \quad (21)$$

Then, following the step of [39], one can obtain that ideally

$$P_1^T(\sigma)H_1 + P_2^T(\sigma)H_2 + P_3^T(\sigma)H_3 + P_4^T(\sigma)H_4 = \sigma \quad (22)$$

where $H_1, H_2, H_3, H_4$ are the four rows of the matrix H, such that

$$H = \left(H_1^T, H_2^T, H_3^T, H_4^T\right)^T \quad (23)$$

Evaluating the left part of (22), an eigenvalue problem is derived [39]

$$W\sigma = \lambda_{W,\max}\sigma \quad (24)$$

The optimal eigenvector σ is associated with the maximum eigenvalue $\lambda_{W,\max}$ of W, with W being an 8 × 8 matrix of the form

$$W = \begin{bmatrix} 0 & K \\ K^T & 0 \end{bmatrix} \quad (25)$$

where K is shown in (19). This indicates that $\lambda_{W,\max}$ is subject to

$$\det\left(\lambda_{W,\max} I - W\right) = \det\left(\lambda_{W,\max} I\right) \det\left(\lambda_{W,\max} I - \frac{1}{\lambda_{W,\max}} K^T K\right) = \det\left(\lambda_{W,\max}^2 I - K^T K\right) \quad (26)$$

where the details are shown in Appendix A. In other words, $\lambda_{W,\max}^2$ is an eigenvalue of the 4 × 4 matrix $K^T K$. As symbolic solutions to generalized quartic equations have been detailed in [40], the computation of the eigenvalues of W is very simple. Once σ is computed, it also gives R and thus produces t according to (14).

Sub-Problem 1: Given an improper rotation matrix $\tilde{R}$ which is not strictly on SO(4), find the optimal rotation $R \in SO(4)$ that orthonormalizes $\tilde{R}$.

Solution: This is the orthonormalization problem and can be solved by replacing H with $\tilde{R}$ in (15), as indicated in [41], [42]
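A direct numerical rendering of the SVD route (14)-(15) for Problem 1 can be written as follows (Python/NumPy sketch, ours; the octonion eigen-route of (19)-(26) returns the same minimizer). With $H = \sum_i (u_i - \bar{u})(v_i - \bar{v})^T$, the rotation that maps the u-cloud onto the v-cloud is the determinant-corrected product of the SVD factors:

```python
import numpy as np

def procrustes_4d(u_pts, v_pts):
    """Solve Problem 1: min over R in SO(4), t in R^4 of sum ||R u_i + t - v_i||^2.
    u_pts, v_pts: (N, 4) arrays of matched points. Center both sets, build
    H = sum (u_i - u_bar)(v_i - v_bar)^T as in (15), take its SVD, assemble the
    determinant-corrected rotation, then t = v_bar - R u_bar as in (14)."""
    u_bar = u_pts.mean(axis=0)
    v_bar = v_pts.mean(axis=0)
    H = (u_pts - u_bar).T @ (v_pts - v_bar)
    U, _, Vt = np.linalg.svd(H)
    V = Vt.T
    D = np.diag([1.0, 1.0, 1.0, np.linalg.det(V @ U.T)])  # guard against reflections
    R = V @ D @ U.T
    t = v_bar - R @ u_bar
    return R, t
```

The diagonal correction forces $\det(R) = 1$, so the result is a proper 4D rotation even when the point clouds are noisy or nearly degenerate.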


and [43].

Problem 2: Let $\{\mathcal{E}\} = \{E_i \in SO(4)\}$, $\{\mathcal{Z}\} = \{Z_i \in SO(4)\}$, where $i = 1, 2, \cdots, N$, be two matrix sets in which $E_i$ corresponds exactly to $Z_i$. Find the 4D rotation R such that

$$\arg\min_{R \in SO(4)} \sum_{i=1}^{N} \|E_i R - R Z_i\|^2 \quad (27)$$

Solution: First we provide some properties of this problem. Problem 2 is very different from (10) because, for rotations on SO(4), the exponential map involves a 4 × 4 skew-symmetric matrix with six independent parameters. Therefore the previous 3D registration method cannot be extended to the 4D case. In the solution to Problem 1, we revealed some identities of the unit octonion for the representation of rotations on SO(4). Note that this is very similar to the previous quaternion decomposition from rotation (QDR) that has been used for solving AR = RB, where $A, B, R \in SO(3)$ [44]. We now extend the QDR to the octonion decomposition from rotation (ODR) for the solution of Problem 2.

Like (20), $R \in SO(4)$ can also be decomposed by rows such that

$$R = \left(r_1^T, r_2^T, r_3^T, r_4^T\right)^T, \quad r_1 = \sigma^T Q_1(\sigma), \quad r_2 = \sigma^T Q_2(\sigma), \quad r_3 = \sigma^T Q_3(\sigma), \quad r_4 = \sigma^T Q_4(\sigma) \quad (28)$$

where $Q_1(\sigma), Q_2(\sigma), Q_3(\sigma), Q_4(\sigma) \in \mathbb{R}^{4 \times 8}$ are shown in Appendix A.

Invoking this ODR, we are able to transform $E_i R - R Z_i$ into

$$E_i R - R Z_i = (M_{i,1}\sigma,\ M_{i,2}\sigma,\ M_{i,3}\sigma,\ M_{i,4}\sigma) \quad (29)$$

where $i = 1, 2, \cdots, N$ and

$$M_{i,1} = \begin{pmatrix} \sigma^T G_{11,i} \\ \sigma^T G_{12,i} \\ \sigma^T G_{13,i} \\ \sigma^T G_{14,i} \end{pmatrix}, \quad M_{i,2} = \begin{pmatrix} \sigma^T G_{21,i} \\ \sigma^T G_{22,i} \\ \sigma^T G_{23,i} \\ \sigma^T G_{24,i} \end{pmatrix}, \quad M_{i,3} = \begin{pmatrix} \sigma^T G_{31,i} \\ \sigma^T G_{32,i} \\ \sigma^T G_{33,i} \\ \sigma^T G_{34,i} \end{pmatrix}, \quad M_{i,4} = \begin{pmatrix} \sigma^T G_{41,i} \\ \sigma^T G_{42,i} \\ \sigma^T G_{43,i} \\ \sigma^T G_{44,i} \end{pmatrix} \quad (30)$$

in which each $G_{jk,i}$, $j, k = 1, 2, 3, 4$, takes the form

$$G_{jk,i} = \begin{bmatrix} 0 & J_{jk,i} \\ J_{jk,i}^T & 0 \end{bmatrix} \quad (31)$$

with parameter matrices $J_{jk,i}$ given in Appendix A. Afterwards, the optimal octonion can be sought by

$$\arg\min_{R \in SO(4)} \sum_{i=1}^{N} \|E_i R - R Z_i\|^2 = \arg\min_{\sigma^T\sigma=1} \sigma^T \left( \sum_{i=1}^{N} \sum_{j=1}^{4} \sum_{k=1}^{4} G_{jk,i}^2 \right) \sigma = \arg\min_{\sigma^T\sigma=1} \sigma^T F \sigma \quad (32)$$

where

$$F = \sum_{i=1}^{N} \sum_{j=1}^{4} \sum_{k=1}^{4} G_{jk,i}^2 = \sum_{i=1}^{N} \sum_{j=1}^{4} \sum_{k=1}^{4} \begin{bmatrix} J_{jk,i} J_{jk,i}^T & 0 \\ 0 & J_{jk,i}^T J_{jk,i} \end{bmatrix} \quad (33)$$

(32) indicates that σ is the eigenvector belonging to the minimum eigenvalue of F such that

$$F\sigma = \lambda_{F,\min}\sigma \quad (34)$$

Since $J_{jk,i} J_{jk,i}^T$ and $J_{jk,i}^T J_{jk,i}$ have quite the same spectrum distribution, (33) also implies that F has two minimum eigenvalues, one from each diagonal block, with their associated eigenvectors representing $q_L$ and $q_R$ respectively. That is to say, $q_L$ and $q_R$ are the eigenvectors of $F_{11}$ and $F_{22}$, such that

$$F_{11} = \sum_{i=1}^{N} \sum_{j=1}^{4} \sum_{k=1}^{4} J_{jk,i} J_{jk,i}^T, \quad F_{22} = \sum_{i=1}^{N} \sum_{j=1}^{4} \sum_{k=1}^{4} J_{jk,i}^T J_{jk,i}, \quad F = \begin{bmatrix} F_{11} & 0 \\ 0 & F_{22} \end{bmatrix} \quad (35)$$

associated with their minimum eigenvalues, respectively. Then inserting the computed $q_L$ and $q_R$ into (16) gives the optimal $R \in SO(4)$ for Problem 2.

Sub-Problem 2: Let $\{\mathcal{E}\} = \{E_i \in SO(4)\}$, $\{\mathcal{Z}\} = \{Z_i \in SO(4)\}$, where $i = 1, 2, \cdots, N$, be two sequential matrix sets in which $E_i$ does not exactly correspond to $Z_i$. Find the 4D rotation R

$$\arg\min_{R \in SO(4)} \sum_{i=1}^{N} \|E_i R - R Z_i\|^2 \quad (36)$$

provided that $\{\mathcal{E}\}$ and $\{\mathcal{Z}\}$ are sampled asynchronously.

Solution: In this problem, $\{\mathcal{E}\}$ and $\{\mathcal{Z}\}$ are asynchronously sampled measurements with different timestamps. First we need to interpolate the rotations for smooth consensus. Suppose that we have two successive homogeneous transformations $E_i, E_{i+1}$ with timestamps $\tau_{E,i}, \tau_{E,i+1}$ respectively. There exists a measurement $Z_i$ with timestamp $\tau_{Z,i} \in [\tau_{E,i}, \tau_{E,i+1}]$. Then the linear interpolation $E_{i,i+1}$ on SO(4) can be found by

$$\arg\min_{E_{i,i+1} \in SO(4)} \mathrm{tr}\left[ w\left(E_i^T E_{i,i+1} - I\right)\left(E_i^T E_{i,i+1} - I\right)^T + (1-w)\left(E_{i,i+1}^T E_{i+1} - I\right)\left(E_{i,i+1}^T E_{i+1} - I\right)^T \right]$$
$$\Rightarrow \arg\max_{E_{i,i+1} \in SO(4)} \mathrm{tr}\left\{ E_{i,i+1}\left[w E_i + (1-w) E_{i+1}\right]^T \right\} \quad (37)$$

where expanding the squares turns the minimization into the trace maximization, and w is the timestamp weight between $E_i$ and $E_{i+1}$ such that

$$w = \frac{\tau_{E,i+1} - \tau_{Z,i}}{\tau_{E,i+1} - \tau_{E,i}} \quad (38)$$
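The octonion eigen-route for Problem 2 needs the Appendix-A parameter matrices, which are not reproduced in this excerpt. As a self-contained cross-check (Python/NumPy sketch, ours, not the paper's method), the same minimizer of (27) can be obtained by Kronecker-vectorizing $E_i R - R Z_i$, taking the stacked system's least singular vector, and projecting back onto SO(4) with the Sub-Problem 1 orthonormalization:

```python
import numpy as np

def solve_problem2(E_list, Z_list):
    """Sketch for Problem 2: min sum ||E_i R - R Z_i||^2 over R in SO(4).
    Uses vec(E R - R Z) = (I kron E - Z^T kron I) vec(R) (column-major vec),
    then orthonormalizes the null vector onto SO(4) as in Sub-Problem 1."""
    I4 = np.eye(4)
    L = np.vstack([np.kron(I4, E) - np.kron(Z.T, I4)
                   for E, Z in zip(E_list, Z_list)])
    _, _, Vt = np.linalg.svd(L)
    R_tilde = Vt[-1].reshape(4, 4, order='F')   # least-singular-vector solution
    # Sub-Problem 1 step: SVD projection of the improper matrix onto SO(4).
    U, _, Wt = np.linalg.svd(R_tilde)
    D = np.diag([1.0, 1.0, 1.0, np.linalg.det(U @ Wt)])
    return U @ D @ Wt
```

Note that the sign of R is not observable from (27), since −R attains the same cost and $\det(-R) = \det(R)$ in four dimensions; callers may fix the sign by an external convention.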


Then the interpolation can be solved using the solution to Problem 1 by letting $H = [w E_i + (1-w) E_{i+1}]^T$. After the interpolation, a new interpolated set $\{\tilde{\mathcal{E}}\}$ can be established that corresponds well to $\{\mathcal{Z}\}$, and R can be solved via the solution to Problem 2.

B. Uncertainty Descriptions

In this sub-section, we use $\hat{x}$ to represent the noise-disturbed value of the vector x. The expectation is denoted by $\langle \cdots \rangle$ [29], [45]. All the errors in this sub-section are assumed to be zero-mean, as can be found in [29]. As all the solutions provided in the last sub-section are in spectrum-decomposition form, the errors of the derived quaternions can be given by perturbation theory [46]. In a recent error analysis for attitude determination from vector observations by Chang et al. [47], the first-order error of the estimated deterministic quaternion q is

$$\delta q = \left[ q^T \otimes (\lambda_{\max} I - M)^\dagger \right] \delta m \quad (39)$$

provided that $m = \mathrm{vec}(M)$, with $\lambda_{\max}$ being the maximum eigenvalue of the real symmetric matrix M

$$Mq = \lambda_{\max} q \quad (40)$$

The above quaternion error is presented under the assumption that δq is multiplicative such that

$$\hat{q} = \delta q \odot q \quad (41)$$

where ⊙ denotes the quaternion product. The following contents discuss the covariance expressions for this type of quaternion error.

Using (39), we have the following quaternion error covariance

$$\Sigma_{\delta q} = \left\langle \delta q\, \delta q^T \right\rangle = \left[ q^T \otimes (\lambda_{\max} I - M)^\dagger \right] \left\langle \delta m\, \delta m^T \right\rangle \left[ q^T \otimes (\lambda_{\max} I - M)^\dagger \right]^T = \left[ q^T \otimes (\lambda_{\max} I - M)^\dagger \right] \Sigma_{\delta m} \left[ q^T \otimes (\lambda_{\max} I - M)^\dagger \right]^T \quad (42)$$

in which

$$\Sigma_{\delta m} = \left( \frac{\partial m}{\partial b} \right) \Sigma_b \left( \frac{\partial m}{\partial b} \right)^T \quad (43)$$

where b denotes all input variables contributing to the final form of M. Let us take the solution to Problem 2 as an example. For $q_L$, we have

$$F_{11} q_L = \lambda_{F,\min} q_L = \lambda_{F_{11},\min} q_L \quad (44)$$

which yields

$$M = -F_{11}, \quad \lambda_{\max} = -\lambda_{F_{11},\min}, \quad m = -\mathrm{vec}(F_{11}), \quad \frac{\partial m}{\partial b} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{4} \sum_{k=1}^{4} \frac{\partial\, \mathrm{vec}\left(J_{jk,i} J_{jk,i}^T\right)}{\partial b}, \quad b = \left( \mathrm{vec}(E_i)^T, \mathrm{vec}(Z_i)^T \right)^T \quad (45)$$

where we assume that b for every pair of $\{E_i, Z_i\}$ has the same probabilistic distribution. The computation of $\partial\left(J_{jk,i} J_{jk,i}^T\right)/\partial b$ can be intuitively conducted using the analytical forms of the matrices in Appendix A, and this part of the work is left to the audience of this paper. The covariance of $q_R$ can therefore be computed by replacing $F_{11}$ with $F_{22}$ in (45). The cross-covariance between $q_L$ and $q_R$ can also be given as follows

$$\Sigma_{\delta q_L \delta q_R} = \left\langle \delta q_L\, \delta q_R^T \right\rangle = \left\langle \left[ q_L^T \otimes (F_{11} - \lambda_{F_{11},\min} I)^\dagger \right] \delta m_L\, \delta m_R^T \left[ q_R^T \otimes (F_{22} - \lambda_{F_{22},\min} I)^\dagger \right]^T \right\rangle = \left[ q_L^T \otimes (F_{11} - \lambda_{F_{11},\min} I)^\dagger \right] \Sigma_{\delta m_L \delta m_R} \left[ q_R^T \otimes (F_{22} - \lambda_{F_{22},\min} I)^\dagger \right]^T \quad (46)$$

in which $m_L = \mathrm{vec}(F_{11})$, $m_R = \mathrm{vec}(F_{22})$, and $\Sigma_{\delta m_L \delta m_R}$ is given by

$$\Sigma_{\delta m_L \delta m_R} = \left( \frac{\partial m_L}{\partial b} \right) \Sigma_b \left( \frac{\partial m_R}{\partial b} \right)^T \quad (47)$$

Eventually, the covariance of the octonion σ will be

$$\Sigma_\sigma = \begin{bmatrix} \Sigma_{\delta q_L} & \Sigma_{\delta q_L \delta q_R} \\ \Sigma_{\delta q_R \delta q_L} & \Sigma_{\delta q_R} \end{bmatrix} \quad (48)$$

C. Solving AX = XB from the SO(4) Perspective

The SO(4) parameterization of SE(3) is presented by Thomas in [37] as

$$T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} \overset{F_1}{\longleftrightarrow} R_{T,SO(4)} = \begin{bmatrix} R & \varepsilon t \\ \varepsilon t^T R & 1 \end{bmatrix} \quad (49)$$

in which ε denotes the dual unit satisfying $\varepsilon^2 = 0$. The right part of (49) is on SO(4), and a practical method for approaching the corresponding homogeneous transformation is to choose a very tiny number $\varepsilon = 1/d$, where $d \gg 1$ is a positive scaling factor, to generate a real-number approximation of $R_{T,SO(4)}$:

$$R_{T,SO(4)} \approx \begin{bmatrix} R & \frac{1}{d} t \\ \frac{1}{d} t^T R & 1 \end{bmatrix} \quad (50)$$

It is also noted that the mapping in (49) is not unique. For instance, the following mapping also satisfies $R_{T,SO(4)}^T R_{T,SO(4)} = R_{T,SO(4)} R_{T,SO(4)}^T = I$ when $d \gg 1$:

$$T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} \overset{F_2}{\longleftrightarrow} R_{T,SO(4)} = \begin{bmatrix} R & \varepsilon t \\ -\varepsilon t^T R & 1 \end{bmatrix} \quad (51)$$

The convenience of such a mapping from SE(3) to SO(4) is that some nonlinear equations on SE(3) can be turned into linear ones on SO(4). Choosing a scaling factor d makes an approximation of the homogeneous transformation on SO(4). Then the conventional hand-eye calibration problem AX = XB can be shifted to

$$\arg\min_{X \in SE(3)} J = \sum_{i=1}^{N} \|A_i X - X B_i\|^2 \;\Rightarrow\; \arg\min_{R_{X,SO(4)} \in SO(4)} \sum_{i=1}^{N} \left\| R_{A_i,SO(4)} R_{X,SO(4)} - R_{X,SO(4)} R_{B_i,SO(4)} \right\|^2 \quad (52)$$
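The first-order eigenvector perturbation (39) is easy to validate numerically. The sketch below (Python/NumPy, ours; the function name is our own) builds the operator $q^T \otimes (\lambda_{\max} I - M)^\dagger$ for a symmetric M and applies it to a small symmetric perturbation δM; the covariance in (42) is then just a congruence with this operator. vec(·) is taken column-major to match the Kronecker identity $(q^T \otimes P)\,\mathrm{vec}(\delta M) = P\,\delta M\, q$:

```python
import numpy as np

def eigvec_first_order_error(M, dM):
    """First-order error (39) of the dominant eigenvector of symmetric M:
    delta_q = [q^T kron (lmax I - M)^+] vec(dM), with column-major vec."""
    lam, V = np.linalg.eigh(M)              # eigenvalues in ascending order
    q, lmax = V[:, -1], lam[-1]
    n = M.shape[0]
    J = np.kron(q[None, :], np.linalg.pinv(lmax * np.eye(n) - M))
    dq = J @ dM.flatten(order='F')
    return dq, q, J

# Covariance propagation as in (42): Sigma_dq = J @ Sigma_dm @ J.T
```

The pseudo-inverse annihilates the q-direction, so the predicted error is orthogonal to q, which is consistent with a multiplicative, norm-preserving error model to first order.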


which can be instantly solved via the solution to Problem 2. With asynchronously sampled measurements, the problem can be refined with the solution to Sub-Problem 2. While the uncertainty description of σ related to $R_{X,SO(4)}$ was shown in the last sub-section, we now reveal what the covariances of the rotation and translation look like. Suppose that we have obtained the covariance of σ in (48) for $R_{X,SO(4)}$. The covariances between columns of the rotation matrix and the cross-covariances between columns of rotation and translation are considered. Recalling (20), one finds that the covariance between the i-th column and the j-th column of $R_{X,SO(4)}$ is calculated by

$$\Sigma_{\delta c_i \delta c_j} = \left\langle \left[ P_i(\delta\sigma)\sigma + P_i(\sigma)\delta\sigma \right] \left[ P_j(\delta\sigma)\sigma + P_j(\sigma)\delta\sigma \right]^T \right\rangle = \left\langle Y_i(\sigma)\delta\sigma\delta\sigma^T Y_j^T(\sigma) + P_i(\sigma)\delta\sigma\delta\sigma^T P_j^T(\sigma) + Y_i(\sigma)\delta\sigma\delta\sigma^T P_j^T(\sigma) + P_i(\sigma)\delta\sigma\delta\sigma^T Y_j^T(\sigma) \right\rangle = Y_i(\sigma)\Sigma_\sigma Y_j^T(\sigma) + P_i(\sigma)\Sigma_\sigma P_j^T(\sigma) + Y_i(\sigma)\Sigma_\sigma P_j^T(\sigma) + P_i(\sigma)\Sigma_\sigma Y_j^T(\sigma) \quad (53)$$

where $P_i(\delta\sigma)\sigma = Y_i(\sigma)\delta\sigma$ and $Y_i(\sigma)$ is a linear mapping of σ which can be evaluated by symbolic computations. For the current 4D Procrustes analysis, interestingly, we have

$$Y_i(\sigma) = P_i(\sigma) \quad (54)$$

Therefore (53) can be interpreted as

$$\Sigma_{\delta c_i \delta c_j} = 4 P_i(\sigma) \Sigma_\sigma P_j^T(\sigma) \quad (55)$$

In particular, the rotation-translation cross-covariances are described by taking the first 3 rows and 3 columns of the covariance matrices $d \cdot \Sigma_{\delta c_1 \delta c_4}$, $d \cdot \Sigma_{\delta c_2 \delta c_4}$, $d \cdot \Sigma_{\delta c_3 \delta c_4}$, respectively. More specifically, if we need to obtain the covariance of $R_{X,SO(4)}$, one arrives at

$$\Sigma_{R_{X,SO(4)}} = \left\langle \delta R_{X,SO(4)}\, \delta R_{X,SO(4)}^T \right\rangle = \sum_{i=1}^{4} \left[ Y_i(\sigma) + P_i(\sigma) \right] \Sigma_\sigma \left[ Y_i(\sigma) + P_i(\sigma) \right]^T = 4 \sum_{i=1}^{4} P_i(\sigma) \Sigma_\sigma P_i^T(\sigma) \quad (56)$$

where

$$\delta R_{X,SO(4)} = \left( \delta P_1(\sigma)\sigma + P_1(\sigma)\delta\sigma,\ \delta P_2(\sigma)\sigma + P_2(\sigma)\delta\sigma,\ \delta P_3(\sigma)\sigma + P_3(\sigma)\delta\sigma,\ \delta P_4(\sigma)\sigma + P_4(\sigma)\delta\sigma \right)$$

D. Discussion

The presented SO(4) algorithm for hand-eye calibration has the following advantages:
1) It can simultaneously solve for rotation and translation in X for the hand-eye calibration problem AX = XB and thus has comparable accuracy and robustness with previous representatives.
2) All the items from A and B are directly propagated to the final eigen-decomposition without any preprocessing techniques, e.g. the quaternion conversion from rotation or the rotation logarithm remaining in previous literature.
3) Owing to the direct propagation of variables to the final result, the computation speed is extremely fast.
4) The uncertainty descriptions can be obtained easily with the given analytical results.

However, the proposed method also has a drawback: the accuracy of the final computed X is affected by the scaling factor d. Here one can see that d is a factor that scales the translation part to a small vector. However, this does not mean that a larger d will lead to better performance, since a very large d may reduce the significant digits of a fixed word-length floating-point number. Therefore, d can be empirically determined according to the scale of the translation vector and the required accuracy of floating-point processing. For instance, on a 32-bit computer a single-precision floating-point number requires 4 bytes of storage; then $d = 1 \times 10^5 \sim 1 \times 10^6 \approx 2^{16} \sim 2^{20}$ is sufficient, guaranteeing an accuracy of at least $2^{20-32}\,\mathrm{m} = 2^{-12}\,\mathrm{m} = 2.44 \times 10^{-4}\,\mathrm{m}$, which is enough for most optical systems with measurement ranges of 10 m. How to choose the most appropriate d dynamically and optimally will be a difficult but significant task in later works. The algorithmic procedures of the proposed method are described in Algorithm 1 for intuitive implementation. Engineers can also turn to the links in the Acknowledgement for some MATLAB codes.

Algorithm 1 The Proposed 4DPA Method for Hand-eye Calibration
Parameter: Empirical value of d.
Require:
1) Get N measurements of $A_i, B_i$ in $\{\mathcal{A}\}$ and $\{\mathcal{B}\}$ respectively. If they are not synchronously measured, get the most appropriate interpolated sets using the solution to Sub-Problem 2.
2) Select a scaling factor d empirically for SE(3) − SO(4)
  mapping.
[Y1 (σ) + P1 (σ)] δσ, [Y2 (σ) + P2 (σ)] δσ,
= Step 1: Convert measurements in {A} and {B} to rotations
[Y3 (σ) + P3 (σ)] δσ, [Y4 (σ) + P4 (σ)] δσ
(57) on SO(4) via (51).
The covariance of RX then equals to Step 2: Solve the hand-eye calibration problem AX = XB
via the solution to Problem 2. Remap the calculated SO(4)
ΣRX = ΣRX,SO(4) (1 : 3, 1 : 3) (58) solution to SE(3) using (51).
where (1 : 3, 1 : 3) denotes the block containing first 3 rows Step 3: Obtain the covariance of the octonion σ related to X.
and columns. Finally, the covariance of tX is given by Compute the rotation-rotation and rotation-translation cross-
covariances via (55).
ΣtX = d2 Σδc4 δc4 (1 : 3, 1 : 3) (59)
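The precision trade-off in choosing d can be illustrated numerically. The sketch below is an illustration, not part of the paper's algorithm: a translation component t is scaled by 1/d, stored in single precision alongside an O(1) rotation entry (as happens inside the SO(4) matrix), and rescaled, so the recoverable accuracy degrades roughly as d·2^{−23}:

```python
import numpy as np

def recovered_error(t, d):
    """Store t/d in float32 next to an O(1) entry, rescale by d, and
    return the absolute reconstruction error of the translation."""
    stored = np.float32(1.0 + t / d)          # t/d shares the exponent range of O(1) entries
    return abs((float(stored) - 1.0) * d - t)

t = 0.5  # translation component in metres
for d in (1e3, 1e6, 1e8):
    print(f"d = {d:.0e}: error = {recovered_error(t, d):.3e} m")
```

With d = 1e8 the scaled translation falls below the float32 resolution of the O(1) entries and is lost entirely, which is the "too large d" failure mode discussed above.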

0018-9456 (c) 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TIM.2019.2930710, IEEE
Transactions on Instrumentation and Measurement
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT 8

IV. EXPERIMENTAL RESULTS

A. Experiments on a Robotic Arm

The first category of experiments is conducted for a gripper-camera hand-eye calibration depicted in Fig. 1. The dataset is generated using a UR10 robotic arm and an Intel Realsense D435i camera attached firmly to the end-effector (gripper) of the robotic arm (see Fig. 2).

Fig. 2. The gripper-camera hand-eye calibration experiment.

The UR10 robotic arm gives accurate outputs of the homogeneous transformations of its various joints relative to its base. The D435i camera contains color, depth and fisheye sub-cameras along with an inertial measurement unit. In this sub-section the transformation of the end-effector T_{B_i} of the robotic arm is computed from the transformations of all joints via (7). We only pick the color images from the D435i to obtain the transformation of the camera with respect to the 12×9 chessboard. Note that in Fig. 1, the standard objects can be arbitrary ones with certain pre-known models, e.g. a point-cloud model or a computer-aided-design (CAD) model. The D435i is factory-calibrated for its intrinsic parameters and we construct the following projection model for the utilized camera

l_{cam,j} = ( l_{cam,1,j}, l_{cam,2,j} )^T,
( l_{cam,1,j}, l_{cam,2,j}, 1 )^T = O ( L_{cam,1,j}/L_{cam,3,j}, L_{cam,2,j}/L_{cam,3,j}, 1 )^T    (60)

where l_{cam,j} denotes the j-th measured feature point (corner) of the chessboard in the camera imaging frame; O is the matrix accounting for the intrinsic parameters of the camera; L_{cam,j} = (L_{cam,1,j}, L_{cam,2,j}, L_{cam,3,j})^T is the projected j-th feature point in the camera ego-motion frame. To obtain the i-th pose between the camera and the chessboard, we can relate the standard point coordinates of the chessboard L_{chess,j}, j = 1, 2, · · · , in the world frame from a certain model with those in the camera frame by

L_{chess,j} = T_{A_i} L_{cam,j}    (61)

By minimizing the projection errors from (61), T_{A_i} is obtained with nonlinear optimization techniques, e.g. the Perspective-n-Point algorithm [48], [49]. In our experiment, the scale-invariant feature transform (SIFT) is invoked for extraction of the corner points of the chessboard [50]. We use several datasets captured from our platform to produce comparisons with representatives, including the classical methods of Tsai et al. [5], Chou et al. [9], Park et al. [7], Daniilidis [11], Andreff et al. [12] and the recent ones of Heller et al. [15], Zhang et al. [17], Zhao [16]. The error of the hand-eye calibration is defined as follows

Error = sqrt( (1/N) Σ_{i=1}^{N} ‖A_i X − X B_i‖² )    (62)

where A_i and B_i are detailed in (6). All timing statistics, computation and visualization are carried out on a MacBook Pro 2017 with an i7 3.5 GHz CPU and the MATLAB r2018a software. All the algorithms are implemented using the least coding resources. We employ YALMIP to solve the

TABLE II
COMPARISONS WITH CLASSICAL METHODS FOR GRIPPER-CAMERA HAND-EYE CALIBRATION. Each cell: Error / Time (s).

| Cases   | Tsai 1989 [5]           | Chou 1991 [9]           | Park 1994 [7]           | Daniilidis 1999 [11]    | Andreff 2001 [12]       | Proposed 4DPA 2019      |
| 1 (224) | 1.2046e-02 / 4.0506e-02 | 6.8255e-03 / 1.0190e-02 | 6.8254e-03 / 3.5959e-02 | 6.7082e-03 / 3.8308e-02 | 7.2650e-03 / 5.7292e-03 | 6.7857e-03 / 4.4819e-03 |
| 2 (253) | 1.0243e-02 / 4.2952e-02 | 5.8650e-03 / 1.1760e-02 | 5.8650e-03 / 3.9486e-02 | 5.7704e-03 / 4.1492e-02 | 5.8754e-03 / 6.8972e-03 | 5.8290e-03 / 4.8991e-03 |
| 3 (298) | 8.0653e-03 / 4.9823e-02 | 5.0514e-03 / 1.3105e-02 | 5.0517e-03 / 4.6559e-02 | 4.9084e-03 / 4.8626e-02 | 5.4648e-03 / 7.2326e-03 | 4.9803e-03 / 5.5470e-03 |
| 4 (342) | 6.8136e-03 / 5.3854e-02 | 4.7192e-03 / 1.4585e-02 | 4.7254e-03 / 5.6970e-02 | 4.0379e-03 / 5.3721e-02 | 4.8374e-03 / 6.7880e-03 | 4.0207e-03 / 5.5404e-03 |
| 5 (392) | 5.5242e-03 / 6.3039e-02 | 3.4047e-03 / 1.7253e-02 | 3.4012e-03 / 6.0697e-02 | 3.3850e-03 / 6.4505e-02 | 4.0410e-03 / 8.9159e-03 | 3.1678e-03 / 7.3957e-03 |
| 6 (433) | 4.8072e-03 / 6.7117e-02 | 2.9723e-03 / 1.8354e-02 | 2.9694e-03 / 6.9967e-02 | 2.8957e-03 / 6.5703e-02 | 3.6410e-03 / 9.8837e-03 | 2.7890e-03 / 7.8954e-03 |
| 7 (470) | 4.3853e-03 / 7.6869e-02 | 2.7302e-03 / 1.9827e-02 | 2.7270e-03 / 1.9827e-02 | 2.6640e-03 / 7.5553e-02 | 3.3262e-03 / 1.0373e-02 | 2.5397e-03 / 8.1632e-03 |
| 8 (500) | 4.0938e-03 / 8.5545e-02 | 2.5855e-03 / 2.1506e-02 | 2.5807e-03 / 7.5722e-02 | 2.4610e-03 / 7.7225e-02 | 3.0083e-03 / 1.2008e-02 | 2.3137e-03 / 8.5382e-03 |

TABLE III
COMPARISONS WITH RECENT METHODS FOR GRIPPER-CAMERA HAND-EYE CALIBRATION. Each cell: Error / Time (s).

| Cases   | Heller 2014 [15]        | Zhang 2017 [17]         | Zhao 2018 [16]          | Proposed 4DPA 2019      |
| 1 (224) | 7.9803e-03 / 1.4586e-02 | 7.6125e-03 / 8.8201e-03 | 7.1485e-03 / 8.0861e-02 | 6.7857e-03 / 4.4819e-03 |
| 2 (253) | 6.7894e-03 / 1.5661e-02 | 6.7908e-03 / 9.7096e-03 | 6.1710e-03 / 1.0787e-01 | 5.8290e-03 / 4.8991e-03 |
| 3 (298) | 5.6508e-03 / 1.8173e-02 | 6.0203e-03 / 1.2561e-02 | 5.3021e-03 / 1.4893e-01 | 4.9803e-03 / 5.5470e-03 |
| 4 (342) | 4.7192e-03 / 2.2755e-02 | 5.2743e-03 / 1.5332e-02 | 4.4003e-03 / 2.2291e-01 | 4.0207e-03 / 5.5404e-03 |
| 5 (392) | 3.8706e-03 / 2.5399e-02 | 4.3269e-03 / 1.8800e-02 | 3.6041e-03 / 3.0347e-01 | 3.1678e-03 / 7.3957e-03 |
| 6 (433) | 3.3504e-03 / 2.6373e-02 | 3.9732e-03 / 2.1613e-02 | 3.1658e-03 / 3.8768e-01 | 2.7890e-03 / 7.8954e-03 |
| 7 (470) | 3.0547e-03 / 2.7973e-02 | 3.3633e-03 / 2.5498e-02 | 2.8912e-03 / 5.0153e-01 | 2.5397e-03 / 8.1632e-03 |
| 8 (500) | 2.8862e-03 / 2.9879e-02 | 3.0215e-03 / 3.2041e-02 | 2.6871e-03 / 5.8727e-01 | 2.3137e-03 / 8.5382e-03 |
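The Error column of Tables II and III follows the metric in (62). A minimal sketch of how it can be computed over homogeneous 4×4 pose pairs, assuming ‖·‖ is the Frobenius norm (the variable names are illustrative, not from the paper's code):

```python
import numpy as np

def hand_eye_error(A_list, B_list, X):
    """Residual of (62): sqrt of the mean squared Frobenius norm of A_i X - X B_i."""
    residuals = [np.linalg.norm(A @ X - X @ B, 'fro') ** 2
                 for A, B in zip(A_list, B_list)]
    return np.sqrt(np.mean(residuals))

# Consistency check: noise-free B_i = X^{-1} A_i X gives a vanishing residual.
X = np.eye(4); X[:3, 3] = [0.1, -0.2, 0.3]
A = np.eye(4); A[:3, :3] = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]; A[:3, 3] = [1, 2, 3]
B = np.linalg.inv(X) @ A @ X
print(hand_eye_error([A], [B], X) < 1e-12)  # → True
```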


LMI dqhec optimization in the method of Heller et al. [15]. For Zhao's method [16], we invoke the fmincon function in MATLAB for the numerical solution.

The robotic arm is rigidly installed on the testing table and is operated smoothly and periodically to capture images of the chessboard from various directions. Using the mechanisms described above we form the series {A}, {B}. We select d = 10^4 as the scaling factor for evaluation in this sub-section, as the translational components are all within [−2, 2] m and in such a range the camera has an empirical positioning accuracy of about 0.05 ∼ 0.2 m. We choose the F_2 mapping in (51) for conversion from SE(3) to SO(4), since in real applications it obtains much more accurate hand-eye calibration results than the F_1 mapping (49) presented in [37]. The scalar thresholds for the other numerical methods are all set to 1×10^{−15} to guarantee accuracy. We conduct 8 groups of experiments using the experimental platform. The errors and computation times are processed 100 times for averaged performance evaluation, as provided in Tables II and III. The least errors are marked in green and the best computation times are tagged in blue. The statistics of the proposed method are marked bold in the tables for emphasis. The digits after the case serial numbers indicate the sample counts for the studied case.

One can see that with growing sample counts, all the methods obtain more accurate estimates of X, while with larger quantities of measurements the processing speeds of the algorithms become slower. However, among all methods, whether analytical or iterative, the proposed SO(4) method almost always gives the most accurate results within the least computation time. The reason is that the proposed 4DPA computes the rotation and translation in X simultaneously and optimizes the loss function J of the hand-eye calibration problem well. The proposed algorithm obtains better results than almost all other analytical and numerical ones, except for cases 1 ∼ 3 using the method of Daniilidis. This indicates that with few samples the accuracy of the proposed 4DPA is lower than that of the method of Daniilidis, but still close. However, few samples imply relatively low confidence in calibration accuracy, and for cases with higher quantities of measurements the proposed 4DPA method is always the best. This shows that the designed 4D Procrustes analysis for mapping from SE(3) to SO(4) is more efficient than other tools, e.g. the mappings based on the dual quaternion [11] and the Kronecker product [12]. Furthermore, our method uses the eigen-decomposition for its solution, which is regarded as a robust tool for engineering problems. Our method reduces the estimation error to about 3.06% ∼ 94.01% of the original statistics compared with classical algorithms and 0.39% ∼ 86.11% compared with recent numerical ones. The proposed method is also free of pre-processing techniques like the quaternion conversion in other algorithms. All the matrix operations are simple and intuitive, which makes the computation very fast. Our method lowers the computation time to about 9.98% ∼ 70.68% of the original statistics compared with classical analytical algorithms and 1.45% ∼ 28.58% compared with recent numerical ones. A synchronized sequence of camera-chessboard poses and end-effector poses is made open-source (see the links in the Acknowledgement). Every researcher can freely download this dataset and evaluate the accuracy and computational efficiency. The advantages of the developed method in both precision and computation time will lead to very effective implementations of hand-eye calibration for industrial applications in the future.

B. The Error Sensitivity to the Noises and Different Parameters of d

In this sub-section, we study the sensitivity of the proposed method subject to input measurement noises. We define the noise-corrupted models of the rotations as

R_{A,i} = R̂_{A,i} + Error_{R_X} R_1
R_{B,i} = R̂_{B,i} + Error_{R_X} R_2    (63)

where R_1, R_2 are random matrices whose columns are subject to the Gaussian distribution with covariance I, and Error_{R_X} is a scalar accounting for the rotation error. Likewise, the noise models of the translations are given by

t_{A,i} = t̂_{A,i} + Error_{t_X} T_1
t_{B,i} = t̂_{B,i} + Error_{t_X} T_2    (64)

with noise scale Error_{t_X} and noise vectors T_1, T_2 subject to the normal distribution, also with covariance I. The perturbed rotations are orthonormalized after the addition of the noises. Here, the Gaussian distribution is selected following the tradition in [19], since this assumption covers most cases that we may encounter in real-world applications.

We take all the compared representatives from the last sub-section into this part, adding three more variants of the proposed method with different d of d = 10^3, d = 10^5 and d = 10^6. Then we can both see the comparisons with the representatives and observe the influence of the positive scaling factor d. Several simulations are conducted in which we generate datasets of A, B with N = 1000 and the obtained results are averaged over 10 runs. We independently evaluate the effects of Error_{R_X} and Error_{t_X} imposed on Error. The relationship between Error_{R_X} and Error is depicted in Fig. 3, while that between Error_{t_X} and Error is presented in Fig. 4. These relationships are demonstrated in the form of log plots. We can see that with increasing errors in rotation and translation, the errors in the computed X all rise to a large extent. One can see in the magnified plot of Fig. 3 that the optimization methods achieve the best accuracy, but the proposed method also obtains comparable estimates. It is shown that with various values of d, the performance of the proposed method differs quite a lot. Fig. 4 indicates that with d = 10^3, the evaluated errors on translation are the worst among all compared ones. However, with larger d this situation is significantly improved; the magnified image in Fig. 4 shows that when d = 10^5 and d = 10^6, the errors of the proposed method are quite close to the least ones. As we verified in the last sub-section that the proposed method has the fastest execution speed, it is shown that for the studied cases the developed method can be regarded as one balancing accuracy and computation speed.


δR_{X,SO(4)} δR_{X,SO(4)}^T
= [ δR_X                                δt_X/d ] [ δR_X^T     −R_X^T δt_X/d − δR_X^T t_X/d ]
  [ −δt_X^T R_X/d − t_X^T δR_X/d        0      ] [ δt_X^T/d   0                            ]
= [ δR_X δR_X^T + (1/d²) δt_X δt_X^T                  −(1/d)(δR_X R_X^T δt_X + δR_X δR_X^T t_X) ]    (65)
  [ −(1/d)(δt_X^T R_X δR_X^T + t_X^T δR_X δR_X^T)     (1/d²)(δt_X^T R_X R_X^T δt_X + δt_X^T R_X δR_X^T t_X + t_X^T δR_X R_X^T δt_X + t_X^T δR_X δR_X^T t_X) ]

Σ_{R_X,SO(4)} = ⟨δR_{X,SO(4)} δR_{X,SO(4)}^T⟩
= [ Σ_{R_X} + (1/d²) Σ_{t_X}                   −(1/d)(⟨δt_X × δθ_X⟩ + Σ_{R_X} t_X)          ]    (66)
  [ −(1/d)(⟨δt_X × δθ_X⟩ + Σ_{R_X} t_X)^T      (1/d²)(t_X^T ⟨δt_X × δθ_X⟩ + t_X^T Σ_{R_X} t_X) ]
≈ [ Σ_{R_X} + (1/d²) Σ_{t_X}      −(1/d) Σ_{R_X} t_X        ]
  [ −(1/d) t_X^T Σ_{R_X}          (1/d²) t_X^T Σ_{R_X} t_X  ]
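The approximate form of (66) maps the SE(3) covariances into a single 4×4 SO(4)-side covariance. A small sketch using only the approximate right-hand side, with the cross term ⟨δt_X × δθ_X⟩ neglected (the numeric inputs are illustrative):

```python
import numpy as np

def so4_covariance(Sigma_R, Sigma_t, t, d):
    """Assemble the approximate 4x4 covariance of (66) from
    Sigma_R (3x3), Sigma_t (3x3), translation t (3,) and scale d."""
    top = np.hstack([Sigma_R + Sigma_t / d**2, (-Sigma_R @ t / d).reshape(3, 1)])
    bottom = np.hstack([-t @ Sigma_R / d, [t @ Sigma_R @ t / d**2]]).reshape(1, 4)
    return np.vstack([top, bottom])

Sigma_R = np.diag([1e-6, 2e-6, 3e-6])
Sigma_t = np.diag([1e-4, 1e-4, 1e-4])
S = so4_covariance(Sigma_R, Sigma_t, np.array([0.1, 0.2, 0.3]), 1e5)
print(S.shape, np.allclose(S, S.T))  # → (4, 4) True
```

Symmetry of the result follows from the symmetry of Σ_{R_X}: the top-right block is the transpose of the bottom-left one.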

Fig. 3. The sensitivity of errors subject to input rotation noises.

Fig. 4. The sensitivity of errors subject to input translation noises.

C. Simulations on Uncertainty Descriptions

The uncertainty description of the hand-eye calibration problem AX = XB was first studied, iteratively, by Nguyen et al. [19]. It works very well with both synthetic and real-world data. However, it still has its drawbacks:
1) The covariance of the rotation R_X is independently estimated from R_A R_X = R_X R_B, but in fact the accuracy of R_X is also affected by t_A and t_B.
2) The covariances of R_X and t_X must be computed iteratively, while how many iterations are sufficient to provide accurate enough results is still an unsolved problem.

Hence the covariance should also be decided by t_A and t_B and, if possible, not require iterations. The proposed SO(4) method in this paper, however, simultaneously estimates R_X and t_X together, and can also generate the analytical probabilistic information within several deterministic steps, considering the tightly coupled relationship inside AX = XB. Let us define ξ_{R_X,x}, ξ_{R_X,y} and ξ_{R_X,z} as the errors in rotation R_X around the X, Y, Z axes, and ξ_{t_X,x}, ξ_{t_X,y} and ξ_{t_X,z} as the errors in translation t_X along the X, Y, Z axes, respectively. Given the covariances Σ_{R_X}, Σ_{t_X}, the covariance of the equivalent SO(4) transformation can be computed by (66), where we have

δR_{X,SO(4)} = [ δR_X                              δt_X/d ]    (67)
               [ −δt_X^T R_X/d − t_X^T δR_X/d      0      ]

and δR_{X,SO(4)} δR_{X,SO(4)}^T is simplified from (65) to (66) according to [51]

Ṙ_X = −[ω]_× R_X
δR_X = −[δθ_X]_× R_X    (68)
δR_X R_X^T = −[δθ_X]_×

in which ω is the angular velocity vector and θ_X denotes the small-angle rotation of R_X. Therefore, with Σ_{R_{A_i}}, Σ_{t_{A_i}}, Σ_{R_{B_i}}, Σ_{t_{B_i}}, we can compute Σ_{R_{A_i},SO(4)} and Σ_{R_{B_i},SO(4)}. In this paper, we consider that the system errors in each measurement step are identical, so we have

Σ_{R_{A_i}} = Σ_{R_A}, Σ_{t_{A_i}} = Σ_{t_A}
Σ_{R_{B_i}} = Σ_{R_B}, Σ_{t_{B_i}} = Σ_{t_B}    (69)

Now we conduct the same simulation as that provided in the Python open-source codes of Nguyen et al. [19] (https://github.com/dinhhuy2109/python-cope,


Fig. 5. The 2D covariance projections of the solutions to hand-eye calibration using the proposed method and that of Nguyen et al. The dashed grey lines
indicate the mean bounds of the simulated statistics. The green dashed lines are from the solution of Nguyen et al. while the solid black ones are from our
proposed algorithm. The discrete points in blue reflect the simulated samples.
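The simplification from (65) to (66) rests on the small-angle relation δR_X R_X^T = −[δθ_X]_× in (68). It can be checked numerically; the sketch below builds an exact perturbed rotation via the Rodrigues formula and verifies the relation to first order:

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v]_x."""
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

def rot(w):
    """Rotation matrix exp([w]_x) via the Rodrigues formula."""
    th = np.linalg.norm(w)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

R = rot(np.array([0.3, -0.5, 0.7]))          # reference rotation
dtheta = np.array([1e-4, -2e-4, 1.5e-4])     # small perturbation angles

dR = rot(-dtheta) @ R - R                    # exact perturbation of R
print(np.allclose(dR @ R.T, -skew(dtheta), atol=1e-6))  # → True
```

The residual is of second order in ‖δθ_X‖, which is why the cross terms can be dropped when forming (66).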

./examples/test_axxb_covariance.py). The input covariances are

Σ_{R_A} = 10^{−10} I,  Σ_{t_A} = 10^{−10} I

Σ_{R_B} = [  4.15625   −2.88693   −0.60653
            −2.88693    32.0952   −0.14482
            −0.60653   −0.14482    1.43937 ] × 10^{−5}    (70)

Σ_{t_B} = [ 19.52937    2.12627   −1.06675
             2.12627    4.44314426  0.38679
            −1.06675    0.38679    2.13070 ] × 10^{−5}

The simulation is carried out 10000 times, generating the randomly perturbed {A}, {B}, and each set contains 60 measurements. Σ_b is computed according to the simulated statistics for {A}, {B}. The statistical covariance bounds of the estimated R_X and t_X are then logged. Using the method of Nguyen et al. and our proposed method, the 2D covariance projections are plotted in Fig. 5. One can see that both methods estimate the covariance correctly, while our method achieves very slightly smaller covariance bounds. This reflects that our proposed method has reached the accuracy of Nguyen et al. for uncertainty descriptions. What needs to be pointed out is that the proposed method is fully analytical, rather than the iterative solution in the method of Nguyen et al. As analytical methods are always much faster than iterative ones, this simulation indirectly reflects that the proposed method can both correctly estimate the transformation and determine precise covariance information within a short computational period, which is beneficial to applications with high demands on real-time systems with rigorous scheduling logics and timing.

D. Extension to the Extrinsic Calibration between a 3D Laser Scanner and a Fisheye Camera

In this sub-section, the developed 4DPA method is employed to solve the extrinsic calibration problem between a 3D laser scanner and a fisheye camera, mounted rigidly on the experimental platform shown in Fig. 7. This platform contains a high-end Hokuyo UST-10LX 2D laser scanner spun by a powerful Dynamixel MX-28T servo controlled through the serial ports by the onboard Nvidia TX1 computer with a graphics processing unit (GPU). It also contains an Intel Realsense T265 fisheye camera with a resolution of 848×800 and a frame rate of 30 fps, along with an onboard factory-calibrated inertial measurement unit (IMU). The spin mechanism and the feedback of the internal encoder of the servo guarantee the seamless stitching of successive laser scans, which produces highly accurate 3D scene reconstructions.

Fig. 7. The experimental platform equipped with a 3D laser scanner and a fisheye camera, along with other processing devices.


Fig. 6. The reconstructed scenes using the presented 3D laser scanner. They are later used for pose estimation of the laser scanner frame with ICP.

The single sensor, or the combination of a laser scanner and a camera, is of great importance for scene measurement and reconstruction [52], [53], [54]. However, due to inevitable installation misalignments, the extrinsic calibration between the laser scanner and the camera should be performed for reliable measurement accuracy. Several algorithms have recently been proposed to deal with the calibration issues of these sensors [55], [56], [57]. These methods in fact require standard objects, like large chessboards, to obtain satisfactory results. We here extend our method to solve this extrinsic calibration without the need for any other standard reference systems. The sensor signal flowchart can be seen in Fig. 8.

Fig. 8. The signal flowchart in the extrinsic calibration between the 3D laser scanner and fisheye camera using the proposed algorithm.

For the developed system, we can gather three sources of data, i.e. images from the fisheye camera, inertial readings of angular rate and acceleration, and 3D laser scans. At the first stage, the fisheye camera and IMU measurements are processed via feature extraction [50] and navigation integral mechanisms [58], respectively. Then they are integrated together for the camera pose, denoted as T_{A_i} with index i, using the method in [59]. The pose of the 3D laser scanner, denoted as T_{B_i} with index i, is computed via 3D ICP [33], as indicated in Fig. 6. As the camera and laser-scanner poses have output frequencies of 200 Hz and 1 Hz respectively, the synchronization between them is conducted by the continuous linear quaternion interpolation that we developed recently [43]. Then, using the properly synced T_{A_i} and T_{B_i}, we are able to form the proposed hand-eye calibration principle with the entry-point equation in (6). With the procedures shown in Algorithm 1, where d is set to d = 10^5 empirically, the extrinsic parameters, i.e. the rotation and translation between the laser scanner and the fisheye camera, are calculated.

Fig. 9. The projected XY trajectories before and after the extrinsic calibration between 3D laser scanner and fisheye camera.

Then these parameters are applied to the developed platform for 3D trajectory verification using the V-LOAM method [60]. We put the system into measurement mode and then start moving it from origin to origin. Then, when computing the
TABLE IV
TRAJECTORY ERRORS BEFORE AND AFTER THE EXTRINSIC CALIBRATION USING THE PROPOSED METHOD

Experiment Before: X(m) After: X(m) Before: Y(m) After: Y(m) Before: Z(m) After: Z(m)
1 1.997 0.725 1.803 0.476 0.828 0.379
2 2.278 1.080 1.722 0.997 1.322 0.763
3 1.463 0.605 1.825 0.583 1.199 0.691
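Section IV-D synchronizes the 200 Hz camera poses with the 1 Hz scanner poses by quaternion interpolation. The exact scheme is the authors' continuous linear quaternion interpolation [43]; the idea can be sketched with ordinary spherical linear interpolation (slerp) between the two camera quaternions bracketing each scan timestamp (the timestamps and quaternions below are invented for illustration):

```python
import numpy as np

def slerp(q0, q1, alpha):
    """Spherical linear interpolation between unit quaternions q0, q1."""
    dot = float(np.dot(q0, q1))
    if dot < 0.0:            # take the short arc
        q1, dot = -q1, -dot
    dot = min(dot, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-8:         # nearly identical: fall back to linear
        q = (1 - alpha) * q0 + alpha * q1
    else:
        q = (np.sin((1 - alpha) * theta) * q0 + np.sin(alpha * theta) * q1) / np.sin(theta)
    return q / np.linalg.norm(q)

# Interpolate a 200 Hz quaternion stream at a scan timestamp t between samples.
q0 = np.array([1.0, 0.0, 0.0, 0.0])                # camera pose at t0 = 0.000 s
q1 = np.array([0.9990482, 0.0436194, 0.0, 0.0])    # camera pose at t1 = 0.005 s (~5 deg about x)
t = 0.002
q_t = slerp(q0, q1, (t - 0.0) / (0.005 - 0.0))
print(round(float(np.linalg.norm(q_t)), 6))  # → 1.0
```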


trajectories with uncalibrated and calibrated data, we can find that the trajectory after the calibration has much smaller odometric errors (see Fig. 9). In later periods, the same experiment is repeated twice. The detailed statistics of the trajectory errors are presented in Table IV, containing results on each of the X, Y, Z axes. One can see that the errors have been significantly reduced after calibration, which indicates the effectiveness of the proposed calibration scheme in real scene-measurement applications. Also, the results of the proposed 4DPA method for hand-eye calibration are affected by the value of d, as described in the previous sections. Therefore, a study of this influence is conducted using the data of experiment 3 (see Table V).

TABLE V
THE VERIFIED 3D TRAJECTORY ERRORS AFTER HAND-EYE CALIBRATION WITH DIFFERENT VALUES OF d (EXPERIMENT 3)

| d      | X(m)  | Y(m)  | Z(m)  |
| 1×10^3 | 2.068 | 1.275 | 2.342 |
| 1×10^4 | 1.472 | 0.637 | 1.944 |
| 1×10^5 | 1.463 | 0.605 | 1.825 |
| 1×10^6 | 1.462 | 0.600 | 1.799 |
| 1×10^7 | 1.462 | 0.599 | 1.798 |

We tune d from 1×10^3 to 1×10^7. The errors indicate that the chosen value d = 1×10^5 in this sub-section results in sufficiently accurate estimates, and with larger values of d the error bounds almost reach their limits. For small values of d, we can see that the calibration cannot be dealt with accurately. The reason is that the approximation in (51) requires a large d for more precise computation (but not too large, see Section III-D). The optimal dynamic selection of the parameter d will be our next task in the near future.

V. CONCLUSION

This paper studies the classical hand-eye calibration problem AX = XB by exploiting a new generalized method on SO(4). The investigated 4D Procrustes analysis provides very useful closed-form results for hand-eye calibration. Within this framework, the uncertainty descriptions of the obtained transformations can be easily computed. It is verified that the proposed method achieves better accuracy and much less computation time than representative methods on real-world datasets. The proposed uncertainty descriptions for the 4×4 matrices are also universal to other similar problems like spacecraft attitude determination [29] and 3D registration [32]. We also notice that the Procrustes analysis on SO(n) may be of benefit for solving the generalized hand-eye problem AX = XB on SE(n), and this is going to be discussed in our further works.

APPENDIX A
SOME CLOSED-FORM RESULTS

A. Analytical Forms of Some Fundamental Matrices

Taking c_1 = P_1(σ)σ as an example, one can explicitly write out

c_1 = ( ap − bq − cr − ds
        aq + bp + cs − dr
        ar + cp − bs + dq
        as + br − cq + dp )

It is easy to verify that c_1 = P_1(σ)σ. Similar factorizations can then be established for c_2, c_3, c_4 and r_1, r_2, r_3, r_4 respectively, generating the following results:

P_1(σ) = (1/√2) [  p  −q  −r  −s    a  −b  −c  −d
                   q   p   s  −r    b   a  −d   c
                   r  −s   p   q    c   d   a  −b
                   s   r  −q   p    d  −c   b   a ]

P_2(σ) = (1/√2) [ −q  −p   s  −r   −b  −a  −d   c
                   p  −q   r   s    a  −b   c   d
                  −s  −r  −q   p    d  −c  −b  −a
                   r  −s  −p  −q   −c  −d   a  −b ]

P_3(σ) = (1/√2) [ −r  −s  −p   q   −c   d  −a  −b
                   s  −r  −q  −p   −d  −c  −b   a
                   p   q  −r   s    a   b  −c   d
                  −q   p  −s  −r    b  −a  −d  −c ]

P_4(σ) = (1/√2) [ −s   r  −q  −p   −d  −c   b  −a
                  −r  −s   p  −q    c  −d  −a  −b
                   q  −p  −s  −r   −b   a  −d  −c
                   p   q   r  −s    a   b   c  −d ]

Q_1(σ) = (1/√2) [  p  −q  −r  −s    a  −b  −c  −d
                  −q  −p   s  −r   −b  −a  −d   c
                  −r  −s  −p   q   −c   d  −a  −b
                  −s   r  −q  −p   −d  −c   b  −a ]

Q_2(σ) = (1/√2) [  q   p   s  −r    b   a  −d   c
                   p  −q   r   s    a  −b   c   d
                   s  −r  −q  −p   −d  −c  −b   a
                  −r  −s   p  −q    c  −d  −a  −b ]

Q_3(σ) = (1/√2) [  r  −s   p   q    c   d   a  −b
                  −s  −r  −q   p    d  −c  −b  −a
                   p   q  −r   s    a   b  −c   d
                   q  −p  −s  −r   −b   a  −d  −c ]

Q_4(σ) = (1/√2) [  s   r  −q   p    d  −c   b   a
                   r  −s  −p  −q   −c  −d   a  −b
                  −q   p  −s  −r    b  −a  −d  −c
                   p   q   r  −s    a   b   c  −d ]

The results for J_{jk,i} can then be computed using symbolic computation tools, e.g. MATLAB and Mathematica:

J_{11,i} = [ e11 − z11   e12 + z21   e13 + z31   e14 + z41
             e12 + z21   z11 − e11   e14 − z41   z31 − e13
             e13 + z31   z41 − e14   z11 − e11   e12 − z21
             e14 + z41   e13 − z31   z21 − e12   z11 − e11 ]

J_{12,i} = [ e21 − z21   e22 − z11   e23 + z41   e24 − z31
             e22 − z11   z21 − e21   e24 + z31   z41 − e23
             e23 − z41   z31 − e24   −e21 − z21  e22 − z11
             e24 + z31   e23 + z41   z11 − e22   −e21 − z21 ]

J_{13,i} = [ e31 − z31   e32 − z41   e33 − z11   e34 + z21
             e32 + z41   −e31 − z31  e34 + z21   z11 − e33
             e33 − z11   z21 − e34   z31 − e31   e32 + z41
             e34 − z21   e33 − z11   z41 − e32   −e31 − z31 ]

J_{14,i} = [ e41 − z41   e42 + z31   e43 − z21   e44 − z11
             e42 − z31   −e41 − z41  e44 − z11   z21 − e43
             e43 + z21   z11 − e44   −e41 − z41  e42 + z31
             e44 − z11   e43 + z21   z31 − e42   z41 − e41 ]


$$
J_{21,i} = \begin{bmatrix}
e_{12}-z_{12} & z_{22}-e_{11} & e_{14}+z_{32} & z_{42}-e_{13} \\
z_{22}-e_{11} & z_{12}-e_{12} & -e_{13}-z_{42} & z_{32}-e_{14} \\
z_{32}-e_{14} & z_{42}-e_{13} & e_{12}+z_{12} & e_{11}-z_{22} \\
e_{13}+z_{42} & -e_{14}-z_{32} & z_{22}-e_{11} & e_{12}+z_{12}
\end{bmatrix}
$$
$$
J_{22,i} = \begin{bmatrix}
e_{22}-z_{22} & -e_{21}-z_{12} & e_{24}+z_{42} & -e_{23}-z_{32} \\
-e_{21}-z_{12} & z_{22}-e_{22} & z_{32}-e_{23} & z_{42}-e_{24} \\
-e_{24}-z_{42} & z_{32}-e_{23} & e_{22}-z_{22} & e_{21}-z_{12} \\
e_{23}+z_{32} & z_{42}-e_{24} & z_{12}-e_{21} & e_{22}-z_{22}
\end{bmatrix}
$$
$$
J_{23,i} = \begin{bmatrix}
e_{32}-z_{32} & -e_{31}-z_{42} & e_{34}-z_{12} & z_{22}-e_{33} \\
z_{42}-e_{31} & -e_{32}-z_{32} & z_{22}-e_{33} & z_{12}-e_{34} \\
-e_{34}-z_{12} & z_{22}-e_{33} & e_{32}+z_{32} & e_{31}+z_{42} \\
e_{33}-z_{22} & -e_{34}-z_{12} & z_{42}-e_{31} & e_{32}-z_{32}
\end{bmatrix}
$$
$$
J_{24,i} = \begin{bmatrix}
e_{42}-z_{42} & z_{32}-e_{41} & e_{44}-z_{22} & -e_{43}-z_{12} \\
-e_{41}-z_{32} & -e_{42}-z_{42} & -e_{43}-z_{12} & z_{22}-e_{44} \\
z_{22}-e_{44} & z_{12}-e_{43} & e_{42}-z_{42} & e_{41}+z_{32} \\
e_{43}-z_{12} & z_{22}-e_{44} & z_{32}-e_{41} & e_{42}+z_{42}
\end{bmatrix}
$$
$$
J_{31,i} = \begin{bmatrix}
e_{13}-z_{13} & z_{23}-e_{14} & z_{33}-e_{11} & e_{12}+z_{43} \\
e_{14}+z_{23} & e_{13}+z_{13} & -e_{12}-z_{43} & z_{33}-e_{11} \\
z_{33}-e_{11} & z_{43}-e_{12} & z_{13}-e_{13} & -e_{14}-z_{23} \\
z_{43}-e_{12} & e_{11}-z_{33} & z_{23}-e_{14} & e_{13}+z_{13}
\end{bmatrix}
$$
$$
J_{32,i} = \begin{bmatrix}
e_{23}-z_{23} & -e_{24}-z_{13} & z_{43}-e_{21} & e_{22}-z_{33} \\
e_{24}-z_{13} & e_{23}+z_{23} & z_{33}-e_{22} & z_{43}-e_{21} \\
-e_{21}-z_{43} & z_{33}-e_{22} & -e_{23}-z_{23} & -e_{24}-z_{13} \\
z_{33}-e_{22} & e_{21}+z_{43} & z_{13}-e_{24} & e_{23}-z_{23}
\end{bmatrix}
$$
$$
J_{33,i} = \begin{bmatrix}
e_{33}-z_{33} & -e_{34}-z_{43} & -e_{31}-z_{13} & e_{32}+z_{23} \\
e_{34}+z_{43} & e_{33}-z_{33} & z_{23}-e_{32} & z_{13}-e_{31} \\
-e_{31}-z_{13} & z_{23}-e_{32} & z_{33}-e_{33} & z_{43}-e_{34} \\
-e_{32}-z_{23} & e_{31}-z_{13} & z_{43}-e_{34} & e_{33}-z_{33}
\end{bmatrix}
$$
$$
J_{34,i} = \begin{bmatrix}
e_{43}-z_{43} & z_{33}-e_{44} & -e_{41}-z_{23} & e_{42}-z_{13} \\
e_{44}-z_{33} & e_{43}-z_{43} & -e_{42}-z_{13} & z_{23}-e_{41} \\
z_{23}-e_{41} & z_{13}-e_{42} & -e_{43}-z_{43} & z_{33}-e_{44} \\
-e_{42}-z_{13} & e_{41}+z_{23} & z_{33}-e_{44} & e_{43}+z_{43}
\end{bmatrix}
$$
$$
J_{41,i} = \begin{bmatrix}
e_{14}-z_{14} & e_{13}+z_{24} & z_{34}-e_{12} & z_{44}-e_{11} \\
z_{24}-e_{13} & e_{14}+z_{14} & e_{11}-z_{44} & z_{34}-e_{12} \\
e_{12}+z_{34} & z_{44}-e_{11} & e_{14}+z_{14} & -e_{13}-z_{24} \\
z_{44}-e_{11} & -e_{12}-z_{34} & z_{24}-e_{13} & z_{14}-e_{14}
\end{bmatrix}
$$
$$
J_{42,i} = \begin{bmatrix}
e_{24}-z_{24} & e_{23}-z_{14} & z_{44}-e_{22} & -e_{21}-z_{34} \\
-e_{23}-z_{14} & e_{24}+z_{24} & e_{21}+z_{34} & z_{44}-e_{22} \\
e_{22}-z_{44} & z_{34}-e_{21} & e_{24}-z_{24} & -e_{23}-z_{14} \\
z_{34}-e_{21} & z_{44}-e_{22} & z_{14}-e_{23} & -e_{24}-z_{24}
\end{bmatrix}
$$
$$
J_{43,i} = \begin{bmatrix}
e_{34}-z_{34} & e_{33}-z_{44} & -e_{32}-z_{14} & z_{24}-e_{31} \\
z_{44}-e_{33} & e_{34}-z_{34} & e_{31}+z_{24} & z_{14}-e_{32} \\
e_{32}-z_{14} & z_{24}-e_{31} & e_{34}+z_{34} & z_{44}-e_{33} \\
-e_{31}-z_{24} & -e_{32}-z_{14} & z_{44}-e_{33} & -e_{34}-z_{34}
\end{bmatrix}
$$
$$
J_{44,i} = \begin{bmatrix}
e_{44}-z_{44} & e_{43}+z_{34} & -e_{42}-z_{24} & -e_{41}-z_{14} \\
-e_{43}-z_{34} & e_{44}-z_{44} & e_{41}-z_{14} & z_{24}-e_{42} \\
e_{42}+z_{24} & z_{14}-e_{41} & e_{44}-z_{44} & z_{34}-e_{43} \\
-e_{41}-z_{14} & z_{24}-e_{42} & z_{34}-e_{43} & z_{44}-e_{44}
\end{bmatrix}
$$
where $e_{jk}, z_{jk},\ j,k = 1,2,3,4$, are the matrix entries of $E_i$ and $Z_i$, respectively. Note that these computation procedures can also be found at https://github.com/zarathustr/hand_eye_SO4.

B. Matrix Determinant Property

Given an arbitrary square matrix
$$
M = \begin{bmatrix} A & B \\ C & D \end{bmatrix},
$$
if $D$ is invertible, then the determinant of $M$ is
$$
\det(M) = \det(D)\det\!\left(A - BD^{-1}C\right).
$$
Inserting the above result into
$$
\det\!\left(\lambda_{W,\max} I - W\right) = \det \begin{bmatrix} \lambda_{W,\max} I & -K \\ -K^{T} & \lambda_{W,\max} I \end{bmatrix}
$$
gives (26).

ACKNOWLEDGMENT

This research was supported by the Shenzhen Science, Technology and Innovation Commission (SZSTI) under grant JCYJ20160401100022706, and in part by the National Natural Science Foundation of China under grants No. U1713211 and 41604025. The authors thank Dr. Zhiqiang Zhang from the University of Leeds for his detailed explanation of the implemented codes in [17]. The authors are grateful to Dr. Huy Nguyen from Nanyang Technological University, Singapore, for the discussion on his useful codes for the uncertainty descriptions of hand-eye calibration [19]. The authors would also like to thank Dr. Dario Modenini from the University of Bologna and Prof. Daniel Condurache from Gheorghe Asachi Technical University of Iasi for constructive communications. The codes and data of this paper have been archived at https://github.com/zarathustr/hand_eye_SO4 and https://github.com/zarathustr/hand_eye_data.

REFERENCES

[1] K. Koide and E. Menegatti, "General Hand–Eye Calibration Based on Reprojection Error Minimization," IEEE Robot. Autom. Lett., vol. 4, no. 2, pp. 1021–1028, 2019.
[2] Y.-T. Liu, Y.-A. Zhang, and M. Zeng, "Sensor to Segment Calibration for Magnetic and Inertial Sensor Based Motion Capture Systems," Measurement, 2019. DOI: 10.1016/j.measurement.2019.03.048
[3] Z. Q. Zhang, "Cameras and Inertial/Magnetic Sensor Units Alignment Calibration," IEEE Trans. Instrum. Meas., vol. 65, no. 6, pp. 1495–1502, 2016.
[4] D. Modenini, "Attitude Determination from Ellipsoid Observations: A Modified Orthogonal Procrustes Problem," AIAA J. Guid. Control Dyn., vol. 41, no. 10, pp. 2320–2325, 2018.
[5] R. Y. Tsai and R. K. Lenz, "A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration," IEEE Trans. Robot. Autom., vol. 5, no. 3, pp. 345–358, 1989.
[6] Y. C. Shiu and S. Ahmad, "Calibration of Wrist-Mounted Robotic Sensors by Solving Homogeneous Transform Equations of the Form AX = XB," IEEE Trans. Robot. Autom., vol. 5, no. 1, pp. 16–29, 1989.
[7] F. C. Park and B. J. Martin, "Robot Sensor Calibration: Solving AX = XB on the Euclidean Group," IEEE Trans. Robot. Autom., vol. 10, no. 5, pp. 717–721, 1994.
[8] R. Horaud and F. Dornaika, "Hand-eye Calibration," Int. J. Robot. Research, vol. 14, no. 3, pp. 195–210, 1995.


[9] J. C. Chou and M. Kamel, "Finding the Position and Orientation of a Sensor on A Robot Manipulator Using Quaternions," Int. J. Robot. Research, vol. 10, no. 3, pp. 240–254, 1991.
[10] Y.-C. Lu and J. Chou, "Eight-space Quaternion Approach for Robotic Hand-eye Calibration," IEEE ICSMC 2002, pp. 3316–3321, 2002.
[11] K. Daniilidis, "Hand-eye Calibration Using Dual Quaternions," Int. J. Robot. Research, vol. 18, no. 3, pp. 286–298, 1999.
[12] N. Andreff, R. Horaud, and B. Espiau, "Robot Hand-eye Calibration Using Structure-from-Motion," Int. J. Robot. Research, vol. 20, no. 3, pp. 228–248, 2001.
[13] D. Condurache and A. Burlacu, "Orthogonal Dual Tensor Method for Solving the AX = XB Sensor Calibration Problem," Mech. Machine Theory, vol. 104, pp. 382–404, 2016.
[14] S. Gwak, J. Kim, and F. C. Park, "Numerical Optimization on the Euclidean Group with Applications to Camera Calibration," IEEE Trans. Robot. Autom., vol. 19, no. 1, pp. 65–74, 2003.
[15] J. Heller, D. Henrion, and T. Pajdla, "Hand-eye and Robot-World Calibration by Global Polynomial Optimization," IEEE ICRA 2014, pp. 3157–3164, 2014.
[16] Z. Zhao, "Simultaneous Robot-World and Hand-eye Calibration by the Alternative Linear Programming," Pattern Recogn. Lett., 2018. DOI: 10.1016/j.patrec.2018.08.023
[17] Z. Zhang, L. Zhang, and G. Z. Yang, "A Computationally Efficient Method for Hand-eye Calibration," Int. J. Comput. Assist. Radio. Surg., vol. 12, no. 10, pp. 1775–1787, 2017.
[18] H. Song, Z. Du, W. Wang, and L. Sun, "Singularity Analysis for the Existing Closed-Form Solutions of the Hand-eye Calibration," IEEE Access, vol. 6, pp. 75407–75421, 2018.
[19] H. Nguyen and Q.-C. Pham, "On the Covariance of X in AX = XB," IEEE Trans. Robot., vol. 34, no. 6, pp. 1651–1658, 2018.
[20] Y. Tian, W. R. Hamel, and J. Tan, "Accurate Human Navigation Using Wearable Monocular Visual and Inertial Sensors," IEEE Trans. Instrum. Meas., vol. 63, no. 1, pp. 203–213, 2014.
[21] A. Alamri, M. Eid, R. Iglesias, S. Shirmohammadi, and A. E. Saddik, "Haptic Virtual Rehabilitation Exercises for Post-stroke Diagnosis," IEEE Trans. Instrum. Meas., vol. 57, no. 9, pp. 1–10, 2007.
[22] S. Liwicki, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, "Euler Principal Component Analysis," Int. J. Comput. Vision, vol. 101, no. 3, pp. 498–518, 2013.
[23] A. Bartoli, D. Pizarro, and M. Loog, "Stratified Generalized Procrustes Analysis," Int. J. Comput. Vision, vol. 101, no. 2, pp. 227–253, 2013.
[24] L. Igual, X. Perez-Sala, S. Escalera, C. Angulo, and F. De La Torre, "Continuous Generalized Procrustes Analysis," Pattern Recogn., vol. 47, no. 2, pp. 659–671, 2014.
[25] C. I. Mosier, "Determining A Simple Structure When Loadings for Certain Tests Are Known," Psychometrika, vol. 4, no. 2, pp. 149–162, 1939.
[26] B. F. Green, "The Orthogonal Approximation of An Oblique Structure in Factor Analysis," Psychometrika, vol. 17, no. 4, pp. 429–440, 1952.
[27] M. W. Browne, "On Oblique Procrustes Rotation," Psychometrika, vol. 32, no. 2, pp. 125–132, 1967.
[28] G. Wahba, "A Least Squares Estimate of Satellite Attitude," SIAM Rev., vol. 7, no. 3, p. 409, 1965.
[29] M. D. Shuster and S. D. Oh, "Three-axis Attitude Determination from Vector Observations," AIAA J. Guid. Control Dyn., vol. 4, no. 1, pp. 70–77, 1981.
[30] R. Horaud, F. Forbes, M. Yguel, G. Dewaele, and J. Zhang, "Rigid and Articulated Point Registration with Expectation Conditional Maximization," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 3, pp. 587–602, 2011.
[31] N. Duta, A. K. Jain, and M. P. Dubuisson-Jolly, "Automatic Construction of 2D Shape Models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 5, pp. 433–446, 2001.
[32] K. S. Arun, T. S. Huang, and S. D. Blostein, "Least-Squares Fitting of Two 3-D Point Sets," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-9, no. 5, pp. 698–700, 1987.
[33] P. J. Besl and N. D. McKay, "A Method for Registration of 3-D Shapes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 239–256, 1992.
[34] M. Wang and A. Tayebi, "Hybrid Pose and Velocity-bias Estimation on SE(3) Using Inertial and Landmark Measurements," IEEE Trans. Autom. Control, 2018. DOI: 10.1109/TAC.2018.2879766
[35] J. Markdahl and X. Hu, "Exact Solutions to A Class of Feedback Systems on SO(n)," Automatica, vol. 63, pp. 138–147, 2016.
[36] J. D. Biggs and H. Henninger, "Motion Planning on a Class of 6-D Lie Groups via A Covering Map," IEEE Trans. Autom. Control, 2018. DOI: 10.1109/TAC.2018.2885241
[37] F. Thomas, "Approaching Dual Quaternions from Matrix Algebra," IEEE Trans. Robot., vol. 30, no. 5, pp. 1037–1048, 2014.
[38] W. S. Massey, "Cross Products of Vectors in Higher Dimensional Euclidean Spaces," Amer. Math. Monthly, vol. 90, no. 10, pp. 697–701, 1983.
[39] J. Wu, Z. Zhou, B. Gao, R. Li, Y. Cheng, and H. Fourati, "Fast Linear Quaternion Attitude Estimator Using Vector Observations," IEEE Trans. Auto. Sci. Eng., vol. 15, no. 1, pp. 307–319, 2018.
[40] J. Wu, M. Liu, Z. Zhou, and R. Li, "Fast Symbolic 3D Registration Solution," IEEE Trans. Auto. Sci. Eng., 2019. arXiv: 1805.08703
[41] I. Y. Bar-Itzhack, "New Method for Extracting the Quaternion from a Rotation Matrix," AIAA J. Guid. Control Dyn., vol. 23, no. 6, pp. 1085–1087, 2000.
[42] A. H. J. de Ruiter and J. R. Forbes, "On the Solution of Wahba's Problem on SO(n)," J. Astronautical Sci., pp. 734–763, 2014.
[43] J. Wu, "Optimal Continuous Unit Quaternions from Rotation Matrices," AIAA J. Guid. Control Dyn., vol. 42, no. 4, pp. 919–922, 2019.
[44] J. Wu, M. Liu, and Y. Qi, "Computationally Efficient Robust Algorithm for Generalized Sensor Calibration Problem AR = RB," IEEE Sensors J., 2019. DOI: 10.13140/RG.2.2.17632.74240
[45] P. Lourenço, B. J. Guerreiro, P. Batista, P. Oliveira, and C. Silvestre, "Uncertainty Characterization of The Orthogonal Procrustes Problem with Arbitrary Covariance Matrices," Pattern Recogn., vol. 61, pp. 210–220, 2017.
[46] R. L. Dailey, "Eigenvector Derivatives with Repeated Eigenvalues," AIAA J., vol. 27, no. 4, pp. 486–491, 1989.
[47] G. Chang, T. Xu, and Q. Wang, "Error Analysis of Davenport's q-method," Automatica, vol. 75, pp. 217–220, 2017.
[48] X.-S. Gao, X.-R. Hou, J. Tang, and H.-F. Cheng, "Complete Solution Classification for the Perspective-Three-Point Problem," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 8, pp. 930–943, 2003.
[49] T. Hamel and C. Samson, "Riccati Observers for the Nonstationary PnP Problem," IEEE Trans. Autom. Control, vol. 63, no. 3, pp. 726–741, 2018.
[50] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," Int. J. Comput. Vision, vol. 60, no. 2, pp. 91–110, 2004.
[51] R. Mahony, T. Hamel, and J. M. Pflimlin, "Nonlinear Complementary Filters on the Special Orthogonal Group," IEEE Trans. Autom. Control, vol. 53, no. 5, pp. 1203–1218, 2008.
[52] P. Payeur and C. Chen, "Registration of Range Measurements with Compact Surface Representation," IEEE Trans. Instrum. Meas., vol. 52, no. 5, pp. 1627–1634, 2003.
[53] L. Wei, C. Cappelle, and Y. Ruichek, "Camera/Laser/GPS Fusion Method for Vehicle Positioning under Extended NIS-Based Sensor Validation," IEEE Trans. Instrum. Meas., vol. 62, no. 11, pp. 3110–3122, 2013.
[54] A. Wan, J. Xu, D. Miao, and K. Chen, "An Accurate Point-Based Rigid Registration Method for Laser Tracker Relocation," IEEE Trans. Instrum. Meas., vol. 66, no. 2, pp. 254–262, 2017.
[55] Y. Zhuang, N. Jiang, H. Hu, and F. Yan, "3-D-Laser-Based Scene Measurement and Place Recognition for Mobile Robots in Dynamic Indoor Environments," IEEE Trans. Instrum. Meas., vol. 62, no. 2, pp. 438–450, 2013.
[56] Z. Hu, Y. Li, N. Li, and B. Zhao, "Extrinsic Calibration of 2-D Laser Rangefinder and Camera From Single Shot Based on Minimal Solution," IEEE Trans. Instrum. Meas., vol. 65, no. 4, pp. 915–929, 2016.
[57] S. Xie, D. Yang, K. Jiang, and Y. Zhong, "Pixels and 3-D Points Alignment Method for the Fusion of Camera and LiDAR Data," IEEE Trans. Instrum. Meas., 2018. DOI: 10.1109/TIM.2018.2879705
[58] Y. Wu, X. Hu, D. Hu, T. Li, and J. Lian, "Strapdown Inertial Navigation System Algorithms Based on Dual Quaternions," IEEE Trans. Aerosp. Elec. Syst., vol. 41, no. 1, pp. 110–132, 2005.
[59] N. Enayati, E. D. Momi, and G. Ferrigno, "A Quaternion-Based Unscented Kalman Filter for Robust Optical/Inertial Motion Tracking in Computer-Assisted Surgery," IEEE Trans. Instrum. Meas., vol. 64, no. 8, pp. 2291–2301, 2015.
[60] J. Zhang, M. Kaess, and S. Singh, "A Real-time Method for Depth Enhanced Visual Odometry," Auto. Robots, vol. 41, no. 1, pp. 31–43, 2017.


Jin Wu was born in May 1994 in Zhenjiang, China. He received the B.S. degree from the University of Electronic Science and Technology of China, Chengdu, China. He has been a research assistant in the Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, since 2018. His research interests include robot navigation, multi-sensor fusion, automatic control and mechatronics. He is a co-author of over 30 technical papers in representative journals and conference proceedings of IEEE, AIAA, IET, etc. Mr. Wu received the Outstanding Reviewer Award for the Asian Journal of Control. One of his papers published in IEEE Transactions on Automation Science and Engineering was selected as an ESI Highly Cited Paper by ISI Web of Science during 2017 to 2018. He is a member of IEEE.

Ming Liu received the B.A. degree in automation from Tongji University, Shanghai, China, in 2005, and the Ph.D. degree from the Department of Mechanical and Process Engineering, ETH Zürich, Zürich, Switzerland, in 2013, supervised by Prof. Roland Siegwart. During his master's study at Tongji University, he stayed one year with the University of Erlangen-Nürnberg and the Fraunhofer Institute IISB, Erlangen, Germany, as a Master Visiting Scholar.
He is currently with the Electronic and Computer Engineering and Computer Science and Engineering Departments and the Robotics Institute, The Hong Kong University of Science and Technology, Hong Kong, as an Assistant Professor. He is also a founding member of Shanghai Swing Automation Ltd., Co. He is coordinating and involved in NSF projects and National 863 Hi-Tech Plan projects in China. His research interests include dynamic environment modeling, deep learning for robotics, 3-D mapping, machine learning, and visual control.
Dr. Liu was a recipient of second place in the European Micro Aerial Vehicle Competition (EMAV'09) and two awards from the International Aerial Robot Competition (IARC'14) as a team member; the Best Student Paper Award as first author at MFI 2012 (IEEE International Conference on Multisensor Fusion and Information Integration); the Best Paper Award in Information as first author at the IEEE International Conference on Information and Automation (ICIA 2013); a Best Paper Award Finalist as co-author; the Best RoboCup Paper Award at IROS 2013 (IEEE/RSJ International Conference on Intelligent Robots and Systems); the Best Conference Paper Award at IEEE-CYBER 2015; Best Student Paper Finalist at RCAR 2015 (IEEE International Conference on Real-time Computing and Robotics); Best Student Paper Finalist at ROBIO 2015; the Best Student Paper Award at IEEE-ICAR 2017; the Best Paper in Automation Award at IEEE-ICIA 2017; the innovation contest Chunhui Cup Winning Award twice, in 2012 and 2013; and the Wu Wenjun AI Award in 2016. He was the Program Chair of IEEE RCAR 2016 and of the International Robotics Conference in Foshan 2017, and the Conference Chair of ICVS 2017. He has published many popular papers in top robotics journals including IEEE Transactions on Robotics, International Journal of Robotics Research and IEEE Transactions on Automation Science and Engineering. Dr. Liu is currently an Associate Editor for IEEE Robotics and Automation Letters. He is a Senior Member of IEEE.

Yuxiang Sun received the bachelor's degree from the Hefei University of Technology, Hefei, China, in 2009, the master's degree from the University of Science and Technology of China, Hefei, in 2012, and the Ph.D. degree from The Chinese University of Hong Kong, Hong Kong, in 2017. He is currently a Research Associate with the Robotics Institute, Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong. His current research interests include mobile robots, autonomous vehicles, deep learning, SLAM and navigation, and motion detection.
Dr. Sun is a recipient of the Best Student Paper Finalist Award at the IEEE ROBIO 2015.

Miaomiao Wang received his B.Sc. degree in control science and engineering from Huazhong University of Science and Technology, China, in 2013, and his M.Sc. degree in Control Engineering from Lakehead University, Canada, in 2015. He is currently a Ph.D. student and a research assistant in the Department of Electrical and Computer Engineering, Western University, Canada. He received the prestigious Ontario Graduate Scholarship (OGS) at Western University in 2018. His current research interests include multiagent systems and geometric estimation and control.
He has been an author of technical papers in flagship journals and conference proceedings including IEEE Transactions on Automatic Control, IEEE CDC, etc.
