Abstract— Wireless capsule endoscopy (WCE) offers a noninvasive investigation of the entire small intestine, which other conventional wired endoscopic instruments can barely reach. As a critical component of the capsule endoscopic examination, physicians need to keep track of the 3D trajectory that the capsule has traveled inside the lower abdomen to identify the positions of intestinal abnormalities after they are found in the video source. However, existing commercially available radio frequency (RF)-based localization systems can only provide inaccurate and discontinuous position estimates of the WCE due to the nonhomogeneity of body tissues and the highly complicated distribution of the intestinal tube. In this paper, we present a hybrid localization technique, which takes advantage of data fusion of multiple sensors inside the WCE, to enhance the positioning accuracy and construct the 3D trajectory of the WCE. The proposed hybrid technique extracts motion information of the WCE from the image sequences captured by the capsule's embedded visual sensor and combines it with the RF signal emitted by the wireless capsule to simultaneously localize the WCE and map the path it has traveled. Experimental results show that the proposed hybrid algorithm is able to reduce the average localization error of existing RF localization systems from 6.8 cm to less than 2.3 cm, and that a 3D map can be precisely constructed to represent the position of the WCE inside the small intestine.

Index Terms— Wireless capsule endoscopy (WCE), micro-robot positioning, hybrid RF localization, data fusion, motion tracking.

I. INTRODUCTION

A wireless endoscopic capsule is a swallowable micro-robot equipped with a tiny visual sensor, an LED illuminating system for capturing images, and a radio frequency (RF) transmission module for sending the images wirelessly to external receivers. Compared with conventional colonoscopy or enteroscopy, WCE offers a patient-friendly, noninvasive and painless investigation of the entire small intestine, where other wired video endoscopic instruments can barely reach. However, one significant drawback of this technology is that the capsule cannot localize itself during its several-hour journey through the digestive system, particularly inside the small intestine, whose length may vary between 5 and 9 meters. As a result, when an abnormality is detected in the video source, the physicians have only a limited idea of where the abnormality is located, which prevents follow-up therapeutic operations from being executed immediately. Therefore, a precise localization system for the endoscopic capsule would greatly enhance the benefits of WCE [3], [4].

During the past few years, many attempts have been made to develop accurate and reliable localization systems for the WCE [2], [5]–[10]. The most commonly used localization strategies include measuring the wireless RF signals emitted from the capsule [2], [5], [6], magnetically tracking a permanent magnet placed inside the capsule using external magnetic sensors [7], [8], and localizing a magnetic sensor inside the capsule using an external magnetic field.
First, RF localization depends on ranging metrics, such as Time of Arrival (ToA) and Received Signal Strength (RSS) of the RF signal. However, the inside of the human body is a nonhomogeneous and highly attenuating environment [2], [16], where ToA and RSS are often poorly related to the actual distance between the capsule and the body-mounted sensors. Second, there is no map of the inside of the human body. A clear path of the small intestine is critical in defining the positioning results. However, since the small intestine is up to 9 m long [17], [18] and is twisted inside the lower abdomen with a highly complicated distribution [19], it is very difficult to construct the path which the WCE has traveled. Third, validation of existing localization algorithms is challenging. After the capsule is swallowed by the patient, we have limited control of the endoscopic capsule [20]. Exploratory clinical procedures such as planar X-ray imaging and ultrasound cannot easily be used for verifying the positioning results because of their high cost and potential risk to the patient's health [21]. Last but most importantly, operating any experiment inside the human body is extremely difficult. Not only are there practical challenges in verifying the performance of localization algorithms, but human subjects also differ from one another, so a uniform platform is needed for comparative performance evaluation of different algorithms [22]. These challenges have made the design of an accurate localization system for the WCE inside the small intestine an unsolved engineering problem for more than 14 years. To enhance the performance of the existing localization systems, more sophisticated hybrid localization algorithms that are able to integrate all possible data sources are needed [23], [24].

Since the only two data sources we can get out of the WCE are the image sequences captured by the WCE's embedded visual sensor and the wireless RF signal used to transfer the images, an intuitive way to formulate the hybrid solution is through the combination of these two data sources. As mentioned above, the endoscopic capsule keeps taking pictures as it travels (up to 6 frames/sec), so it is possible to extract the motion information of the WCE by processing the video stream [25]–[27].

In this paper, we present a hybrid localization technique that combines the motion information extracted from the endoscopic images captured by the capsule's embedded visual sensor with the existing RF localization infrastructure to enhance the positioning accuracy. Meanwhile, our method is able to construct the 3D trajectory that the capsule has traveled, which can be used for identifying the positions of the detected abnormalities. The major contributions of this paper are:

• We explored the feasibility of using images to track the motion of the endoscopic capsule, and we evaluated the potential of combining the motion information with the existing RF localization infrastructure to enhance the localization accuracy as well as mapping inside the small intestine.

• We designed a testbed for performance evaluation of hybrid localization algorithms that benefits from the content of the endoscopic images as well as the features of the RF signal emitted from the video capsule. We used this testbed to demonstrate the effectiveness of the proposed hybrid localization algorithm inside the small intestine.

The rest of the paper is organized as follows: In Section II, we present the details of the proposed hybrid localization algorithm, including how to extract motion information of the WCE by processing the endoscopic image sequence and how to integrate the motion information with RF measurements to enhance the localization accuracy as well as construct the trajectory that the WCE has traveled. In Section III, we describe the testbed for evaluating the performance of the proposed hybrid localization algorithm. Both comparative results and analytical discussions are given to verify the advantage of the proposed hybrid localization technique over conventional RF localization systems. Finally, conclusions and future work are presented in Section IV.

II. FORMULATION OF HYBRID LOCALIZATION

Theoretically, higher positioning accuracy can be achieved by implementing a hybrid solution as opposed to single-source solutions [28]. For the video capsule endoscope, the only two data sources available for localization are the image sequences captured by the WCE's embedded visual sensor and the wireless RF signal used to transfer the images. In the subsequent subsections, we explain how to extract useful information from the endoscopic images and combine it with the RF localization infrastructure to enhance the localization accuracy and construct the trajectory the WCE has passed.

A. Motion Tracking Using Endoscopic Images

As mentioned in the previous section, the endoscopic capsule keeps taking pictures at short time intervals as it travels; thus, it is possible to obtain motion information, such as how quickly the capsule moves and the direction in which it moves, by analyzing the displacement of the common portion of the scene between consecutive image frames. In this section, we present a novel motion tracking algorithm for the endoscopic capsule based on analyzing the displacements of unique portions of the scene, referred to as feature points (FPs), between consecutive image frames. The proposed motion tracking algorithm consists of 3 steps: feature point matching, image unrolling, and quantitative calculation of motion parameters. The detailed procedure of each step is explained in the upcoming subsections.

1) Feature Points Detection: The purpose of feature point detection is to track the transformations, such as rotation, translation, and scaling, between frames to reflect the motion of the capsule. Since endoscopic images suffer from illumination variations and geometric distortions, it is very important that the FPs extracted from the reference frame can be accurately detected in the following frames. According to the literature [29], [30], more FPs can be more accurately located by the Affine Scale Invariant Feature Transform (ASIFT) algorithm compared to other methodologies when applied to WCE images. The ASIFT descriptor is defined by the affine camera model in Eq. 1, which makes it a well-suited matching tool for WCE images due to its immunity to viewpoint changes, blur, noise and spatial deformations.
Fig. 1. Feature points used for motion detection. (a) Feature points matching between consecutive frames. (b) Motion vectors by linking corresponding
feature points.
A = H_\lambda R_1(\psi) T_t R_2(\phi)
  = \lambda \begin{bmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{bmatrix}
    \begin{bmatrix} t & 0 \\ 0 & 1 \end{bmatrix}
    \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix}    (1)

where R represents rotation and T represents tilt; ψ is the rotation angle of the camera around the optical axis; φ is the longitude angle between the optical axis and a fixed vertical plane; and λ is the zoom parameter. The detailed procedure of FP matching using ASIFT can be found in [31]. An example of feature point matching is given in Fig. 1(a), in which blue "O" marks indicate the coordinates of the detected FPs in the reference frame and red marks indicate the coordinates of the matched FPs in the second frame. If we link the FP pairs on the same frame (as shown in Fig. 1(b)), motion vectors, which represent the displacements between frames, are generated. The magnitudes and distribution of these motion vectors reflect the motion of the endoscopic capsule during that time interval. Note that not all feature points need to be correctly matched: bad matches are filtered out by applying a threshold on the length of each motion vector compared with its neighboring vectors.
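The following is a minimal sketch of this matching step. Since the paper relies on ASIFT [31], which OpenCV does not expose as a ready-made API, plain SIFT with a ratio test is substituted here as an illustrative stand-in; the threshold values and the median-based neighborhood filter are assumptions, not the authors' exact pipeline.

```python
# Sketch of the feature-point matching step of Sec. II-A.1, assuming OpenCV.
# SIFT is used in place of ASIFT (which OpenCV does not ship directly).
import cv2
import numpy as np

def match_feature_points(ref_img, cur_img, ratio=0.75, len_tol=2.0):
    """Return matched FP coordinates and the surviving motion vectors."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_img, None)
    kp2, des2 = sift.detectAndCompute(cur_img, None)

    # Brute-force matching with Lowe's ratio test to reject ambiguous FPs.
    matcher = cv2.BFMatcher()
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]

    p1 = np.float32([kp1[m.queryIdx].pt for m in good])
    p2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Motion vectors = displacement of each FP between consecutive frames.
    vectors = p2 - p1
    lengths = np.linalg.norm(vectors, axis=1)

    # Bad-match filtering: drop vectors whose length deviates too much from
    # the median length (a simple proxy for the neighborhood comparison
    # described in the text).
    med = np.median(lengths) + 1e-9
    keep = (lengths < len_tol * med) & (lengths > med / len_tol)
    return p1[keep], p2[keep], vectors[keep]
```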
2) Image "Unrolling" for Motion Detection: To standardize the displacement of each FP pair and facilitate the quantitative calculation of motion parameters that are useful for localization, we perform an inverse cylindrical projection [32] (also referred to as "image unrolling" in [33]) to project the original cylindrical image onto a flattened-view coordinate system, which we call the "unrolled" image domain. As shown in Fig. 2, given a point P at distance d away from the camera, the angular depth of P is defined as:

\theta = \tan^{-1}\!\left(\frac{R}{d}\right)    (2)

where R represents the radius of the intestinal tube. It can be seen from Eq. 2 that a smaller angular depth indicates a larger distance away from the camera. To facilitate the derivation of the angular depth, we map the coordinates (x, y) of any point on the cylindrical image plane to the unrolled image plane (x', y') by:

x' = \frac{L\phi}{2\pi}, \qquad y' = r    (3)

where φ is the angle between point P and the horizontal axis in the cylindrical image plane (shown in Fig. 3(a)):

\phi = \tan^{-1}\!\left(\frac{y - y_0}{x - x_0}\right)    (4)

and r is the radius of the circular ring associated with point P, which can be calculated by:

r = \sqrt{(x - x_0)^2 + (y - y_0)^2}.    (5)

L and H are the length and height of the unrolled image plane, respectively. Fig. 3 illustrates the procedure of image unrolling. In this unrolled image plane, the x' axis represents the radian angle φ, whose value ranges from 0 (when x' = 0) to 2π (when x' = L), and the y' axis represents the angular depth, which reflects the distance away from the camera; y' = 0 represents a 0 angular depth.
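A direct transcription of Eqs. 2-5 into code is shown below; the image center (x0, y0) and the unrolled width L are assumed inputs, and arctan2 is used as the quadrant-safe form of the arctangent in Eq. 4.

```python
# Sketch of the image "unrolling" of Eqs. 2-5: map a pixel (x, y) in the
# cylindrical endoscopic image to the unrolled plane (x', y').
import numpy as np

def unroll_point(x, y, x0, y0, L):
    """Inverse cylindrical projection of one pixel (Eqs. 3-5)."""
    phi = np.arctan2(y - y0, x - x0) % (2 * np.pi)   # Eq. 4, wrapped to [0, 2pi)
    r = np.hypot(x - x0, y - y0)                     # Eq. 5
    x_u = L * phi / (2 * np.pi)                      # Eq. 3
    y_u = r
    return x_u, y_u

def angular_depth(R_tube, d):
    """Eq. 2: angular depth of a point at distance d down the intestinal tube."""
    return np.arctan(R_tube / d)
```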
Fig. 5. Direction of moving of the capsule.

R_t = \begin{bmatrix}
\cos\beta\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\
\cos\beta\sin\gamma & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma \\
-\sin\beta & \sin\alpha\cos\beta & \cos\alpha\cos\beta
\end{bmatrix}    (13)

\gamma = \frac{1}{N}\sum_{i=0}^{N} \frac{2\pi x_i}{L}    (17)
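The derivation surrounding these two fragments (Eqs. 6-18) is not present in this excerpt, so the sketch below rests on assumptions: that (α, β, γ) are Euler angles combined as R_z(γ)R_y(β)R_x(α) in Eq. 13, and that Eq. 17 averages the horizontal FP displacements x_i in the unrolled image of width L to estimate the rotation angle γ.

```python
# Hypothetical sketch for the recovered fragments above; the roles of
# alpha, beta, gamma and of x_i, L are assumptions, since the surrounding
# derivation is missing from this excerpt.
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Eq. 13: rotation matrix from three Euler angles (Rz @ Ry @ Rx)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    return np.array([
        [cb * cg, sa * sb * cg - ca * sg, ca * sb * cg + sa * sg],
        [cb * sg, sa * sb * sg + ca * cg, ca * sb * sg - sa * cg],
        [-sb,     sa * cb,                ca * cb],
    ])

def estimate_gamma(x_displacements, L):
    """Eq. 17: mean rotation angle implied by horizontal FP displacements."""
    x = np.asarray(x_displacements, dtype=float)
    return float(np.mean(2.0 * np.pi * x / L))
```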
Fig. 6. A complete flowchart of data fusion of images and RF measurements using a Kalman filter.
state at time t − 1 to the current motion state at time t. If we plug in all the parameters, Eq. 18 can be rewritten as:

\begin{bmatrix} x_t \\ y_t \\ z_t \\ n_{x|t} \\ n_{y|t} \\ n_{z|t} \end{bmatrix}
= \begin{bmatrix} \mathbf{I}_3 & vt\,\mathbf{I}_3 \\ \mathbf{0}_3 & \mathbf{R} \end{bmatrix}
\cdot \begin{bmatrix} x_{t-1} \\ y_{t-1} \\ z_{t-1} \\ n_{x|t-1} \\ n_{y|t-1} \\ n_{z|t-1} \end{bmatrix}    (19)

where v is the transition speed of the capsule derived from Eq. 10, t is the time interval between frames (half a second), I_3 is the 3 × 3 identity matrix, and R is the same rotation matrix introduced in Eq. 11.

We can use the motion state to predict the upcoming RF localization \hat{z}_t by:

\hat{z}_t = H_t \cdot m_t^- + \nu_t    (20)

where ν_t is a measurement noise term. Similar to ω_t, ν_t also follows a normal distribution, with covariance equal to R (here R denotes the measurement noise covariance of Eq. 26, not the rotation matrix of Eq. 19). H is a 3 × 6 matrix which predicts the RF localization based on the prior motion state at time t:

H = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}    (21)

Once the actual RF localization result z_t is available, we can use it to correct the predicted position of the capsule.
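A minimal sketch of this prediction step follows, assuming the state m stacks position and moving direction as in Eq. 19; the speed v (Eq. 10), the rotation matrix R (Eq. 11) and the process noise covariance Q are taken as given inputs from the image-based motion tracker.

```python
# Sketch of the Kalman prediction step (Eqs. 19-21). The state vector is
# m = [x, y, z, n_x, n_y, n_z]: position plus moving direction.
import numpy as np

def transition_matrix(v, dt, R):
    """Eq. 19: block transition matrix [[I, v*dt*I], [0, R]]."""
    A = np.zeros((6, 6))
    A[:3, :3] = np.eye(3)
    A[:3, 3:] = v * dt * np.eye(3)
    A[3:, 3:] = R
    return A

H = np.hstack([np.eye(3), np.zeros((3, 3))])  # Eq. 21: picks the position out of m

def predict(m_prev, P_prev, v, dt, R, Q):
    """A-priori state m^-_t, covariance P^-_t, and predicted RF fix (Eq. 20)."""
    A = transition_matrix(v, dt, R)
    m_pred = A @ m_prev
    P_pred = A @ P_prev @ A.T + Q
    z_pred = H @ m_pred
    return m_pred, P_pred, z_pred
```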
2) RF Localization Becomes Available: To obtain the actual RF measurement z_t, a set of calibrated external antennas is attached to the anterior abdominal wall of the human body to detect the wireless signal emitted by the capsule [18]. The received signal strength (RSS) is used to estimate the distance between the capsule and the body-mounted sensors using statistical channel models. The channel model used in this paper was developed by the National Institute of Standards and Technology (NIST) for the MICS band [34]:

L_p(d) = L_p(d_0) + 10\alpha \log_{10}\!\frac{d}{d_0} + S, \quad d > d_0    (22)

where L_p(d) represents the path loss in dB at a distance d between the transmitter and receiver, d_0 is a threshold distance that equals 10 cm, and α is the path loss gradient, which is determined by the propagation environment. The parameters of the path loss model are summarized in Table II.

Let (x, y, z) be the potential position of the capsule and (x_i, y_i, z_i) the position of body-mounted sensor i. The distance between the capsule and sensor i can be expressed as:

d_i = 10^{\frac{L_p(d) - L_p(d_0)}{10\alpha}}\, d_0    (23)

Given 3 or more estimated distances between the capsule and the body-mounted sensors, the 3D position of the capsule z_t can be calculated using a least squares algorithm by minimizing the function below:

f(x, y, z) = \sum_{i=1}^{N}\left[(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2 - d_i^2\right]^2    (24)
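The two operations of Eqs. 23 and 24 translate directly into code. The sketch below assumes the calibrated model parameters L_p(d_0) and α are known (Table II is not reproduced in this excerpt) and uses SciPy's generic least-squares solver in place of whatever solver the authors used.

```python
# Sketch of the RSS ranging and least-squares positioning of Eqs. 22-24.
import numpy as np
from scipy.optimize import least_squares

def rss_to_distance(path_loss_db, pl_d0_db, alpha, d0=0.10):
    """Eq. 23: invert the log-distance path-loss model; d0 = 10 cm."""
    return d0 * 10.0 ** ((path_loss_db - pl_d0_db) / (10.0 * alpha))

def localize(sensors, dists, x0=np.zeros(3)):
    """Eq. 24: minimize the squared-range residuals over (x, y, z).

    sensors: (N, 3) body-mounted sensor positions; dists: (N,) ranges d_i.
    """
    def residuals(p):
        return np.sum((p - sensors) ** 2, axis=1) - dists ** 2
    return least_squares(residuals, x0).x
```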
3) Correction Using RF Localization: After the actual RF localization z_t is obtained, we use the a priori motion estimate m_t^- and a weighted difference between the actual RF measurement z_t and the predicted RF measurement \hat{z}_t to correct the localization results:

m_t = m_t^- + K_t\left(z_t - \hat{z}_t\right)    (25)

where m_t is defined as the a posteriori motion state estimate given the RF measurement z_t. The 6 × 3 matrix K_t in Eq. 25 is called the Kalman gain. If we define the a priori estimate error covariance as P_t^- = E[(m_t - m_t^-)(m_t - m_t^-)^T] and the a posteriori estimate error covariance as P_t = E[(m_t - \hat{m}_t)(m_t - \hat{m}_t)^T], the Kalman gain can be expressed as:

K_t = P_t^- H^T \left(H P_t^- H^T + R\right)^{-1}    (26)

The Kalman gain controls the weights of both sensors in the final position estimate: if the RF measurement noise is low, the final estimate depends more on the RF measurement; otherwise, it depends more on the motion model. The whole process of data fusion is illustrated in Fig. 6.
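A sketch of the correction step, completing the predict/correct cycle of Fig. 6, is given below. Rn is the 3 × 3 RF measurement noise covariance (the "R" of Eq. 26), and the covariance update uses the standard (I − KH)P form, which the paper does not spell out in this excerpt.

```python
# Sketch of the correction step (Eqs. 25-26) fusing the a-priori motion
# state with the RSS-based fix z_rf.
import numpy as np

H = np.hstack([np.eye(3), np.zeros((3, 3))])  # Eq. 21, repeated for self-containedness

def correct(m_pred, P_pred, z_rf, Rn):
    S = H @ P_pred @ H.T + Rn                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Eq. 26: 6x3 Kalman gain
    m_post = m_pred + K @ (z_rf - H @ m_pred)  # Eq. 25
    P_post = (np.eye(6) - K @ H) @ P_pred      # standard covariance update
    return m_post, P_post

# One fusion cycle per frame pair: the image-derived (v, R) drives the
# prediction, and the RSS-based fix corrects it, e.g.:
#   m, P, _ = predict(m, P, v, dt, R, Q)
#   m, P = correct(m, P, z_rf, Rn)
```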
Fig. 7. Emulation testbed setup. (a) Human digestive system. (b) Path of the small intestine. (c) Virtual testbed for the small intestine. (d) Real endoscopic images. (e) Emulated endoscopic images.
Fig. 9. Localization results of different algorithms and performance evaluation. (a) Motion tracking results. (b) RF localization results. (c) Proposed hybrid
localization results and 3D trajectory reconstruction. (d) Evolution of localization error as the capsule moves.
previous measurements. Therefore, the localization error does not accumulate as the capsule moves along (shown in Fig. 9(d)).

Finally, we evaluated the performance of the proposed hybrid localization algorithm. The results are shown as the purple line in Fig. 9(c). After the combination of motion tracking and RF signals, the hybrid localization achieves a more continuous position estimate of the capsule, and the reconstructed path that the capsule has traveled matches the ground-truth path of the small intestine very well. From Fig. 9(d) we can see that, compared with the existing RSS-based localization system, the localization error of the hybrid localization stays stable at a very low level (2.3 cm on average) and does not increase as the capsule moves along. The error distributions and CDF plots of the above three algorithms are given in Fig. 10 and Fig. 11, respectively. From both statistical plots we can see that the localization accuracy of the proposed hybrid localization is much better than that of the traditional RSS-based RF localization. Since the diameter of the small intestine is approximately 2.5 to 3 cm, the localization accuracy that the proposed hybrid localization provides meets the requirement of the WCE application.
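For reference, the statistics behind plots such as Figs. 10 and 11 can be produced as follows; this is a generic sketch with illustrative names, not the authors' evaluation script.

```python
# Per-step localization errors against ground truth, and their empirical CDF.
import numpy as np

def localization_errors(estimates, ground_truth):
    """Euclidean error at each time step; inputs are (T, 3) trajectories."""
    return np.linalg.norm(estimates - ground_truth, axis=1)

def empirical_cdf(errors):
    """Sorted errors and cumulative probabilities, ready for a CDF plot."""
    e = np.sort(errors)
    p = np.arange(1, len(e) + 1) / len(e)
    return e, p
```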
Fig. 10. Error distributions of different algorithms.

Fig. 11. Performance evaluation by CDF plot of different algorithms.

IV. CONCLUSION

In this paper, we presented a hybrid localization technique that utilizes a camera motion tracking algorithm to aid the existing RF localization infrastructure for the WCE application. The major contribution of this work is that we demonstrated the potential of using the video source to aid the RF localization of the WCE. The proposed motion tracking technique is based purely on the image sequence captured by the video camera already equipped on the capsule; thus, no extra system components such as IMUs or magnetic coils are needed. The performance of the proposed method was validated in a virtual emulation environment. Experimental results show that, by combining the motion information with RF measurements, the proposed hybrid localization algorithm is able to provide accurate, smooth and continuous localization results that meet the requirement of the WCE application. In the future, we will focus on refining this algorithm according to clinical data and on testing it with real human subjects.

ACKNOWLEDGMENT

The authors would like to thank Dr. D. Cave of the UMass Memorial Medical Center for his precious suggestions, and the colleagues at the CWINS laboratory for their direct or indirect help in preparing the results presented in this paper.

REFERENCES
[1] K. Pahlavan, X. Li, and J.-P. Makela, "Indoor geolocation science and technology," IEEE Commun. Mag., vol. 40, no. 2, pp. 112–118, Feb. 2002.
[2] K. Pahlavan et al., "RF localization for wireless video capsule endoscopy," Int. J. Wireless Inf. Netw., vol. 19, no. 4, pp. 326–340, 2012.
[3] G. Ciuti, A. Menciassi, and P. Dario, "Capsule endoscopy: From current achievements to open challenges," IEEE Rev. Biomed. Eng., vol. 4, pp. 59–72, Oct. 2011.
[4] L. R. Fisher and W. L. Hasler, "New vision in video capsule endoscopy: Current status and future directions," Nature Rev. Gastroenterol. Hepatol., vol. 9, no. 7, pp. 392–405, 2012.
[5] Y. Ye, P. Swar, K. Pahlavan, and K. Ghaboosi, "Accuracy of RSS-based RF localization in multi-capsule endoscopy," Int. J. Wireless Inf. Netw., vol. 19, no. 3, pp. 229–238, 2012.
[6] M. Pourhomayoun, Z. Jin, and M. Fowler, "Accurate localization of in-body medical implants based on spatial sparsity," IEEE Trans. Biomed. Eng., vol. 61, no. 2, pp. 590–597, 2013.
[7] C. Hu, M. Q. Meng, and M. Mandal, "Efficient magnetic localization and orientation technique for capsule endoscopy," Int. J. Inf. Acquisition, vol. 2, no. 1, pp. 23–36, 2005.
[8] M. Salerno et al., "A discrete-time localization method for capsule endoscopy based on on-board magnetic sensing," Meas. Sci. Technol., vol. 23, no. 1, p. 015701, 2012.
[9] F. Carpi, S. Galbiati, and A. Carpi, "Controlled navigation of endoscopic capsules: Concept and preliminary experimental investigations," IEEE Trans. Biomed. Eng., vol. 54, no. 11, pp. 2028–2036, Nov. 2007.
[10] F. Carpi, N. Kastelein, M. Talcott, and C. Pappone, "Magnetically controllable gastrointestinal steering of video capsules," IEEE Trans. Biomed. Eng., vol. 58, no. 2, pp. 231–234, Feb. 2011.
[11] X. Wang, M. Q.-H. Meng, and C. Hu, "A localization method using 3-axis magnetoresistive sensors for tracking of capsule endoscope," in Proc. 28th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBS), Aug./Sep. 2006, pp. 2522–2525.
[12] T. D. Than, G. Alici, H. Zhou, and W. Li, "A review of localization systems for robotic endoscopic capsules," IEEE Trans. Biomed. Eng., vol. 59, no. 9, pp. 2387–2399, Sep. 2012.
[13] H. Jacob, M. Frisch, D. Levy, A. Glukhovsky, R. Shreiber, and S. Adler, "Localization of the given M2A ingestible capsule in the given diagnostic imaging system," Amer. J. Gastroenterol., vol. 96, no. 9, pp. S106–S107, 2001.
[14] K. W. Cheung, H. C. So, W.-K. Ma, and Y. T. Chan, "Least squares algorithms for time-of-arrival-based mobile location," IEEE Trans. Signal Process., vol. 52, no. 4, pp. 1121–1130, Apr. 2004.
[15] S. Li, Y. Geng, J. He, and K. Pahlavan, "Analysis of three-dimensional maximum likelihood algorithm for capsule endoscopy localization," in Proc. 5th IEEE Int. Conf. Biomed. Eng. Inf. (BMEI), Oct. 2012, pp. 721–725.
[16] A. Alomainy and Y. Hao, "Modeling and characterization of biotelemetric radio channel from ingested implants considering organ contents," IEEE Trans. Antennas Propag., vol. 57, no. 4, pp. 999–1005, Apr. 2009.
[17] D. O. Faigel and D. R. Cave, Capsule Endoscopy. Amsterdam, The Netherlands: Elsevier, 2008.
[18] F. D. Iorio et al., "Intestinal motor activity, endoluminal motion and transit," Neurogastroenterol. Motility, vol. 21, no. 12, pp. 1264–e119, Dec. 2009.
[19] G. Bao, Y. Ye, U. Khan, X. Zheng, and K. Pahlavan, "Modeling of the movement of the endoscopy capsule inside GI tract based on the captured endoscopic images," in Proc. Int. Conf. Modeling, Simulation Visualizat. Methods, Jul. 2012.
[20] G. Bao and K. Pahlavan, "Motion estimation of the endoscopy capsule using region-based kernel SVM classifier," in Proc. IEEE Int. Conf. Electro Inf. Technol., May 2013, pp. 1–5.
[21] N. Marya, A. Karellas, A. Foley, A. Roychowdhury, and D. Cave, "Computerized 3-dimensional localization of a video capsule in the abdominal cavity: Validation by digital radiography," Gastrointestinal Endoscopy, vol. 79, no. 4, pp. 669–674, 2014.
[22] J. He, Y. Geng, Y. Wan, S. Li, and K. Pahlavan, "A cyber physical test-bed for virtualization of RF access environment for body sensor network," IEEE Sensors J., vol. 13, no. 10, pp. 3826–3836, Oct. 2013.
[23] K. Pahlavan, G. Bao, and M. Liang, "Body-SLAM: Simultaneous localization and mapping inside the human body," in Proc. Keynote Speech, 8th Int. Conf. Body Area Netw. (BodyNets), vol. 2, Boston, MA, USA, Sep./Oct. 2013.
[24] G. Bao, "On simultaneous localization and mapping inside the human body (body-SLAM)," Ph.D. dissertation, Dept. Elect. Comput. Eng., Worcester Polytechnic Inst., Worcester, MA, USA, 2014.
[25] G. Silveira, E. Malis, and P. Rives, "An efficient direct approach to visual SLAM," IEEE Trans. Robot., vol. 24, no. 5, pp. 969–979, Oct. 2008.
[26] J. A. Castellanos, J. Neira, and J. D. Tardós, "Multisensor fusion for simultaneous localization and map building," IEEE Trans. Robot. Autom., vol. 17, no. 6, pp. 908–914, Dec. 2001.
[27] A. J. Davison and D. W. Murray, "Simultaneous localization and map-building using active vision," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 865–880, Jul. 2002.
[28] L. Jetto, S. Longhi, and G. Venturini, "Development and experimental validation of an adaptive extended Kalman filter for the localization of mobile robots," IEEE Trans. Robot. Autom., vol. 15, no. 2, pp. 219–229, Apr. 1999.
[29] Y. Fan and M. Q.-H. Meng, "3D reconstruction of the WCE images by affine SIFT method," in Proc. 9th World Congr. Intell. Control Autom. (WCICA), Jun. 2011, pp. 943–947.
[30] H.-G. Lee, M.-K. Choi, and S.-C. Lee, "Motion analysis for duplicate frame removal in wireless capsule endoscope," Proc. SPIE, vol. 7962, p. 79621T, Mar. 2011.
[31] J.-M. Morel and G. Yu, "ASIFT: A new framework for fully affine invariant image comparison," SIAM J. Imag. Sci., vol. 2, no. 2, pp. 438–469, Apr. 2009.
[32] T. Tillo, E. Lim, Z. Wang, J. Hang, and R. Qian, "Inverse projection of the wireless capsule endoscopy images," in Proc. Int. Conf. Biomed. Eng. Comput. Sci. (ICBECS), Apr. 2010, pp. 1–4.
[33] S. Sathyanarayana, S. Thambipillai, and C. T. Clarke, "Real time tracking of camera motion through cylindrical passages," in Proc. IEEE 15th Int. Conf. Digit. Signal Process., Jul. 2007, pp. 455–458.
[34] K. Sayrafian-Pour, W.-B. Yang, J. Hagedorn, J. Terrill, and K. Y. Yazdandoost, "A statistical path loss model for medical implant communication channels," in Proc. IEEE 20th Int. Symp. Pers., Indoor Mobile Radio Commun., Sep. 2009, pp. 2995–2999.
[35] G. Bao, L. Mi, and K. Pahlavan, "Emulation on motion tracking of endoscopic capsule inside small intestine," in Proc. World Congr. Comput. Sci., Comput. Eng., Appl. Comput., Las Vegas, NV, USA, 2013.
[36] L. France et al., "A layered model of a virtual human intestine for surgery simulation," Med. Image Anal., vol. 9, no. 2, pp. 123–132, 2005.
[37] L. Mi, G. Bao, and K. Pahlavan, "Design and validation of a virtual environment for experimentation inside the small intestine," in Proc. 8th Int. Conf. Body Area Netw., 2013, pp. 35–40.
[38] S. Seshamani, W. Lau, and G. Hager, "Real-time endoscopic mosaicking," in Proc. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI), 2006, pp. 355–363.
[39] P. M. Szczypiński, R. D. Sriram, P. V. J. Sriram, and D. N. Reddy, "A model of deformable rings for interpretation of wireless capsule endoscopic videos," Med. Image Anal., vol. 13, no. 2, pp. 312–324, Apr. 2009.

Guanqun Bao received the Ph.D. degree in electrical and computer engineering from the Worcester Polytechnic Institute, Worcester, MA, USA, in 2014; the B.S. degree in information engineering from Zhejiang University, Hangzhou, China, in 2008; and the M.S. degree in electrical engineering from the University of Toledo, Toledo, OH, USA, in 2011. His current research includes body area networks, hybrid localization, Wi-Fi localization, and biomedical image processing.

Kaveh Pahlavan is currently a Professor of Electrical and Computer Engineering, a Professor of Computer Science, and the Director of the Center for Wireless Information Network Studies, Worcester Polytechnic Institute, Worcester, MA, USA, and Chief Technical Advisor at Skyhook Wireless, Boston, MA, USA. His current area of research is opportunistic localization for body area networks and robotics applications.

Liang Mi is currently pursuing the master's degree at the Department of Electrical and Computer Engineering, Worcester Polytechnic Institute, Worcester, MA, USA. He received the B.S. degree in remote sensing science and technology from the Harbin Institute of Technology, Harbin, China. His previous research interests include remote sensing image processing. His current research interest is environment emulation for body area networks.