Sensor fusion
Sensor fusion is the process of combining sensor data or data derived from disparate sources so that the resulting information has less uncertainty than would be possible if these sources were used individually. For instance, one could potentially obtain a more accurate location estimate of an indoor object by combining multiple data sources such as video cameras and WiFi localization signals. The term uncertainty reduction in this case can mean more accurate, more complete, or more dependable, or refer to the result of an emerging view, such as stereoscopic vision (calculation of depth information by combining two-dimensional images from two cameras at slightly different viewpoints).[1][2]
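As a hedged, worked illustration of the stereoscopic-vision example above, depth can be recovered from the disparity between the two camera images using the standard pinhole-stereo relation; the focal length, baseline and disparity values below are made-up assumptions:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Pinhole stereo relation: depth Z = f * B / d."""
    return focal_length_px * baseline_m / disparity_px

# Two cameras 10 cm apart, 700 px focal length, a feature shifted 35 px between views.
print(depth_from_disparity(700.0, 0.10, 35.0))  # -> 2.0 (metres)
```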
The data sources for a fusion process are not required to originate from identical sensors. One can distinguish direct fusion, indirect fusion and fusion of the outputs of the former two. Direct fusion is the fusion of sensor data from a set of heterogeneous or homogeneous sensors, soft sensors, and history values of sensor data, while indirect fusion uses information sources like a priori knowledge about the environment and human input.
Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.
Examples of sensors
Accelerometers
Electronic Support Measures (ESM)
Flash LIDAR
Global Positioning System (GPS)
Infrared / thermal imaging camera
Magnetic sensors
MEMS
Phased array
Radar
Radiotelescopes, such as the proposed Square Kilometre Array, the largest sensor ever to
be built
Scanning LIDAR
Seismic sensors
Sonar and other acoustic
Sonobuoys
TV cameras
See also: List of sensors
Algorithms
Sensor fusion is a term that covers a number of methods and algorithms, including:
Kalman filter[3]
Bayesian networks
Dempster–Shafer
Convolutional neural network
Gaussian processes[4][5]
Example calculations
Two example sensor fusion calculations are illustrated below.
Let $x_1$ and $x_2$ denote two sensor measurements with noise variances $\sigma_1^2$ and $\sigma_2^2$, respectively. One way of obtaining a combined measurement $x_3$ is to apply inverse-variance weighting, which is also employed within the Fraser–Potter fixed-interval smoother, namely[6]

$$x_3 = \sigma_3^2 \left( \sigma_1^{-2} x_1 + \sigma_2^{-2} x_2 \right),$$

where $\sigma_3^2 = \left( \sigma_1^{-2} + \sigma_2^{-2} \right)^{-1}$ is the variance of the combined estimate. It can be seen that the fused result is simply a linear combination of the two measurements weighted by their respective inverse noise variances.
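A minimal numerical sketch of the inverse-variance weighting described above; the sensor names and noise values are illustrative assumptions, not from the cited sources:

```python
def inverse_variance_fusion(x1, var1, x2, var2):
    """Fuse two noisy measurements of the same quantity by inverse-variance weighting."""
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)   # sigma_3^2
    fused = fused_var * (x1 / var1 + x2 / var2)   # x_3
    return fused, fused_var

# Hypothetical camera-based and WiFi-based estimates of the same 1-D position.
pos, var = inverse_variance_fusion(2.1, 0.04, 2.6, 0.25)
print(pos, var)  # fused value is pulled toward the lower-variance (camera) estimate
```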
Another method to fuse two measurements is to use the optimal Kalman filter. Suppose that the data is generated by a first-order system and let $P_k$ denote the solution of the filter's Riccati equation. By applying Cramer's rule within the gain calculation it can be found that the filter gain is given by

$$L_k = \begin{bmatrix} \dfrac{\sigma_2^2 P_k}{\sigma_2^2 P_k + \sigma_1^2 P_k + \sigma_1^2 \sigma_2^2} & \dfrac{\sigma_1^2 P_k}{\sigma_2^2 P_k + \sigma_1^2 P_k + \sigma_1^2 \sigma_2^2} \end{bmatrix}.$$

By inspection, when the first measurement is noise free, the filter ignores the second measurement and vice versa. That is, the combined estimate is weighted by the quality of the measurements.
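The following sketch applies a simple one-dimensional Kalman filter that fuses two measurement channels of the same state, consistent with the gain expression above; the system parameters, noise variances and NumPy implementation are illustrative assumptions:

```python
import numpy as np

# Illustrative first-order system observed by two sensors with noise
# variances var1 and var2; both sensors measure the state directly.
a, q = 1.0, 0.01                      # state transition and process-noise variance
var1, var2 = 0.04, 0.25               # sigma_1^2, sigma_2^2
H = np.array([[1.0], [1.0]])          # 2x1 observation matrix
R = np.diag([var1, var2])

x_est, P = 0.0, 1.0                   # initial estimate and covariance
for z1, z2 in [(1.02, 1.20), (0.98, 0.80), (1.05, 1.10)]:
    # Predict
    x_pred = a * x_est
    P_pred = a * P * a + q
    # Update: gain L = P H^T (H P H^T + R)^{-1}, a 1x2 row vector here
    S = H @ np.array([[P_pred]]) @ H.T + R
    L = np.array([[P_pred]]) @ H.T @ np.linalg.inv(S)
    z = np.array([[z1], [z2]])
    x_est = float(x_pred + (L @ (z - H * x_pred)).item())
    P = float(P_pred - (L @ H).item() * P_pred)
    print(round(x_est, 3), round(P, 4))
```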
Centralized versus decentralized
In sensor fusion, centralized versus decentralized refers to where the fusion of the data occurs. In
centralized fusion, the clients simply forward all of the data to a central location, and some entity at the
central location is responsible for correlating and fusing the data. In decentralized fusion, the clients take full
responsibility for fusing the data. "In this case, every sensor or platform can be viewed as an intelligent
asset having some degree of autonomy in decision-making."[7]
Multiple combinations of centralized and decentralized systems exist.
Another classification of sensor configuration refers to the coordination of information flow between sensors.[8][9] These mechanisms provide a way to resolve conflicts or disagreements and to allow the development of dynamic sensing strategies. Sensors are in a redundant (or competitive) configuration if each node delivers independent measurements of the same properties. This configuration can be used for error correction when comparing information from multiple nodes, and redundant strategies are often used in high-level fusion via voting procedures.[10][11] A complementary configuration occurs when multiple information sources supply different information about the same features. This strategy is used for fusing information at the raw-data level within decision-making algorithms; complementary features are typically applied in motion recognition tasks with neural networks,[12][13] hidden Markov models,[14][15] support-vector machines,[16] clustering methods and other techniques.[16][15] Cooperative sensor fusion uses the information extracted by multiple independent sensors to provide information that would not be available from any single sensor: for example, sensors attached to adjacent body segments can be combined to estimate the angle between them. Cooperative information fusion can be used in motion recognition,[17] gait analysis and motion analysis.[18][19][20]
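As a toy illustration of the cooperative case, the angle between two body segments can be estimated from the direction each segment's inertial node reports; the vectors and the `joint_angle` helper below are hypothetical and not taken from the cited works:

```python
import numpy as np

def joint_angle(seg_a_dir, seg_b_dir):
    """Angle (degrees) between two body-segment direction vectors,
    each estimated by its own inertial sensing node."""
    a, b = np.asarray(seg_a_dir, float), np.asarray(seg_b_dir, float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical thigh and shank direction estimates from two separate IMU nodes.
print(joint_angle([0.0, 0.0, 1.0], [0.0, 0.5, 0.87]))  # approximate flexion angle
```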
Levels
There are several categories or levels of sensor fusion that are commonly used.[21][22][23][24][25][26]
Level 0 – Data alignment
Level 1 – Entity assessment (e.g. signal/feature/object).
Tracking and object detection/recognition/identification
Level 2 – Situation assessment
Level 3 – Impact assessment
Level 4 – Process refinement (i.e. sensor management)
Level 5 – User refinement
Sensor fusion level can also be defined based on the kind of information used to feed the fusion algorithm.[27] More precisely, sensor fusion can be performed by fusing raw data coming from different sources, extracted features, or even decisions made by single nodes.
Data level - data level (or early) fusion aims to fuse raw data from multiple sources and represents the fusion technique at the lowest level of abstraction. It is the most common sensor fusion technique in many fields of application. Data level fusion algorithms usually aim to combine multiple homogeneous sources of sensory data to achieve more accurate and synthetic readings.[28] When portable devices are employed, data compression represents an important factor, since collecting raw information from multiple sources generates huge information spaces that can pose an issue in terms of memory or communication bandwidth for portable systems. Data level information fusion tends to generate large input spaces that slow down the decision-making procedure. Also, data level fusion often cannot handle incomplete measurements: if one sensor modality becomes useless due to malfunctions, breakdown or other reasons, the whole system could yield ambiguous outcomes.
Feature level - features represent information computed on board by each sensing node. These features are then sent to a fusion node to feed the fusion algorithm.[29] This procedure generates smaller information spaces than data level fusion, which is better in terms of computational load. It is important to properly select the features on which classification procedures are defined: choosing the most efficient feature set should be a main aspect of method design. Using feature selection algorithms that properly detect correlated features and feature subsets improves the recognition accuracy, but large training sets are usually required to find the most significant feature subset.[27]
Decision level - decision level (or late) fusion is the procedure of selecting a hypothesis from a set of hypotheses generated by individual (usually weaker) decisions of multiple nodes.[30] It is the highest level of abstraction and uses information that has already been elaborated through preliminary data-level or feature-level processing. The main goal in decision fusion is to use a meta-level classifier while data from the nodes are preprocessed by extracting features from them.[31] Typically, decision level sensor fusion is used in classification and recognition activities, and the two most common approaches are majority voting and Naive Bayes (majority voting is illustrated in the sketch after this list). Advantages coming from decision level fusion include reduced communication bandwidth requirements and improved decision accuracy. It also allows the combination of heterogeneous sensors.[29]
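A minimal sketch of decision-level fusion by majority voting across three hypothetical classifier nodes; the labels and votes are illustrative assumptions:

```python
from collections import Counter

def majority_vote(decisions):
    """Decision-level (late) fusion: each node reports a class label and the
    fused decision is the most frequent label."""
    return Counter(decisions).most_common(1)[0][0]

# Three nodes classify the same activity window independently.
print(majority_vote(["walking", "walking", "running"]))  # -> "walking"
```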
Applications
One application of sensor fusion is GPS/INS, where Global Positioning System and inertial navigation system data are fused using various methods, e.g. the extended Kalman filter. This is useful, for example, in determining the attitude of an aircraft using low-cost sensors.[32] Another example is using the data fusion approach to determine the traffic state (low traffic, traffic jam, medium flow) from roadside-collected acoustic, image and sensor data.[33] In the field of autonomous driving, sensor fusion is used to combine the redundant information from complementary sensors in order to obtain a more accurate and reliable representation of the environment.[34]
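As a simplified stand-in for the attitude-estimation example (the cited work uses an extended Kalman filter), the sketch below fuses gyroscope and accelerometer data with a complementary filter; all sample values and the `alpha` blending weight are assumptions:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyroscope rate (deg/s) and accelerometer-derived angle (deg):
    the gyro provides smooth short-term changes, the accelerometer bounds drift."""
    angle = accel_angles[0]
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc_angle
        estimates.append(angle)
    return estimates

# Made-up samples: a slowly drifting gyro and a noisy accelerometer angle.
print(complementary_filter([0.5, 0.4, 0.6, 0.5], [0.0, 0.2, -0.1, 0.1]))
```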
Although technically not a dedicated sensor fusion method, modern convolutional neural network based methods can simultaneously process many channels of sensor data (such as hyperspectral imaging with hundreds of bands[35]) and fuse relevant information to produce classification results.
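A hedged sketch of this idea, assuming PyTorch and a hypothetical 200-band hyperspectral patch; the architecture is illustrative only:

```python
import torch
import torch.nn as nn

# A tiny CNN whose first convolution ingests all spectral bands of a
# hyperspectral pixel patch at once, implicitly fusing them.
bands, patch, classes = 200, 9, 16          # assumed band count, patch size, classes
model = nn.Sequential(
    nn.Conv2d(bands, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, classes),
)
x = torch.randn(1, bands, patch, patch)     # one synthetic patch of sensor data
print(model(x).shape)                       # -> torch.Size([1, 16])
```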
See also
Brooks – Iyengar algorithm
Data (computing)
Data mining
Fisher's method for combining independent tests of significance
Image fusion
Multimodal integration
Sensor grid
Transducer Markup Language (TML) is an XML-based markup language which enables sensor fusion.
References
1. Elmenreich, W. (2002). Sensor Fusion in Time-Triggered Systems, PhD Thesis (https://mobil
e.aau.at/~welmenre/papers/elmenreich_Dissertation_sensorFusionInTimeTriggeredSystem
s.pdf) (PDF). Vienna, Austria: Vienna University of Technology. p. 173.
2. Haghighat, Mohammad Bagher Akbari; Aghagolzadeh, Ali; Seyedarabi, Hadi (2011). "Multi-
focus image fusion for visual sensor networks in DCT domain". Computers & Electrical
Engineering. 37 (5): 789–797. doi:10.1016/j.compeleceng.2011.04.016 (https://doi.org/10.10
16%2Fj.compeleceng.2011.04.016). S2CID 38131177 (https://api.semanticscholar.org/Corp
usID:38131177).
3. Li, Wangyan; Wang, Zidong; Wei, Guoliang; Ma, Lifeng; Hu, Jun; Ding, Derui (2015). "A
Survey on Multisensor Fusion and Consensus Filtering for Sensor Networks" (https://doi.org/
10.1155%2F2015%2F683701). Discrete Dynamics in Nature and Society. 2015: 1–12.
doi:10.1155/2015/683701 (https://doi.org/10.1155%2F2015%2F683701). ISSN 1026-0226
(https://www.worldcat.org/issn/1026-0226).
4. Badeli, Vahid; Ranftl, Sascha; Melito, Gian Marco; Reinbacher-Köstinger, Alice; Von Der
Linden, Wolfgang; Ellermann, Katrin; Biro, Oszkar (2021-01-01). "Bayesian inference of
multi-sensors impedance cardiography for detection of aortic dissection" (https://doi.org/10.1
108/COMPEL-03-2021-0072). COMPEL - the International Journal for Computation and
Mathematics in Electrical and Electronic Engineering. 41 (3): 824–839.
doi:10.1108/COMPEL-03-2021-0072 (https://doi.org/10.1108%2FCOMPEL-03-2021-0072).
ISSN 0332-1649 (https://www.worldcat.org/issn/0332-1649). S2CID 245299500 (https://api.s
emanticscholar.org/CorpusID:245299500).
5. Ranftl, Sascha; Melito, Gian Marco; Badeli, Vahid; Reinbacher-Köstinger, Alice; Ellermann,
Katrin; von der Linden, Wolfgang (2019-12-31). "Bayesian Uncertainty Quantification with
Multi-Fidelity Data and Gaussian Processes for Impedance Cardiography of Aortic
Dissection" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7516489). Entropy. 22 (1): 58.
doi:10.3390/e22010058 (https://doi.org/10.3390%2Fe22010058). ISSN 1099-4300 (https://w
ww.worldcat.org/issn/1099-4300). PMC 7516489 (https://www.ncbi.nlm.nih.gov/pmc/articles/
PMC7516489). PMID 33285833 (https://pubmed.ncbi.nlm.nih.gov/33285833).
6. Maybeck, S. (1982). Stochastic Models, Estimating, and Control. River Edge, NJ: Academic
Press.
7. N. Xiong; P. Svensson (2002). "Multi-sensor management for information fusion: issues and
approaches" (http://www.elsevier.com/locate/inffus). Information Fusion. 3 (2): 163–186.
8. Durrant-Whyte, Hugh F. (2016). "Sensor Models and Multisensor Integration". The
International Journal of Robotics Research. 7 (6): 97–113.
doi:10.1177/027836498800700608 (https://doi.org/10.1177%2F027836498800700608).
ISSN 0278-3649 (https://www.worldcat.org/issn/0278-3649). S2CID 35656213 (https://api.se
manticscholar.org/CorpusID:35656213).
9. Galar, Diego; Kumar, Uday (2017). eMaintenance: Essential Electronic Tools for Efficiency.
Academic Press. p. 26. ISBN 9780128111543.
10. Li, Wenfeng; Bao, Junrong; Fu, Xiuwen; Fortino, Giancarlo; Galzarano, Stefano (2012).
"Human Postures Recognition Based on D-S Evidence Theory and Multi-sensor Data
Fusion". 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid
Computing (ccgrid 2012). pp. 912–917. doi:10.1109/CCGrid.2012.144 (https://doi.org/10.110
9%2FCCGrid.2012.144). ISBN 978-1-4673-1395-7. S2CID 1571720 (https://api.semanticsc
holar.org/CorpusID:1571720).
11. Fortino, Giancarlo; Gravina, Raffaele (2015). "Fall-MobileGuard: a Smart Real-Time Fall
Detection System" (https://semanticscholar.org/paper/77d6b7209d5da2ae41fd10eea65e047
fe2264e7b). Proceedings of the 10th EAI International Conference on Body Area Networks.
doi:10.4108/eai.28-9-2015.2261462 (https://doi.org/10.4108%2Feai.28-9-2015.2261462).
ISBN 978-1-63190-084-6. S2CID 38913107 (https://api.semanticscholar.org/CorpusID:3891
3107).
12. Tao, Shuai; Zhang, Xiaowei; Cai, Huaying; Lv, Zeping; Hu, Caiyou; Xie, Haiqun (2018).
"Gait based biometric personal authentication by using MEMS inertial sensors". Journal of
Ambient Intelligence and Humanized Computing. 9 (5): 1705–1712. doi:10.1007/s12652-
018-0880-6 (https://doi.org/10.1007%2Fs12652-018-0880-6). ISSN 1868-5137 (https://www.
worldcat.org/issn/1868-5137). S2CID 52304214 (https://api.semanticscholar.org/CorpusID:5
2304214).
13. Dehzangi, Omid; Taherisadr, Mojtaba; ChangalVala, Raghvendar (2017). "IMU-Based Gait
Recognition Using Convolutional Neural Networks and Multi-Sensor Fusion" (https://www.n
cbi.nlm.nih.gov/pmc/articles/PMC5750784). Sensors. 17 (12): 2735.
Bibcode:2017Senso..17.2735D (https://ui.adsabs.harvard.edu/abs/2017Senso..17.2735D).
doi:10.3390/s17122735 (https://doi.org/10.3390%2Fs17122735). ISSN 1424-8220 (https://w
ww.worldcat.org/issn/1424-8220). PMC 5750784 (https://www.ncbi.nlm.nih.gov/pmc/articles/
PMC5750784). PMID 29186887 (https://pubmed.ncbi.nlm.nih.gov/29186887).
14. Guenterberg, E.; Yang, A.Y.; Ghasemzadeh, H.; Jafari, R.; Bajcsy, R.; Sastry, S.S. (2009). "A
Method for Extracting Temporal Parameters Based on Hidden Markov Models in Body
Sensor Networks With Inertial Sensors" (http://www.eecs.berkeley.edu/~yang/paper/Guenter
bergE-Biomedicine.pdf) (PDF). IEEE Transactions on Information Technology in
Biomedicine. 13 (6): 1019–1030. doi:10.1109/TITB.2009.2028421 (https://doi.org/10.1109%
2FTITB.2009.2028421). ISSN 1089-7771 (https://www.worldcat.org/issn/1089-7771).
PMID 19726268 (https://pubmed.ncbi.nlm.nih.gov/19726268). S2CID 1829011 (https://api.se
manticscholar.org/CorpusID:1829011).
15. Parisi, Federico; Ferrari, Gianluigi; Giuberti, Matteo; Contin, Laura; Cimolin, Veronica;
Azzaro, Corrado; Albani, Giovanni; Mauro, Alessandro (2016). "Inertial BSN-Based
Characterization and Automatic UPDRS Evaluation of the Gait Task of Parkinsonians". IEEE
Transactions on Affective Computing. 7 (3): 258–271. doi:10.1109/TAFFC.2016.2549533 (htt
ps://doi.org/10.1109%2FTAFFC.2016.2549533). ISSN 1949-3045 (https://www.worldcat.org/
issn/1949-3045). S2CID 16866555 (https://api.semanticscholar.org/CorpusID:16866555).
16. Gao, Lei; Bourke, A.K.; Nelson, John (2014). "Evaluation of accelerometer based multi-
sensor versus single-sensor activity recognition systems". Medical Engineering & Physics.
36 (6): 779–785. doi:10.1016/j.medengphy.2014.02.012 (https://doi.org/10.1016%2Fj.meden
gphy.2014.02.012). ISSN 1350-4533 (https://www.worldcat.org/issn/1350-4533).
PMID 24636448 (https://pubmed.ncbi.nlm.nih.gov/24636448).
17. Xu, James Y.; Wang, Yan; Barrett, Mick; Dobkin, Bruce; Pottie, Greg J.; Kaiser, William J.
(2016). "Personalized Multilayer Daily Life Profiling Through Context Enabled Activity
Classification and Motion Reconstruction: An Integrated System Approach". IEEE Journal of
Biomedical and Health Informatics. 20 (1): 177–188. doi:10.1109/JBHI.2014.2385694 (http
s://doi.org/10.1109%2FJBHI.2014.2385694). ISSN 2168-2194 (https://www.worldcat.org/iss
n/2168-2194). PMID 25546868 (https://pubmed.ncbi.nlm.nih.gov/25546868).
S2CID 16785375 (https://api.semanticscholar.org/CorpusID:16785375).
18. Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo;
Monticone, Marco; Ferrante, Simona (2015). "A Novel Adaptive, Real-Time Algorithm to
Detect Gait Events From Wearable Sensors". IEEE Transactions on Neural Systems and
Rehabilitation Engineering. 23 (3): 413–422. doi:10.1109/TNSRE.2014.2337914 (https://doi.
org/10.1109%2FTNSRE.2014.2337914). ISSN 1534-4320 (https://www.worldcat.org/issn/15
34-4320). PMID 25069118 (https://pubmed.ncbi.nlm.nih.gov/25069118). S2CID 25828466
(https://api.semanticscholar.org/CorpusID:25828466).
19. Wang, Zhelong; Qiu, Sen; Cao, Zhongkai; Jiang, Ming (2013). "Quantitative assessment of
dual gait analysis based on inertial sensors with body sensor network". Sensor Review. 33
(1): 48–56. doi:10.1108/02602281311294342 (https://doi.org/10.1108%2F02602281311294
342). ISSN 0260-2288 (https://www.worldcat.org/issn/0260-2288).
20. Kong, Weisheng; Wanning, Lauren; Sessa, Salvatore; Zecca, Massimiliano; Magistro,
Daniele; Takeuchi, Hikaru; Kawashima, Ryuta; Takanishi, Atsuo (2017). "Step Sequence
and Direction Detection of Four Square Step Test" (http://irep.ntu.ac.uk/id/eprint/33502/1/109
99_Magistro.pdf) (PDF). IEEE Robotics and Automation Letters. 2 (4): 2194–2200.
doi:10.1109/LRA.2017.2723929 (https://doi.org/10.1109%2FLRA.2017.2723929).
ISSN 2377-3766 (https://www.worldcat.org/issn/2377-3766). S2CID 23410874 (https://api.se
manticscholar.org/CorpusID:23410874).
21. Rethinking JDL Data Fusion Levels (http://www.infofusion.buffalo.edu/tm/Dr.Llinas'stuff/Rethi
nking%20JDL%20Data%20Fusion%20Levels_BowmanSteinberg.pdf)
22. Blasch, E., Plano, S. (2003) “Level 5: User Refinement to aid the Fusion Process”,
Proceedings of the SPIE, Vol. 5099.
23. J. Llinas; C. Bowman; G. Rogova; A. Steinberg; E. Waltz; F. White (2004). Revisiting the JDL
data fusion model II. International Conference on Information Fusion.
CiteSeerX 10.1.1.58.2996 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.58.29
96).
24. Blasch, E. (2006) "Sensor, user, mission (SUM) resource management and their interaction
with level 2/3 fusion (http://www.iut-amiens.fr/~ricquebourg/these/fusion_2006/Papers/394.p
df)" International Conference on Information Fusion.
25. "Harnessing the full power of sensor fusion -" (http://defensesystems.com/articles/2009/09/0
2/c4isr1-sensor-fusion.aspx).
26. Blasch, E., Steinberg, A., Das, S., Llinas, J., Chong, C.-Y., Kessler, O., Waltz, E., White, F.
(2013) "Revisiting the JDL model for information Exploitation," International Conference on
Information Fusion.
27. Gravina, Raffaele; Alinia, Parastoo; Ghasemzadeh, Hassan; Fortino, Giancarlo (2017).
"Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges".
Information Fusion. 35: 68–80. doi:10.1016/j.inffus.2016.09.005 (https://doi.org/10.1016%2F
j.inffus.2016.09.005). ISSN 1566-2535 (https://www.worldcat.org/issn/1566-2535).
S2CID 40608207 (https://api.semanticscholar.org/CorpusID:40608207).
28. Gao, Teng; Song, Jin-Yan; Zou, Ji-Yan; Ding, Jin-Hua; Wang, De-Quan; Jin, Ren-Cheng
(2015). "An overview of performance trade-off mechanisms in routing protocol for green
wireless sensor networks". Wireless Networks. 22 (1): 135–157. doi:10.1007/s11276-015-
0960-x (https://doi.org/10.1007%2Fs11276-015-0960-x). ISSN 1022-0038 (https://www.worl
dcat.org/issn/1022-0038). S2CID 34505498 (https://api.semanticscholar.org/CorpusID:3450
5498).
29. Chen, Chen; Jafari, Roozbeh; Kehtarnavaz, Nasser (2015). "A survey of depth and inertial
sensor fusion for human action recognition". Multimedia Tools and Applications. 76 (3):
4405–4425. doi:10.1007/s11042-015-3177-1 (https://doi.org/10.1007%2Fs11042-015-3177-
1). ISSN 1380-7501 (https://www.worldcat.org/issn/1380-7501). S2CID 18112361 (https://ap
i.semanticscholar.org/CorpusID:18112361).
30. Banovic, Nikola; Buzali, Tofi; Chevalier, Fanny; Mankoff, Jennifer; Dey, Anind K. (2016).
"Modeling and Understanding Human Routine Behavior". Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems - CHI '16. pp. 248–260.
doi:10.1145/2858036.2858557 (https://doi.org/10.1145%2F2858036.2858557).
ISBN 9781450333627. S2CID 872756 (https://api.semanticscholar.org/CorpusID:872756).
31. Maria, Aileni Raluca; Sever, Pasca; Carlos, Valderrama (2015). "Biomedical sensors data
fusion algorithm for enhancing the efficiency of fault-tolerant systems in case of wearable
electronics device". 2015 Conference Grid, Cloud & High Performance Computing in
Science (ROLCG). pp. 1–4. doi:10.1109/ROLCG.2015.7367228 (https://doi.org/10.1109%2F
ROLCG.2015.7367228). ISBN 978-6-0673-7040-9. S2CID 18782930 (https://api.semanticsc
holar.org/CorpusID:18782930).
32. Gross, Jason; Yu Gu; Matthew Rhudy; Srikanth Gururajan; Marcello Napolitano (July 2012).
"Flight Test Evaluation of Sensor Fusion Algorithms for Attitude Estimation". IEEE
Transactions on Aerospace and Electronic Systems. 48 (3): 2128–2139.
Bibcode:2012ITAES..48.2128G (https://ui.adsabs.harvard.edu/abs/2012ITAES..48.2128G).
doi:10.1109/TAES.2012.6237583 (https://doi.org/10.1109%2FTAES.2012.6237583).
S2CID 393165 (https://api.semanticscholar.org/CorpusID:393165).
33. Joshi, V., Rajamani, N., Takayuki, K., Prathapaneni, N., Subramaniam, L. V. (2013).
Information Fusion Based Learning for Frugal Traffic State Sensing. Proceedings of the
Twenty-Third International Joint Conference on Artificial Intelligence.
34. Mircea Paul, Muresan; Ion, Giosan; Sergiu, Nedevschi (2020-02-18). "Stabilization and
Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic
Segmentation" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7070899). Sensors. 20 (4):
1110. Bibcode:2020Senso..20.1110M (https://ui.adsabs.harvard.edu/abs/2020Senso..20.11
10M). doi:10.3390/s20041110 (https://doi.org/10.3390%2Fs20041110). PMC 7070899 (http
s://www.ncbi.nlm.nih.gov/pmc/articles/PMC7070899). PMID 32085608 (https://pubmed.ncbi.
nlm.nih.gov/32085608).
35. Ran, Lingyan; Zhang, Yanning; Wei, Wei; Zhang, Qilin (2017-10-23). "A Hyperspectral
Image Classification Framework with Spatial Pixel Pair Features" (https://www.ncbi.nlm.nih.
gov/pmc/articles/PMC5677443). Sensors. 17 (10): 2421. Bibcode:2017Senso..17.2421R (htt
ps://ui.adsabs.harvard.edu/abs/2017Senso..17.2421R). doi:10.3390/s17102421 (https://doi.
org/10.3390%2Fs17102421). PMC 5677443 (https://www.ncbi.nlm.nih.gov/pmc/articles/PM
C5677443). PMID 29065535 (https://pubmed.ncbi.nlm.nih.gov/29065535).
External links
Discriminant Correlation Analysis (DCA) (https://github.com/mhaghighat/dcaFuse)[1]
International Society of Information Fusion (http://www.isif.org/)
1. Haghighat, Mohammad; Abdel-Mottaleb, Mohamed; Alhalabi, Wadee (2016). "Discriminant
Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition"
(https://zenodo.org/record/889881). IEEE Transactions on Information Forensics and
Security. 11 (9): 1984–1996. doi:10.1109/TIFS.2016.2569061 (https://doi.org/10.1109%2FTI
FS.2016.2569061). S2CID 15624506 (https://api.semanticscholar.org/CorpusID:15624506).