Multimodal Data Fusion of Spatial Fields in Sensor Networks

Publication Date: 13 January 2020
P. Zhang, G. W. Peters, I. Nevat, K. B. Teo and Y. Wang, "Multimodal Data Fusion of Spatial Fields in Sensor Networks," 2019 IEEE SENSORS, Montreal, QC, Canada, 2019, pp. 1-4, doi: 10.1109/SENSORS43011.2019.8956540.
We develop a robust data fusion algorithm for field reconstruction of multiple physical phenomena. The contribution of this paper is twofold. First, we demonstrate how multiple spatial fields, each with arbitrary marginal distributions and complex mutual dependence structures, can be constructed. Second, we develop an efficient and robust linear estimation algorithm that predicts the mean behavior of the physical phenomena using rank correlation instead of the conventional linear Pearson correlation. Our approach avoids deriving an intractable predictive posterior distribution and admits a tractable solution for the rank correlation values. We show that our model outperforms a model based on the conventional linear Pearson correlation metric in terms of prediction mean squared error (MSE). This motivates the use of our models for multimodal data fusion.
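The abstract's key point is that rank correlation captures monotone dependence between fields even when their marginal distributions are non-Gaussian, whereas Pearson correlation is distorted by nonlinear marginal transforms. The paper's own estimator is not reproduced here; the following is a minimal self-contained sketch illustrating that robustness, computing Spearman rank correlation (Pearson correlation of the ranks) on synthetic data where one field is a monotone nonlinear transform of the other:

```python
import numpy as np

def pearson(x, y):
    """Conventional linear Pearson correlation coefficient."""
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

def spearman(x, y):
    """Rank (Spearman) correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return pearson(rx, ry)

rng = np.random.default_rng(0)
x = rng.normal(size=500)          # a Gaussian "field" observation
y = np.exp(3.0 * x)               # monotone but heavily non-Gaussian transform

rho_p = pearson(x, y)             # distorted by the heavy-tailed marginal
rho_s = spearman(x, y)            # invariant to the monotone transform (= 1 here)
```

Because `exp` is strictly increasing, the ranks of `y` equal the ranks of `x`, so the rank correlation is exactly 1 while the Pearson coefficient is pulled well below it by the heavy tail; this invariance to marginal transformations is what makes rank-based dependence measures attractive for fusing fields with arbitrary marginals.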
Funding Info:
This project is supported by the Government Technology Agency of Singapore (GovTech) and the National Research Foundation (NRF) under Translational R&D Grant (TRANS Grant) initiative (NRF2016IDM-TRANS001-062).
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.