Liu, S., Chen, Z., Wu, M., Wang, H., Xing, B., & Chen, L. (2023). Generalizing Wireless Cross-Multiple-Factor Gesture Recognition to Unseen Domains. IEEE Transactions on Mobile Computing, 1–14. https://doi.org/10.1109/TMC.2023.3301501
Abstract:
Cross-domain wireless sensing has long been challenging because wireless signals are sensitive to various environmental factors, which we refer to as subdomains. Current efforts, however, are limited to cross-one-subdomain tasks that require target-domain data for model training or multiple receivers for data collection. Taking gesture recognition as a representative application, we demonstrate the feasibility of cross-multiple-subdomain wireless sensing in the more challenging domain generalization (DG) setting: domain-invariant features can be extracted from one or several source domains, avoiding the need for multiple receivers or target-domain data. We further propose WiSGP, an intelligent wireless data augmentation technique based on subdomain-guided perturbations. Specifically, an independent domain model generates perturbations in the direction of the largest subdomain variations. These subdomain-guided perturbations then augment the gesture model's input, enabling better domain-invariant feature extraction even when multiple subdomains interact. Symmetrically, gesture-guided perturbations augment the domain model's input, yielding more accurate subdomain-guided perturbations with minimal change to gesture labels. Extensive experiments were conducted on three datasets collected with different NICs. Across room, location, orientation, and user subdomains, WiSGP exhibits excellent accuracy, generalizability, and portability on both cross-one-subdomain and cross-multiple-subdomain tasks.
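The perturbation scheme the abstract describes, stepping an input in the direction of the largest subdomain variation as judged by a separate domain model, resembles gradient-sign (FGSM-style) augmentation. A minimal sketch in plain Python, with all names hypothetical and a toy linear domain model standing in for the paper's neural networks:

```python
# Hypothetical sketch of subdomain-guided perturbation (not the paper's
# actual implementation). The "domain model" is a toy linear scorer
# w . x, whose gradient with respect to the input x is simply w, so
# stepping x along sign(w) moves it toward the largest subdomain
# variation, with step size eps.

def sign(v):
    # Return -1, 0, or 1 depending on the sign of v.
    return (v > 0) - (v < 0)

def subdomain_guided_perturbation(x, w, eps=0.1):
    """Augment input x by stepping along the sign of the domain
    model's input gradient (for a linear scorer, that is w itself)."""
    return [xi + eps * sign(wi) for xi, wi in zip(x, w)]

x = [0.5, -0.2, 0.0]   # toy CSI feature vector (hypothetical)
w = [1.0, -2.0, 0.0]   # toy domain-model weights (hypothetical)
x_aug = subdomain_guided_perturbation(x, w, eps=0.1)
```

In the paper's setting the augmented input `x_aug` would then be fed to the gesture model so that it learns features invariant to the subdomain shift; the symmetric gesture-guided perturbation of the domain model's input follows the same pattern with the roles of the two models swapped.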
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the A*STAR Career Development Award, Grant Reference No. C210112046.
This work was supported in part by the National Natural Science Foundation of China under Grant 62072319, in part by the Sichuan Science and Technology Program under Grants 2023YFQ0022 and 2022YFG0041, in part by the Luzhou Science and Technology Innovation R&D Program under Grant 2022CDLZ-6, and in part by the (Sichuan University-Luzhou) Scientific and Technological Innovation R&D Project under Grant 2021CDLZ-11.