Xiang Zhang and 7 more

Recent years have witnessed growing interest in Wi-Fi-based gesture recognition. However, existing works have predominantly focused on closed-set paradigms, where all testing gestures are predefined during training. This poses a significant challenge in real-world applications, as unseen gestures may be misclassified as known classes during testing. To address this issue, we propose WiOpen, a robust Wi-Fi-based Open-Set Gesture Recognition (OSGR) framework. Implementing OSGR requires addressing challenges caused by the unique uncertainty in Wi-Fi sensing. This uncertainty, which stems from noise and domain variation, leads to widely scattered and irregular distributions in collected Wi-Fi sensing data. Consequently, ambiguity arises between classes, and defining appropriate decision boundaries for identifying unknowns becomes difficult. To tackle these challenges, WiOpen takes a twofold approach to eliminating uncertainty and defining precise decision boundaries. First, it suppresses noise-induced uncertainty during data preprocessing by using the CSI ratio. It then builds the OSGR network around an uncertainty quantification method; throughout training, this network mitigates uncertainty stemming from domain variation. Finally, the network leverages relationships among each sample's neighbors to dynamically define open-set decision boundaries, realizing OSGR. Comprehensive experiments on publicly accessible datasets confirm WiOpen's effectiveness. Code is available at https://github.com/purpleleaves007/WiOpen.
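The two ingredients named in the abstract, CSI-ratio preprocessing and a neighbor-based open-set decision rule, can be sketched roughly as follows. This is an illustrative approximation, not WiOpen's actual network: the rejection rule, the Euclidean distance metric, and the `margin` factor are all assumptions made for the sketch.

```python
import numpy as np

def csi_ratio(csi_a, csi_b, eps=1e-9):
    """Complex ratio of CSI from two receive antennas.

    Antennas on the same NIC share the same time-varying random phase
    offset, so dividing their complex CSI cancels that offset and
    suppresses phase noise before recognition.
    """
    return csi_a / (csi_b + eps)

def open_set_predict(query, feats, labels, k=3, margin=2.0):
    """Illustrative neighbor-based open-set decision (`margin` is an
    assumed hyperparameter, not taken from the paper).

    The query is rejected as unknown (-1) when its nearest-neighbor
    distance exceeds `margin` times the local scale of its k nearest
    training samples; otherwise the neighbors vote on a known label.
    """
    d = np.linalg.norm(feats - query, axis=1)
    idx = np.argsort(d)[:k]
    neigh = feats[idx]
    # Local scale: mean pairwise distance among the k nearest samples.
    pairs = [np.linalg.norm(neigh[i] - neigh[j])
             for i in range(k) for j in range(i + 1, k)]
    scale = np.mean(pairs)
    if d[idx[0]] > margin * scale:
        return -1  # unknown gesture
    vals, counts = np.unique(labels[idx], return_counts=True)
    return int(vals[np.argmax(counts)])
```

Because the threshold is derived from each query's own neighborhood rather than fixed globally, the decision boundary adapts to how tightly each gesture class clusters, which is the intuition behind neighbor-driven open-set boundaries.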

Xiang Zhang and 4 more

Vital sign (breathing and heartbeat) monitoring is essential for patient care and sleep disease prevention. Most current solutions rely on wearable sensors or cameras; however, the former can affect sleep quality, while the latter raise privacy concerns. To address these shortcomings, we propose Wital, a contactless vital sign monitoring system based on low-cost, widely available commercial off-the-shelf (COTS) Wi-Fi devices. Two challenges must be overcome. First, the torso deformations caused by breathing and heartbeats are weak; how can such deformations be captured effectively? Second, movements such as turning over degrade the accuracy of vital sign monitoring; how can such detrimental effects be avoided? For the former, we propose a non-line-of-sight (NLOS) sensing model, built on the Ricean K factor, that characterizes the relationship between the energy ratio of line-of-sight (LOS) to NLOS signals and vital sign monitoring capability, and we use this model to guide system construction so that the deformations caused by breathing and heartbeats are better captured. For the latter, we propose a motion segmentation method based on motion regularity detection that accurately distinguishes respiration from other motions, and we remove periods containing movements such as turning over to eliminate their detrimental effects. We have implemented and validated Wital on low-cost COTS devices, and the experimental results demonstrate its effectiveness in monitoring vital signs.
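The two building blocks mentioned above can be sketched as follows. Both pieces are stand-ins, not the paper's actual method: the LOS-to-NLOS energy ratio is estimated with the standard moment-based Ricean K-factor estimator, and motion regularity is approximated by a simple autocorrelation test whose threshold and 0.1–0.5 Hz breathing band are illustrative choices.

```python
import numpy as np

def ricean_k_factor(envelope):
    """Moment-based estimate of the Ricean K factor, i.e. the ratio of
    LOS (specular) power to NLOS (scattered) power, from a received
    signal envelope. Uses the classical second/fourth-moment estimator."""
    p = envelope ** 2
    gamma = np.var(p) / np.mean(p) ** 2
    s = np.sqrt(max(1.0 - gamma, 0.0))  # s = A^2 / (A^2 + 2*sigma^2)
    return s / (1.0 - s) if s < 1.0 else np.inf

def is_respiration_like(x, fs, f_lo=0.1, f_hi=0.5, thresh=0.6):
    """Assumed regularity test: a strong normalized autocorrelation
    peak at a lag inside the plausible breathing-period range marks
    the segment as periodic (respiration) rather than body motion."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]
    lo, hi = int(fs / f_hi), int(fs / f_lo)  # lags for f_hi..f_lo Hz
    return bool(ac[lo:hi].max() > thresh)
```

Segments failing the regularity test would be discarded before breathing/heartbeat extraction, mirroring the idea of removing turning-over periods; a larger estimated K factor indicates a stronger LOS component relative to scattered paths.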