4D human perception plays an essential role in a myriad of applications, such as home automation, healthcare, and metaverse avatar simulation. Existing solutions, which mainly rely on cameras and wearable devices, either raise privacy concerns or are inconvenient to use. To address these issues, wireless sensing has emerged, leveraging LiDAR, mmWave radar, and even WiFi signals for device-free human sensing.
In this paper, we propose MM-Fi, the first multi-modal non-intrusive 4D human dataset with 27 daily or rehabilitation action categories, to bridge the gap between wireless sensors and high-level human perception tasks. Our dataset consists of over 320k synchronized frames of five modalities from 40 human subjects. Various annotations are provided to support potential sensing tasks, e.g., human pose estimation and action recognition. Extensive experiments have been conducted to compare the sensing capability of single and combined modalities across multiple tasks. We hope that MM-Fi can contribute to wireless sensing research on action recognition, human pose estimation, multi-modal learning, cross-modal supervision, and interdisciplinary healthcare research.
We provide frame-level human pose estimation results for each modality.
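As an illustration of how such frame-level results might be evaluated, below is a minimal sketch that computes the mean per-joint position error (MPJPE) between predicted and ground-truth 3D keypoints. It assumes both are stored as NumPy arrays of shape (num_frames, num_joints, 3); the file names are hypothetical and should be adapted to the actual layout of the released results.

```python
import numpy as np

def mpjpe_per_frame(pred, gt):
    """Mean per-joint position error (MPJPE) for each frame.

    pred, gt: arrays of shape (num_frames, num_joints, 3), in metres.
    Returns an array of shape (num_frames,) with the mean joint error per frame.
    """
    assert pred.shape == gt.shape
    # Euclidean distance per joint, averaged over joints for each frame.
    return np.linalg.norm(pred - gt, axis=-1).mean(axis=-1)

# Hypothetical file names -- adjust to the actual files in the dataset release.
pred = np.load("wifi_pose_pred.npy")  # predictions from one modality
gt = np.load("pose_gt.npy")           # ground-truth 3D keypoints
errors = mpjpe_per_frame(pred, gt)
print(f"Mean MPJPE over {errors.shape[0]} frames: {errors.mean() * 1000:.1f} mm")
```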
If you would like to use the RGB images, please fill out the MM-Fi Dataset Request Form.
MM-Fi is released under the CC BY-NC 4.0 license.
@inproceedings{yang2023mm,
  title={MM-Fi: Multi-Modal Non-Intrusive 4D Human Dataset for Versatile Wireless Sensing},
  author={Yang, Jianfei and Huang, He and Zhou, Yunjiao and Chen, Xinyan and Xu, Yuecong and Yuan, Shenghai and Zou, Han and Lu, Chris Xiaoxuan and Xie, Lihua},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2023},
  url={https://openreview.net/forum?id=1uAsASS1th}
}
@article{yang2023mmfi,
  title={MM-Fi: Multi-Modal Non-Intrusive 4D Human Dataset for Versatile Wireless Sensing},
  author={Yang, Jianfei and Huang, He and Zhou, Yunjiao and Chen, Xinyan and Xu, Yuecong and Yuan, Shenghai and Zou, Han and Lu, Chris Xiaoxuan and Xie, Lihua},
  journal={arXiv preprint arXiv:2305.10345},
  year={2023}
}