MM-Fi: Multi-Modal Non-Intrusive 4D Human Dataset

School of Electrical and Electronic Engineering, Nanyang Technological University, and School of Informatics, University of Edinburgh

MM-Fi got accepted by NeurIPS 2023!

Visualization of the data from each modality and the corresponding estimations.


4D human perception plays an essential role in a myriad of applications, such as home automation, healthcare, and metaverse avatar simulation. Existing solutions, which rely mainly on cameras and wearable devices, either raise privacy concerns or are inconvenient to use. To address these issues, wireless sensing has emerged, leveraging LiDAR, mmWave radar, and even WiFi signals for device-free human sensing.

In this paper, we propose MM-Fi, the first multi-modal non-intrusive 4D human dataset, with 25 daily and rehabilitation action categories, to bridge the gap between wireless sensors and high-level human perception tasks. Our dataset consists of over 320k synchronized frames across five modalities from 40 human subjects. Various annotations are provided to support potential sensing tasks, e.g., human pose estimation and action recognition. Extensive experiments have been conducted to compare the sensing capacity of individual and combined modalities across multiple tasks. We hope that MM-Fi can contribute to wireless sensing research on action recognition, human pose estimation, multi-modal learning, cross-modal supervision, and interdisciplinary healthcare research.

Sensor Platform


Scale and Modalities


Human Pose Estimation Results

Visualization of the human pose estimation result

We provide the human pose estimation results of each modality at the frame level.
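Frame-level pose estimation results are typically scored with the Mean Per Joint Position Error (MPJPE): the Euclidean distance between each predicted joint and its ground-truth position, averaged over joints and frames. A minimal sketch of this metric is shown below; the 17-joint skeleton and the array shapes are illustrative assumptions, not the dataset's actual format.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error.

    pred, gt: arrays of shape (num_frames, num_joints, 3), in the
    same units as the input coordinates (e.g., millimetres).
    Returns the mean Euclidean distance over all joints and frames.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy example: 2 frames x 17 joints (hypothetical skeleton size),
# with every predicted joint offset by the vector (3, 0, 4),
# i.e., a per-joint error of 5 units.
gt = np.zeros((2, 17, 3))
pred = gt + np.array([3.0, 0.0, 4.0])
print(mpjpe(pred, gt))  # → 5.0
```

A perfect prediction would score 0; lower is better.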


Related Links

If you would like to utilize RGB images, please fill out the MM-Fi Dataset Request Form.


MM-Fi is released under the CC BY-NC 4.0 license.


@article{yang2023mmfi,
      title={MM-Fi: Multi-Modal Non-Intrusive 4D Human Dataset for Versatile Wireless Sensing},
      author={Yang, Jianfei and Huang, He and Zhou, Yunjiao and Chen, Xinyan and Xu, Yuecong
              and Yuan, Shenghai and Zou, Han and Lu, Chris Xiaoxuan and Xie, Lihua},
      journal={arXiv preprint arXiv:2305.10345},
      year={2023}
}