Vi-Fi: Associating Moving Subjects across Vision and Wireless Sensors

2022-07-18 · ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN) 2022 · Code Available

Hansi Liu, Abrar Alali, Mohamed Ibrahim, Bryan Bo Cao, Nicholas Meegan, Hongyu Li, Marco Gruteser, Shubham Jain, Kristin Dana, Ashwin Ashok, Bin Cheng, HongSheng Lu


Abstract

In this paper, we present Vi-Fi, a multi-modal system that leverages a user's smartphone WiFi Fine Timing Measurements (FTM) and inertial measurement unit (IMU) sensor data to associate the user detected in camera footage with their corresponding smartphone identifier (e.g., WiFi MAC address). Our approach uses a recurrent multi-modal deep neural network that exploits FTM and IMU measurements, along with the distance between user and camera (depth information), to learn affinity matrices. As a baseline method for comparison, we also present a traditional non-deep-learning approach that uses bipartite graph matching. To facilitate evaluation, we collected a multi-modal dataset that comprises camera videos with depth information (RGB-D), WiFi FTM, and IMU measurements for multiple participants in diverse real-world settings. Using association accuracy as the key metric for evaluating the fidelity of Vi-Fi in associating human users on camera feed with their phone IDs, we show that Vi-Fi achieves between 81% (real-time) and 91% (offline) association accuracy.

[Figure 1: Motivation: Successfully associating vision-wireless …]
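The association step described in the abstract lends itself to a short sketch. The following is a minimal, hypothetical illustration (not the authors' code) of the bipartite-matching idea: given an affinity matrix between camera-detected subjects and smartphone identifiers, recover a one-to-one assignment with the Hungarian algorithm. The `associate` helper, array shapes, and toy values are assumptions for illustration; the affinity scores could come from FTM/IMU/depth similarity (the baseline) or from the learned model.

```python
# Hypothetical sketch of affinity-based association, assuming an affinity
# matrix has already been computed. Not the authors' implementation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(affinity: np.ndarray) -> list[tuple[int, int]]:
    """Match visual detections (rows) to phone IDs (columns).

    affinity[i, j] scores how well detection i matches phone j.
    """
    # linear_sum_assignment minimizes cost, so negate to maximize affinity.
    rows, cols = linear_sum_assignment(-affinity)
    return list(zip(rows, cols))

# Toy example: 3 detections, 3 phones, with a clear diagonal association.
affinity = np.array([
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.1],
    [0.1, 0.3, 0.7],
])
print(associate(affinity))  # [(0, 0), (1, 1), (2, 2)]
```

In this framing, the deep-learning variant and the baseline differ only in how the affinity matrix is produced; the matching step on top of it can remain the same.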
