A Multimodal Anomaly Detector for Robot-Assisted Feeding Using an LSTM-based Variational Autoencoder
Daehyung Park, Yuuna Hoshi, Charles C. Kemp
Code
- github.com/chickenbestlover/RNN-Time-series-Anomaly-Detection (PyTorch, ★ 1,305)
- github.com/freedombenLiu/RNN-Time-series-Anomaly-Detection (PyTorch, ★ 0)
- github.com/timyadnyda/variational-lstm-autoencoder (TensorFlow, ★ 0)
- github.com/freedombenLiu/https-github.com-chickenbestlover-RNN-Time-series-Anomaly-Detection (PyTorch, ★ 0)
- github.com/danyleb/variational-lstm-autoencoder (TensorFlow, ★ 0)
Abstract
The detection of anomalous executions is valuable for reducing potential hazards in assistive manipulation. Multimodal sensory signals can help detect a wide range of anomalies, but fusing high-dimensional, heterogeneous modalities is challenging. We introduce a long short-term memory based variational autoencoder (LSTM-VAE) that fuses multimodal signals and reconstructs their expected distribution, and an LSTM-VAE-based detector that combines a reconstruction-based anomaly score with a state-based threshold. In evaluations on 1,555 robot-assisted feeding executions containing 12 representative types of anomalies, our detector achieved a higher area under the receiver operating characteristic curve (AUC = 0.8710) than 5 baseline detectors from the literature. We also show that multimodal fusion through the LSTM-VAE is effective by comparing a detector using 17 raw sensory signals against one using 4 hand-engineered features.
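As a concrete illustration, below is a minimal PyTorch sketch of this kind of LSTM-VAE detector (PyTorch matches the most-starred linked implementation). It is not the authors' code: the layer sizes, the per-time-step diagonal-Gaussian reconstruction, and the fixed detection threshold at the end are simplifying assumptions; the paper itself uses a state-based threshold that varies over the task rather than a constant.

```python
import torch
import torch.nn as nn

class LSTMVAE(nn.Module):
    """Sketch of an LSTM-based variational autoencoder for multimodal
    time series. Sizes and structure are illustrative assumptions."""
    def __init__(self, n_inputs=17, n_hidden=64, n_latent=5):
        super().__init__()
        self.encoder = nn.LSTM(n_inputs, n_hidden, batch_first=True)
        self.to_mu = nn.Linear(n_hidden, n_latent)
        self.to_logvar = nn.Linear(n_hidden, n_latent)
        self.decoder = nn.LSTM(n_latent, n_hidden, batch_first=True)
        self.to_recon_mu = nn.Linear(n_hidden, n_inputs)
        self.to_recon_logvar = nn.Linear(n_hidden, n_inputs)

    def forward(self, x):
        # x: (batch, time, n_inputs) of fused multimodal signals
        h, _ = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        d, _ = self.decoder(z)
        return self.to_recon_mu(d), self.to_recon_logvar(d), mu, logvar

def anomaly_score(x, recon_mu, recon_logvar):
    # Negative log-likelihood (up to a constant) of the observation under
    # the reconstructed Gaussian: high when the signals deviate from the
    # distribution learned on non-anomalous executions.
    var = torch.exp(recon_logvar)
    nll = 0.5 * (recon_logvar + (x - recon_mu) ** 2 / var)
    return nll.sum(dim=-1)  # per-time-step score, shape (batch, time)

def elbo_loss(x, recon_mu, recon_logvar, mu, logvar):
    # Standard VAE training objective: reconstruction NLL + KL divergence
    # of the approximate posterior from a unit-Gaussian prior.
    nll = anomaly_score(x, recon_mu, recon_logvar).sum()
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return nll + kld

model = LSTMVAE()
x = torch.randn(8, 100, 17)      # 8 executions, 100 steps, 17 raw signals
recon_mu, recon_logvar, mu, logvar = model(x)
score = anomaly_score(x, recon_mu, recon_logvar)
anomalous = score > 3.0          # fixed stand-in for the paper's state-based threshold
```

The model is trained only on non-anomalous executions by minimizing `elbo_loss`, so at test time a large reconstruction-based score signals an execution the model cannot explain.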
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| voraus-AD | LSTM-VAE | Avg. Detection AUROC (%) | 86.7 | — | Unverified |
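For context on the metric, AUROC is threshold-free: it integrates the true- and false-positive rates over every possible detection threshold, so it summarizes a detector independently of any particular operating point. A minimal sketch of computing it from per-execution anomaly scores with scikit-learn; the scores and labels here are hypothetical toy data, not results from the paper:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-execution anomaly scores (e.g., the maximum
# reconstruction-based score over each execution) and ground-truth
# labels (1 = anomalous execution, 0 = normal).
scores = np.array([0.2, 1.7, 0.4, 2.3, 0.1])
labels = np.array([0, 1, 0, 1, 0])

print(roc_auc_score(labels, scores))  # 1.0 for this toy data
```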