Slow-Fast Auditory Streams For Audio Recognition
2021-03-05 · Code Available
Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, Dima Damen
- github.com/ekazakos/auditory-slow-fast (official, in paper) — PyTorch, ★ 73
- github.com/porcelluscavia/audio-model — PyTorch, ★ 3
Abstract
We propose a two-stream convolutional network for audio recognition that operates on time-frequency spectrogram inputs. Following similar success in visual recognition, we learn Slow-Fast auditory streams with separable convolutions and multi-level lateral connections. The Slow pathway has high channel capacity, while the Fast pathway operates at a fine-grained temporal resolution. We showcase the importance of our two-stream proposal on two diverse datasets, VGG-Sound and EPIC-KITCHENS-100, and achieve state-of-the-art results on both.
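The core idea can be sketched in a few lines of PyTorch: a high-capacity Slow stream that downsamples time, a lightweight Fast stream that preserves temporal resolution, and a time-strided lateral connection that fuses Fast features into the Slow stream. All layer names, channel counts, and strides below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TwoStreamAudioNet(nn.Module):
    """Illustrative Slow-Fast sketch (not the paper's exact model):
    Slow = many channels, coarse time; Fast = few channels, fine time;
    a strided lateral conv fuses Fast features onto Slow's time grid."""

    def __init__(self, n_classes=10):
        super().__init__()
        # Slow: temporal stride 4 -> coarse time, high channel capacity
        self.slow = nn.Conv2d(1, 64, kernel_size=7, stride=(4, 2), padding=3)
        # Fast: temporal stride 1 -> fine time, low channel capacity
        self.fast = nn.Conv2d(1, 8, kernel_size=7, stride=(1, 2), padding=3)
        # Lateral: time-strided conv maps Fast features onto Slow's grid
        self.lateral = nn.Conv2d(8, 16, kernel_size=(5, 1),
                                 stride=(4, 1), padding=(2, 0))
        self.head = nn.Linear(64 + 16, n_classes)

    def forward(self, spec):            # spec: (B, 1, time, freq) spectrogram
        s = self.slow(spec)             # (B, 64, T/4, F/2)
        f = self.fast(spec)             # (B, 8,  T,   F/2)
        s = torch.cat([s, self.lateral(f)], dim=1)  # fuse on Slow's grid
        pooled = s.mean(dim=(2, 3))     # global average pool
        return self.head(pooled)

model = TwoStreamAudioNet()
logits = model(torch.randn(2, 1, 400, 128))  # batch of 2 spectrograms
print(logits.shape)  # torch.Size([2, 10])
```

In the full model each stream is a deep residual network and lateral connections are applied at multiple levels, but the fusion mechanism is the same: stride the Fast features in time so they align with the Slow features, then concatenate along channels.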
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| EPIC-SOUNDS | Slow-Fast (fine-tuned by the Fivewin team) | Top-1 accuracy (%) | 55.11 | — | Unverified |