Enhanced Sound Event Localization and Detection in Real 360-degree audio-visual soundscapes
2024-01-29
Adrian S. Roman, Baladithya Balamurugan, Rithik Pothuganti
- github.com/aromanusc/soundq (official implementation, PyTorch)
Abstract
This technical report details our work towards building an enhanced audio-visual sound event localization and detection (SELD) network. We build on top of the audio-only SELDnet23 model and adapt it to be audio-visual by merging audio and video information prior to the gated recurrent unit (GRU) of the audio-only network. Our model leverages the YOLO and DETIC object detectors. We also build a framework that implements audio-visual data augmentation and audio-visual synthetic data generation. We deliver an audio-visual SELDnet system that outperforms the existing audio-visual SELD baseline.
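The fusion strategy described above — concatenating audio and video feature sequences before the recurrent stage — can be sketched roughly as follows. This is a minimal illustrative model, not the paper's implementation: the module name, feature dimensions, and output size are assumptions, and the video features stand in for encoded object-detector outputs.

```python
import torch
import torch.nn as nn

class AVFusionGRU(nn.Module):
    """Hypothetical sketch of audio-visual fusion before a GRU.

    Audio and video feature sequences are concatenated along the
    channel axis and passed jointly through a bidirectional GRU,
    mirroring the abstract's description. All sizes are illustrative.
    """

    def __init__(self, audio_dim=64, video_dim=32, hidden_dim=128, n_out=39):
        super().__init__()
        # GRU consumes the concatenated audio+video feature vector per frame
        self.gru = nn.GRU(audio_dim + video_dim, hidden_dim,
                          batch_first=True, bidirectional=True)
        # Linear head mapping GRU states to SELD outputs (size is a placeholder)
        self.head = nn.Linear(2 * hidden_dim, n_out)

    def forward(self, audio_feats, video_feats):
        # audio_feats: (batch, time, audio_dim), e.g. from an audio conv stack
        # video_feats: (batch, time, video_dim), e.g. encoded detector boxes
        fused = torch.cat([audio_feats, video_feats], dim=-1)
        out, _ = self.gru(fused)
        return self.head(out)

model = AVFusionGRU()
audio = torch.randn(2, 50, 64)   # 2 clips, 50 frames
video = torch.randn(2, 50, 32)
pred = model(audio, video)
print(pred.shape)  # torch.Size([2, 50, 39])
```

The key design point is that fusion happens at the frame level, so the GRU can model temporal dependencies over the joint audio-visual representation rather than over each modality separately.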