Contrastive Audio-Visual Masked Autoencoder

2022-10-02 · Code Available

Yuan Gong, Andrew Rouditchenko, Alexander H. Liu, David Harwath, Leonid Karlinsky, Hilde Kuehne, James Glass


Abstract

In this paper, we first extend the recent Masked Auto-Encoder (MAE) model from a single modality to audio-visual multi-modalities. Subsequently, we propose the Contrastive Audio-Visual Masked Auto-Encoder (CAV-MAE) by combining contrastive learning and masked data modeling, two major self-supervised learning frameworks, to learn a joint and coordinated audio-visual representation. Our experiments show that the contrastive audio-visual correspondence learning objective not only enables the model to perform audio-visual retrieval tasks, but also helps the model learn a better joint representation. As a result, our fully self-supervised pretrained CAV-MAE achieves a new SOTA accuracy of 65.9% on VGGSound, and is comparable with the previous best supervised pretrained model on AudioSet in the audio-visual event classification task. Code and pretrained models are at https://github.com/yuangongnd/cav-mae.
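To make the combined objective concrete, below is a minimal PyTorch-style sketch of the two losses the abstract describes: a symmetric InfoNCE-style contrastive term over paired audio/video embeddings, plus an MAE-style reconstruction term on masked patches. This is not the authors' implementation (see the linked repository for that); the function names, the temperature, and the weighting coefficient `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb, video_emb, temperature=0.05):
    # InfoNCE-style audio-visual correspondence loss over a batch:
    # matched audio/video clips are positives, all other pairings negatives.
    audio_emb = F.normalize(audio_emb, dim=-1)   # (B, D)
    video_emb = F.normalize(video_emb, dim=-1)   # (B, D)
    logits = audio_emb @ video_emb.t() / temperature   # (B, B) similarities
    targets = torch.arange(audio_emb.size(0), device=audio_emb.device)
    # Symmetric: audio-to-video and video-to-audio retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def masked_reconstruction_loss(pred, target, mask):
    # MAE-style mean squared error, computed only on masked patches.
    # `mask` is a float tensor of shape (B, N) with 1s at masked positions.
    per_patch = (pred - target).pow(2).mean(dim=-1)    # (B, N)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)

def cav_mae_loss(audio_emb, video_emb, pred, target, mask, lam=0.01):
    # Combined objective: masked data modeling plus the contrastive
    # audio-visual term, weighted by the (assumed) coefficient `lam`.
    return masked_reconstruction_loss(pred, target, mask) + \
           lam * contrastive_loss(audio_emb, video_emb)
```

Per the abstract, the contrastive term is what makes audio-visual retrieval possible, while combining it with masked reconstruction yields the stronger joint representation.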

Benchmark Results

| Dataset  | Model                  | Metric         | Claimed | Verified | Status     |
|----------|------------------------|----------------|---------|----------|------------|
| AudioSet | CAV-MAE (Audio-Only)   | Test mAP       | 0.47    | —        | Unverified |
| AudioSet | CAV-MAE (Audio-Visual) | Test mAP       | 0.51    | —        | Unverified |
| AudioSet | CAV-MAE (Visual-Only)  | Test mAP       | 0.26    | —        | Unverified |
| VGGSound | CAV-MAE (Audio-Only)   | Top-1 Accuracy | 59.5    | —        | Unverified |
| VGGSound | CAV-MAE (Audio-Visual) | Top-1 Accuracy | 65.9    | —        | Unverified |
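Verifying these numbers amounts to recomputing the two metrics from model predictions on the test sets. A minimal sketch with scikit-learn, assuming predictions and labels as NumPy arrays (function names are illustrative):

```python
import numpy as np
from sklearn.metrics import average_precision_score

def mean_average_precision(y_true, y_score):
    # AudioSet is multi-label: mAP is the mean of per-class average
    # precision, taken over classes with at least one positive example.
    # y_true: (num_samples, num_classes) binary; y_score: same shape, floats.
    aps = [average_precision_score(y_true[:, c], y_score[:, c])
           for c in range(y_true.shape[1]) if y_true[:, c].any()]
    return float(np.mean(aps))

def top1_accuracy(y_true, y_score):
    # VGGSound is single-label: a sample counts as correct when its
    # highest-scoring class matches the ground-truth label index.
    # y_true: (num_samples,) int labels; y_score: (num_samples, num_classes).
    return float((y_score.argmax(axis=1) == y_true).mean())
```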
