ATST: Audio Representation Learning with Teacher-Student Transformer
Xian Li, Xiaofei Li
- github.com/Audio-WestlakeU/audiossl/tree/main/audiossl/methods/atst (official, PyTorch, ★ 0)
- github.com/Audio-WestlakeU/ATST-SED (PyTorch, ★ 161)
- github.com/2024-MindSpore-1/Code6/tree/main/ats (MindSpore, ★ 0)
- github.com/2023-MindSpore-4/Code8/tree/main/ats (MindSpore, ★ 0)
Abstract
Self-supervised learning (SSL) learns knowledge from a large amount of unlabeled data, and then transfers that knowledge to a specific problem with a limited amount of labeled data. SSL has achieved promising results in various domains. This work addresses the problem of segment-level general audio SSL, and proposes a new transformer-based teacher-student SSL model, named ATST. A transformer encoder is built on a recently emerged teacher-student baseline scheme, which largely improves the modeling capability of pre-training. In addition, a new strategy for positive pair creation is designed to fully leverage the capability of the transformer. Extensive experiments have been conducted, and the proposed model achieves new state-of-the-art results on almost all of the downstream tasks.
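The teacher-student scheme the abstract describes follows the now-common BYOL-style recipe: two augmented "positive" views of the same audio segment are encoded by a student and a teacher network, the student is trained to predict the teacher's output, and the teacher's weights track an exponential moving average (EMA) of the student's. The sketch below illustrates that recipe only; the linear "encoders", variable names, and momentum value are illustrative assumptions, not the paper's actual transformer architecture or hyperparameters.

```python
import numpy as np

def ema_update(teacher, student, momentum=0.99):
    # Teacher parameters track an exponential moving average of the student's;
    # the teacher receives no gradient updates of its own.
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}

def cosine_loss(pred, target):
    # BYOL-style objective: 2 - 2 * cosine similarity, bounded in [0, 4].
    pred = pred / np.linalg.norm(pred)
    target = target / np.linalg.norm(target)
    return 2.0 - 2.0 * float(pred @ target)

rng = np.random.default_rng(0)
segment = rng.standard_normal(64)                   # stand-in for an audio segment
view_a = segment + 0.1 * rng.standard_normal(64)    # two augmented views of the
view_b = segment + 0.1 * rng.standard_normal(64)    # same segment (a positive pair)

# Stand-in linear "encoders"; in ATST these would be transformer encoders.
student = {"w": rng.standard_normal((8, 64))}
teacher = {k: v.copy() for k, v in student.items()}

loss = cosine_loss(student["w"] @ view_a, teacher["w"] @ view_b)
teacher = ema_update(teacher, student, momentum=0.99)
```

In a real training loop, `loss` would be backpropagated through the student only, while `ema_update` runs once per step so the teacher provides a slowly moving target.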
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Balanced Audio Set | Base (ours) | Mean AP | 37.4 | — | Unverified |