SOTAVerified

Audio captioning

Audio captioning is the task of describing audio in natural language. The typical approach pairs an audio encoder (e.g. PANNs, CAV-MAE) with a text decoder (e.g. a Transformer) that generates the caption. Caption quality is commonly judged with machine-translation metrics (BLEU, METEOR, ROUGE) and image-captioning metrics (SPICE, CIDEr), although these are not well suited to audio; metrics based on pretrained language models, such as Sentence-BERT similarity, have been proposed as alternatives.
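The encoder-decoder approach described above can be sketched as follows. This is a minimal, illustrative PyTorch toy, not any specific published system: real models use a pretrained audio encoder such as PANNs or CAV-MAE and a much larger decoder, and the layer sizes and vocabulary here are arbitrary placeholder values.

```python
import torch
import torch.nn as nn

class AudioCaptioner(nn.Module):
    """Toy audio-captioning model: encode audio frames, decode text tokens."""
    def __init__(self, n_mels=64, d_model=128, vocab_size=1000):
        super().__init__()
        # "Audio encoder": projects log-mel frames into the model dimension.
        # (Stand-in for a pretrained encoder like PANNs or CAV-MAE.)
        self.encoder = nn.Sequential(nn.Linear(n_mels, d_model), nn.ReLU())
        # Transformer decoder cross-attends over the encoded audio frames.
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, mel, tokens):
        # mel: (batch, frames, n_mels); tokens: (batch, seq) caption prefix
        memory = self.encoder(mel)
        out = self.decoder(self.embed(tokens), memory)
        return self.lm_head(out)  # (batch, seq, vocab_size) next-token logits

model = AudioCaptioner()
logits = model(torch.randn(2, 100, 64), torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 1000])
```

At inference time, captions would be produced by feeding the decoder its own previous outputs autoregressively (greedy or beam search) rather than a ground-truth prefix.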

Papers

Showing 51–75 of 119 papers

| Title | Status | Hype |
| --- | --- | --- |
| RECAP: Retrieval-Augmented Audio Captioning | Code | 1 |
| Audio Difference Learning for Audio Captioning | — | 0 |
| Training Audio Captioning Models without Audio | Code | 1 |
| Parameter Efficient Audio Captioning With Faithful Guidance Using Audio-text Shared Latent Representation | — | 0 |
| Generating Realistic Images from In-the-wild Sounds | — | 0 |
| Killing two birds with one stone: Can an audio captioning system also be used for audio-text retrieval? | — | 0 |
| Audio Difference Captioning Utilizing Similarity-Discrepancy Disentanglement | Code | 0 |
| Rethinking Transfer and Auxiliary Learning for Improving Audio Captioning Transformer | — | 0 |
| Improving Audio Caption Fluency with Automatic Error Correction | — | 0 |
| Crowdsourcing and Evaluating Text-Based Audio Retrieval Relevances | Code | 0 |
| Dual Transformer Decoder based Features Fusion Network for Automated Audio Captioning | — | 0 |
| VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset | Code | 2 |
| Pengi: An Audio Language Model for Audio Tasks | Code | 2 |
| A Whisper transformer for audio captioning trained with synthetic captions and transfer learning | Code | 1 |
| VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset | Code | 2 |
| Efficient Audio Captioning Transformer with Patchout and Text Guidance | — | 0 |
| Prefix tuning for automated audio captioning | Code | 1 |
| WavCaps: A ChatGPT-Assisted Weakly-Labelled Audio Captioning Dataset for Audio-Language Multimodal Research | Code | 2 |
| Towards Generating Diverse Audio Captions via Adversarial Training | — | 0 |
| Impact of visual assistance for automated audio captioning | — | 0 |
| Diversity and bias in audio captioning datasets | — | 0 |
| Is my automatic audio captioning system so bad? spider-max: a metric to consider several caption candidates | Code | 1 |
| Investigations in Audio Captioning: Addressing Vocabulary Imbalance and Evaluating Suitability of Language-Centric Performance Metrics | — | 0 |
| Exploring Train and Test-Time Augmentations for Audio-Language Learning | — | 0 |
| Visually-Aware Audio Captioning With Adaptive Audio-Visual Attention | Code | 1 |
Page 3 of 5

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VAST | CIDEr | 0.78 | — | Unverified |
| 2 | VALOR | CIDEr | 0.74 | — | Unverified |
| 3 | MQ-Cap | SPIDEr | 0.52 | — | Unverified |
| 4 | SLAM-AAC | SPIDEr | 0.52 | — | Unverified |
| 5 | LAVCap | SPIDEr | 0.52 | — | Unverified |
| 6 | EnCLAP++-large | SPIDEr | 0.51 | — | Unverified |
| 7 | AutoCap | SPIDEr | 0.51 | — | Unverified |
| 8 | LOAE | SPIDEr | 0.51 | — | Unverified |
| 9 | EnCLAP++-base | SPIDEr | 0.50 | — | Unverified |
| 10 | EnCLAP-large | SPIDEr | 0.50 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | VAST | CIDEr | 0.52 | — | Unverified |
| 2 | VALOR | CIDEr | 0.42 | — | Unverified |
| 3 | SLAM-AAC | SPIDEr | 0.33 | — | Unverified |
| 4 | LOAE | SPIDEr | 0.33 | — | Unverified |
| 5 | MQ-Cap | SPIDEr | 0.32 | — | Unverified |
| 6 | Ensemble | SPIDEr | 0.32 | — | Unverified |
| 7 | Audio Flamingo (Pengi trainset) | SPIDEr | 0.31 | — | Unverified |
| 8 | Ensemble-RL | SPIDEr | 0.30 | — | Unverified |
| 9 | Qwen-Audio | SPIDEr | 0.29 | — | Unverified |
| 10 | Ensemble | SPIDEr | 0.21 | — | Unverified |
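SPIDEr, the metric used by most rows above, is the arithmetic mean of SPICE (semantic fidelity) and CIDEr (consensus with reference captions). A minimal sketch of the combination, with placeholder scores rather than values from the tables:

```python
def spider(spice: float, cider: float) -> float:
    """SPIDEr = mean of SPICE and CIDEr per-caption scores."""
    return (spice + cider) / 2.0

# Placeholder scores for illustration, not taken from the leaderboard.
print(spider(0.3, 0.7))  # 0.5
```

The averaging is meant to balance SPICE, which rewards semantically correct scene-graph content, against CIDEr, which rewards n-gram consensus and fluency.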