
Efficient Autoregressive Audio Modeling via Next-Scale Prediction

2024-08-16 · Code Available

Kai Qiu, Xiang Li, Hao Chen, Jie Sun, Jinglu Wang, Zhe Lin, Marios Savvides, Bhiksha Raj


Abstract

Audio generation has achieved remarkable progress with the advance of sophisticated generative models, such as diffusion models (DMs) and autoregressive (AR) models. However, due to the inherently long sequence length of audio, the efficiency of audio generation remains an essential issue to be addressed, especially for AR models that are incorporated into large language models (LLMs). In this paper, we analyze the token length of audio tokenization and propose a novel Scale-level Audio Tokenizer (SAT) with improved residual quantization. Based on SAT, a scale-level Acoustic AutoRegressive (AAR) modeling framework is further proposed, which shifts next-token AR prediction to next-scale AR prediction, significantly reducing training cost and inference time. To validate the effectiveness of the proposed approach, we comprehensively analyze design choices and demonstrate that the proposed AAR framework achieves a remarkable 35× faster inference speed and +1.33 Fréchet Audio Distance (FAD) against baselines on the AudioSet benchmark. Code: https://github.com/qiuk2/AAR.
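To make the next-scale idea concrete, below is a minimal sketch (in a PyTorch-style setup) of how a scale-level tokenizer might quantize an audio latent into coarse-to-fine token maps via residual quantization. The function name, scale schedule, and shared-codebook nearest-neighbour lookup are illustrative assumptions, not the authors' SAT implementation; see the linked repository for the actual code.

```python
# Illustrative sketch of scale-level residual quantization (assumed setup,
# not the authors' implementation): each scale quantizes the residual left
# over after reconstructing all coarser scales.
import torch
import torch.nn.functional as F

def quantize_by_scales(latent, codebook, scales=(1, 2, 4, 8, 16)):
    """Quantize a 1-D latent sequence into progressively finer token maps.

    latent:   (B, C, T) continuous encoder output for an audio clip.
    codebook: (K, C) codebook shared across scales (an assumption here).
    Returns one (B, s) tensor of token ids per scale.
    """
    B, C, T = latent.shape
    residual = latent
    recon = torch.zeros_like(latent)
    tokens_per_scale = []
    for s in scales:
        # Downsample the current residual to the coarse resolution s.
        coarse = F.interpolate(residual, size=s, mode="linear", align_corners=False)
        # Nearest-neighbour codebook lookup: (B, C, s) -> (B*s, C) -> (B, s) ids.
        flat = coarse.permute(0, 2, 1).reshape(-1, C)
        ids = torch.cdist(flat, codebook).argmin(dim=1).view(B, s)
        tokens_per_scale.append(ids)
        # Dequantize, upsample back to full length, and update the residual.
        quant = codebook[ids].permute(0, 2, 1)                      # (B, C, s)
        recon = recon + F.interpolate(quant, size=T, mode="linear",
                                      align_corners=False)
        residual = latent - recon
    return tokens_per_scale
```

Under this framing, an AAR-style model would predict all tokens of the next scale in one step, conditioned on the coarser scales already generated, so the number of autoregressive steps scales with the number of scales rather than the full token sequence length, which is where the claimed inference speedup comes from.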
