A Survey on Speech Large Language Models
Jing Peng, Yucheng Wang, Yangui Fang, Yu Xi, Xu Li, Xizhuo Zhang, Kai Yu
Abstract
Large Language Models (LLMs) exhibit strong contextual understanding and remarkable multitask performance. As a result, researchers have been actively exploring the integration of LLMs into the domain of speech understanding, with a primary focus on a broad range of speech-to-text tasks, including automatic speech recognition (ASR), speech-to-text translation (ST), speech emotion recognition (SER), and others. We refer to such models as Speech LLMs, which are typically built on a unified architecture that follows the pipeline of Audio Feature Extraction -> Multimodal Information Fusion -> LLM Inference. This approach enables richer audio feature extraction while facilitating end-to-end fusion of the audio and text modalities, thereby achieving deeper understanding and reasoning over audio data. This paper traces the development of Speech LLMs and offers an in-depth analysis of their system architectures. Through extensive research and a series of targeted experiments, it assesses the progress of Speech LLMs and their potential for cross-task integration within the speech understanding field. It also highlights key challenges identified through experimentation, such as the dormancy of LLMs under certain conditions. Finally, the paper explores training strategies for Speech LLMs, proposes potential solutions based on these findings, and offers insights and references for future research.
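The three-stage pipeline named in the abstract can be sketched in code. The sketch below is a toy illustration under assumed interfaces, not any specific model from the survey: all function names, the averaging "projection", and the prepend-audio-to-prompt fusion scheme are hypothetical stand-ins chosen for brevity.

```python
# Toy sketch of the Speech LLM pipeline:
# Audio Feature Extraction -> Multimodal Information Fusion -> LLM Inference.
# Every component here is a hypothetical placeholder, not a real model.

from typing import List

Vector = List[float]


def extract_audio_features(waveform: Vector, frame_size: int = 4) -> List[Vector]:
    """Stage 1: turn a raw waveform into frame-level feature vectors.
    (A real system would use a pretrained speech encoder.)"""
    return [waveform[i:i + frame_size] for i in range(0, len(waveform), frame_size)]


def fuse_modalities(audio_feats: List[Vector], text_embeds: List[Vector]) -> List[Vector]:
    """Stage 2: project audio features into the LLM embedding space and
    prepend them to the text-prompt embeddings (one common fusion scheme)."""
    projected = [[sum(frame) / len(frame)] for frame in audio_feats]  # toy 1-D projection
    return projected + text_embeds


def llm_inference(fused_sequence: List[Vector]) -> str:
    """Stage 3: the LLM consumes the fused sequence and decodes text.
    Here we only report the sequence length as a placeholder."""
    return f"decoded from {len(fused_sequence)} fused tokens"


if __name__ == "__main__":
    waveform = [0.1, 0.2, -0.1, 0.0, 0.3, 0.1, -0.2, 0.05]
    text_prompt_embeds = [[1.0], [0.5]]  # toy embeddings for a text prompt
    feats = extract_audio_features(waveform)   # 2 frames of 4 samples
    fused = fuse_modalities(feats, text_prompt_embeds)
    print(llm_inference(fused))
```

In real Speech LLMs, stage 1 is a speech encoder, stage 2 is a learned adapter or projector that maps audio features into the LLM's token-embedding space, and stage 3 is the frozen or fine-tuned LLM decoding text end to end.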