QUADS: QUAntized Distillation Framework for Efficient Speech Language Understanding
Subrata Biswas, Mohammad Nur Hossain Khan, Bashima Islam
Code: github.com/bashlab/quads
Abstract
Spoken Language Understanding (SLU) systems must balance performance and efficiency, particularly in resource-constrained environments. Existing methods apply distillation and quantization separately, leading to suboptimal compression because distillation ignores quantization constraints. We propose QUADS, a unified framework that optimizes both through multi-stage training with a pre-tuned model, enhancing adaptability to low-bit regimes while maintaining accuracy. QUADS achieves 71.13% accuracy on SLURP and 99.20% on FSC, with only minor degradation (up to 5.56%) relative to state-of-the-art models. It also reduces computational complexity by 60–73× (GMACs) and model size by 83–700×, demonstrating strong robustness under extreme quantization. These results establish QUADS as a highly efficient solution for real-world, resource-constrained SLU applications.
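To make the core idea concrete, here is a minimal NumPy sketch of quantization-aware distillation in the general sense the abstract describes: the student's weights are fake-quantized during training so the distillation loss is computed under the same low-bit constraint the deployed model will face. The function names, the bit width, and the hyperparameters `alpha` (CE/KD mix) and `T` (softmax temperature) are illustrative assumptions, not values from the paper, and the multi-stage schedule and pre-tuned model that QUADS actually uses are omitted.

```python
import numpy as np

def fake_quantize(w, bits=4):
    # Symmetric uniform fake quantization: round weights to a low-bit grid
    # but keep them in float so gradients can flow in a real trainer.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale) * scale

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    # Combined objective: hard-label cross-entropy plus temperature-scaled
    # KL-style distillation term against the teacher's soft targets.
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kd = -(p_t * np.log(p_s + 1e-12)).sum(axis=-1).mean() * T * T
    probs = softmax(student_logits)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * ce + (1 - alpha) * kd
```

In a training loop, the student forward pass would run on `fake_quantize(w)` rather than `w`, so the distillation signal already reflects the low-bit regime instead of being applied to a full-precision student and quantizing afterward.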