Speech Tokenization
Speech tokenization is the task of representing a continuous speech signal as a sequence of discrete units. These discrete representations serve as the basis of speech language models, and can also be used for downstream tasks such as automatic speech recognition and text-to-speech synthesis.
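One common way to obtain such discrete units is to quantize frame-level acoustic features against a learned codebook, assigning each frame the id of its nearest centroid (k-means over self-supervised encoder features is a popular instance of this). Below is a minimal NumPy sketch of the quantization step only; the random features and random codebook are placeholders for a real feature extractor and learned centroids:

```python
import numpy as np

def tokenize_frames(features, codebook):
    """Map each feature frame to the id of its nearest codebook entry.

    features: (T, D) array of frame-level features
    codebook: (K, D) array of centroids (one per discrete unit)
    returns:  (T,) array of integer token ids in [0, K)
    """
    # Pairwise Euclidean distances between every frame and every centroid
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Illustration with random data; a real system would use encoder features
# and centroids fit on them (these shapes and values are assumptions)
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 16))  # 100 frames, 16-dim features
codebook = rng.normal(size=(50, 16))   # codebook of 50 discrete units
tokens = tokenize_frames(features, codebook)
print(tokens.shape)  # one discrete token per frame
```

The resulting token sequence can then be treated like text tokens, e.g. fed to a language model or mapped back to audio by a separate decoder.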