
Learning Spoken Language Representations with Neural Lattice Language Modeling

2020-07-06 · ACL 2020

Chao-Wei Huang, Yun-Nung Chen


Abstract

Pre-trained language models have achieved substantial improvements on many NLP tasks. However, these methods are usually designed for written text, so they do not account for the properties of spoken language. This paper therefore aims to generalize the idea of language model pre-training to lattices generated by speech recognition systems. We propose a framework that trains neural lattice language models to provide contextualized representations for spoken language understanding tasks. The proposed two-stage pre-training approach reduces the demand for speech data and improves efficiency. Experiments on intent detection and dialogue act recognition datasets demonstrate that our method consistently outperforms strong baselines when evaluated on spoken inputs. The code is available at https://github.com/MiuLab/Lattice-ELMo.
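A word lattice is a directed acyclic graph whose paths are alternative ASR hypotheses, so an encoder over it must merge hidden states wherever paths join. Below is a minimal sketch of one common formulation (a LatticeRNN-style encoder in which each node pools its predecessors' states, weighted by edge posteriors, before a recurrent update). All names here are illustrative assumptions for exposition; the released Lattice-ELMo code uses an ELMo-style bidirectional lattice LM rather than this simplified forward-only GRU.

```python
import torch
import torch.nn as nn

class LatticeGRUCell(nn.Module):
    """One lattice step: pool predecessor states, then apply a GRU update.

    Hypothetical sketch; the actual Lattice-ELMo model may differ.
    """
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.hidden_size = hidden_size

    def forward(self, x, pred_states, pred_weights):
        # pred_states: (num_preds, hidden) or None for a start node.
        # pred_weights: (num_preds,) edge scores, e.g. ASR posteriors.
        if pred_states is None:
            h_in = x.new_zeros(self.hidden_size)
        else:
            w = pred_weights / pred_weights.sum()   # normalize over incoming edges
            h_in = (w.unsqueeze(-1) * pred_states).sum(dim=0)
        return self.cell(x.unsqueeze(0), h_in.unsqueeze(0)).squeeze(0)

def encode_lattice(nodes, edges, embed, cell):
    """Encode a lattice DAG in topological order (forward direction only).

    nodes: token ids in topological order.
    edges: dict mapping node index -> list of (predecessor index, weight).
    Returns one contextualized state per lattice node.
    """
    states = []
    for i, tok in enumerate(nodes):
        x = embed(torch.tensor(tok))
        preds = edges.get(i, [])
        if preds:
            pred_states = torch.stack([states[j] for j, _ in preds])
            pred_weights = torch.tensor([w for _, w in preds])
        else:
            pred_states, pred_weights = None, None
        states.append(cell(x, pred_states, pred_weights))
    return torch.stack(states)

# Toy lattice: "hi" -> {"there" (0.7) | "their" (0.3)} -> "friend"
vocab_size, dim = 10, 16
embed = nn.Embedding(vocab_size, dim)
cell = LatticeGRUCell(dim, dim)
nodes = [1, 2, 3, 4]  # hi, there, their, friend
edges = {1: [(0, 0.7)], 2: [(0, 0.3)], 3: [(1, 0.7), (2, 0.3)]}
reps = encode_lattice(nodes, edges, embed, cell)
print(reps.shape)  # torch.Size([4, 16])
```

Note that when the lattice degenerates to a single path (a 1-best transcript), the pooling step reduces to an ordinary RNN, which is what lets such a model be pre-trained on plain text first and then adapted to lattices, consistent with the two-stage approach described in the abstract.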
