How to represent a word and predict it, too: Improving tied architectures for language modelling
2018-10-01 · EMNLP 2018
Kristina Gulordava, Laura Aina, Gemma Boleda
Abstract
Recent state-of-the-art neural language models share the word representations used by the input and output mappings. We propose a simple modification to these architectures that decouples the hidden state from the word embedding prediction. Our architecture achieves results comparable to or better than those of previous tied models and models without tying, with a much smaller number of parameters. We also extend our proposal to word2vec models, showing that tying is appropriate for general word prediction tasks.
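The abstract does not spell out the exact form of the modification, but one plausible reading of "decoupling the hidden state from the word embedding prediction" is an extra learned linear projection between the recurrent hidden state and the tied embedding matrix used to score the next word. The PyTorch sketch below illustrates a tied language model under that assumption; the class name DecoupledTiedLM, the proj layer, and all hyperparameters are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class DecoupledTiedLM(nn.Module):
    """LSTM language model with tied input/output embeddings.

    A learned linear map is inserted between the LSTM hidden state
    and the tied embedding matrix, so the hidden state no longer has
    to serve directly as the predicted word embedding (an assumed
    reading of the paper's decoupling modification).
    """

    def __init__(self, vocab_size: int, emb_dim: int, hidden_dim: int):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Decoupling projection (assumption): maps the hidden state
        # into embedding space before scoring against tied embeddings.
        self.proj = nn.Linear(hidden_dim, emb_dim, bias=False)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embedding(tokens)         # (batch, seq, emb_dim)
        hidden, _ = self.lstm(x)           # (batch, seq, hidden_dim)
        predicted_emb = self.proj(hidden)  # (batch, seq, emb_dim)
        # Tying: logits are dot products with the *input* embedding
        # matrix, so no separate output embedding is learned.
        return predicted_emb @ self.embedding.weight.T

model = DecoupledTiedLM(vocab_size=10000, emb_dim=300, hidden_dim=650)
logits = model(torch.randint(0, 10000, (8, 35)))  # (8, 35, 10000)
```

This also shows where the parameter savings come from: tying removes the separate vocab_size × emb_dim output matrix, while the decoupling projection adds back only a hidden_dim × emb_dim matrix, which is small when the vocabulary dwarfs the hidden size.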