
Language Models Learn POS First

2018-11-01 · WS 2018

Naomi Saphra, Adam Lopez


Abstract

A glut of recent research shows that language models capture linguistic structure. Such work answers the question of whether a model represents linguistic structure. But how and when are these structures acquired? Rather than treating the training process itself as a black box, we investigate how representations of linguistic structure are learned over time. In particular, we demonstrate that different aspects of linguistic structure are learned at different rates, with part-of-speech tagging acquired early and global topic information learned continuously.
