Transductive Learning of Neural Language Models for Syntactic and Semantic Analysis

2019-11-01 · IJCNLP 2019

Hiroki Ouchi, Jun Suzuki, Kentaro Inui

Abstract

In transductive learning, an unlabeled test set is used for model training. Although this setting deviates from the common assumption of a completely unseen test set, it is applicable in many real-world scenarios, wherein the texts to be processed are known in advance. However, despite its practical advantages, transductive learning is underexplored in natural language processing. Here we conduct an empirical study of transductive learning for neural models and demonstrate its utility in syntactic and semantic tasks. Specifically, we fine-tune language models (LMs) on an unlabeled test set to obtain test-set-specific word representations. Through extensive experiments, we demonstrate that despite its simplicity, transductive LM fine-tuning consistently improves state-of-the-art neural models in in-domain and out-of-domain settings.
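
Below is a minimal sketch of the procedure the abstract describes: fine-tuning a pretrained LM on the raw, unlabeled text of the test set before extracting word representations. The model choice (`bert-base-uncased`), the masked-LM objective, the Hugging Face `transformers` API, and all hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
# Sketch of transductive LM fine-tuning: adapt a pretrained LM to the
# unlabeled test corpus, which is assumed to be known in advance.
# Assumptions: Hugging Face `transformers` + `torch`; a masked LM stands in
# for whatever contextual LM the paper actually used.
import torch
from torch.optim import AdamW
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

# Placeholder test-set sentences; in practice, the raw text to be processed.
test_sentences = [
    "The texts to be processed are known in advance .",
    "Transductive fine-tuning adapts the LM to this exact corpus .",
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):  # a few passes over the small test corpus
    for sent in test_sentences:
        enc = tokenizer(sent, truncation=True)
        batch = collator([enc])  # randomly masks tokens for the MLM loss
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# The fine-tuned LM now yields test-set-specific word representations,
# which can be fed to the downstream syntactic or semantic model.
```

The key design point is that only unlabeled text is used, so no test labels leak into training; the downstream parser or tagger is then trained as usual on top of the adapted representations.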
