
Bag-of-Words Transfer: Non-Contextual Techniques for Multi-Task Learning

2019-11-01 · WS 2019

Seth Ebner, Felicity Wang, Benjamin Van Durme


Abstract

Many architectures for multi-task learning (MTL) have been proposed to take advantage of transfer among tasks, often involving complex models and training procedures. In this paper, we ask whether the sentence-level representations learned in previous approaches provide significant benefit beyond that provided by simply improving word-based representations. To investigate this question, we consider three techniques that ignore sequence information: a syntactically-oblivious pooling encoder, pre-trained non-contextual word embeddings, and unigram generative regularization. Compared to a state-of-the-art MTL approach to textual inference, the simple techniques we use yield similar performance on a universe of task combinations while reducing training time and model size.
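The first technique named in the abstract, a syntactically-oblivious pooling encoder, can be illustrated with a minimal sketch (not the authors' code): a sentence representation is obtained by mean-pooling non-contextual word embeddings, so word order plays no role. The vocabulary, embedding dimension, and random embeddings below are hypothetical stand-ins for pre-trained embeddings.

```python
import numpy as np

# Hypothetical vocabulary and embedding table; in the paper's setting,
# pre-trained non-contextual word embeddings would be used instead.
rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
emb = rng.normal(size=(len(vocab), 8))

def encode(tokens):
    """Syntactically-oblivious pooling: mean of word embeddings.

    The encoder ignores sequence information entirely, so any
    permutation of the same bag of words maps to the same vector.
    """
    ids = [vocab[t] for t in tokens if t in vocab]
    return emb[ids].mean(axis=0)

# Order-insensitivity: permuted sentences yield identical representations.
a = encode(["the", "cat", "sat"])
b = encode(["sat", "the", "cat"])
assert np.allclose(a, b)
```

Because the encoder has no sequence-dependent parameters, it is much smaller and faster to train than recurrent or attention-based sentence encoders, which is the trade-off the abstract highlights.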
