Empirical Evaluation of Multi-task Learning in Deep Neural Networks for Natural Language Processing

2019-08-16

Jianquan Li, Xiaokang Liu, Wenpeng Yin, Min Yang, Liqun Ma, Yaohong Jin

Abstract

Multi-Task Learning (MTL) aims to boost the performance of each individual task by leveraging useful information contained in multiple related tasks, and it has shown great success in natural language processing (NLP). A number of MTL architectures and learning mechanisms have been proposed for various NLP tasks, but there has been no systematic, in-depth exploration and comparison of these architectures and mechanisms to explain their strong performance. In this paper, we conduct a thorough examination of typical MTL methods on a broad range of representative NLP tasks. Our primary goal is to understand the merits and demerits of existing MTL methods in NLP tasks, and thereby to devise new hybrid architectures that combine their strengths.
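To make the MTL setup described in the abstract concrete, below is a minimal sketch of hard parameter sharing, one of the most common MTL architectures in NLP: a single shared encoder feeds several task-specific output heads. The dimensions, random weights, and task names here are illustrative assumptions, not details taken from the paper.

```python
import random

random.seed(0)

def linear(weights, x):
    """Apply a weight matrix (list of rows) to an input vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def make_weights(rows, cols):
    """Random weight matrix; stands in for learned parameters."""
    return [[random.uniform(-1.0, 1.0) for _ in range(cols)] for _ in range(rows)]

# Shared encoder: maps a 4-dim input "embedding" to a 3-dim representation.
# In hard parameter sharing, these weights are updated by ALL tasks.
shared_encoder = make_weights(3, 4)

# Task-specific heads (hypothetical tasks): e.g. a 2-class classifier
# for task A and a 3-class classifier for task B.
head_task_a = make_weights(2, 3)
head_task_b = make_weights(3, 3)

def forward(x, head):
    # Both tasks reuse the same encoder parameters (hard sharing);
    # only the head differs per task.
    h = linear(shared_encoder, x)
    return linear(head, h)

x = [0.5, -1.0, 0.25, 2.0]
logits_a = forward(x, head_task_a)
logits_b = forward(x, head_task_b)
print(len(logits_a), len(logits_b))  # 2 3
```

The shared encoder is where transfer between tasks happens: gradients from every task's loss update the same encoder weights, while each head specializes to its own label space.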
