
On the Hidden Negative Transfer in Sequential Transfer Learning for Domain Adaptation from News to Tweets

2021-04-01 · EACL (AdaptNLP) 2021

Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat


Abstract

Transfer learning has been shown to be a powerful tool for Natural Language Processing (NLP) and has outperformed the standard supervised learning paradigm, as it benefits from pre-learned knowledge. Nevertheless, when transfer is performed between less related domains, it causes negative transfer, i.e. it hurts performance. In this research, we shed light on the hidden negative transfer that occurs when transferring from the News domain to the Tweets domain, through quantitative and qualitative analysis. Our experiments on three NLP tasks, Part-Of-Speech tagging, Chunking and Named Entity Recognition, reveal interesting insights.
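The sequential transfer setup studied here (pretrain on a data-rich source domain, then fine-tune on a smaller target domain) can be sketched on toy data. This is a minimal illustrative sketch, not the authors' implementation: the linear model, the synthetic "news"/"tweets" data, and all function names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w=None, lr=0.1, epochs=200):
    """Logistic regression via plain gradient descent.
    Passing a pretrained `w` as the initializer is the 'transfer' step."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)          # gradient of log-loss
    return w

def acc(w, X, y):
    return float(((X @ w > 0).astype(float) == y).mean())

# Source domain (news-like): plentiful labeled data.
Xs = rng.normal(size=(500, 10))
ws_true = rng.normal(size=10)
ys = (Xs @ ws_true > 0).astype(float)

# Target domain (tweet-like): scarce data, related but shifted labeling rule.
wt_true = ws_true + 0.5 * rng.normal(size=10)
Xt = rng.normal(size=(40, 10))
yt = (Xt @ wt_true > 0).astype(float)

w_pre = train(Xs, ys)                   # step 1: pretrain on source
w_ft = train(Xt, yt, w=w_pre.copy())    # step 2: fine-tune on target
w_scratch = train(Xt, yt)               # baseline: target-only training

print(acc(w_ft, Xt, yt), acc(w_scratch, Xt, yt))
```

Comparing the fine-tuned model against the target-only baseline on held-out target data is the kind of aggregate comparison that can mask negative transfer: even when the fine-tuned model wins on average, some individual predictions may be degraded by the source-domain initialization.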
