
Frustratingly Easy Neural Domain Adaptation

COLING 2016 · 2016-12-01

Young-Bum Kim, Karl Stratos, Ruhi Sarikaya


Abstract

Popular techniques for domain adaptation, such as the feature augmentation method of Daumé III (2009), have mostly been considered for sparse binary-valued features, but not for dense real-valued features such as those used in neural networks. In this paper, we describe simple neural extensions of these techniques. First, we propose a natural generalization of the feature augmentation method that uses K + 1 LSTMs, where one model captures global patterns across all K domains and the remaining K models capture domain-specific information. Second, we propose a novel application of the framework for learning shared structures by Ando and Zhang (2005) to domain adaptation, and also provide a neural extension of their approach. In experiments on slot tagging over 17 domains, our methods yield clear performance improvements over Daumé III (2009) applied to feature-rich CRFs.
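For readers unfamiliar with the baseline being generalized: Daumé III's feature augmentation copies each feature vector into a shared block plus one block per domain, leaving the other domain blocks zero, so a linear model can learn both global and domain-specific weights. The sketch below (my illustration in NumPy, not code from the paper) shows this construction for sparse feature vectors; the paper's contribution replaces these copies with the outputs of K + 1 LSTMs.

```python
import numpy as np

def augment(x, domain, num_domains):
    """Daumé III-style feature augmentation: place the features in a
    shared ("global") block and in the block for the given domain;
    all other domain blocks stay zero."""
    d = x.shape[0]
    out = np.zeros((num_domains + 1) * d)
    out[:d] = x                   # shared copy, seen by every domain
    start = (domain + 1) * d
    out[start:start + d] = x      # domain-specific copy
    return out

x = np.array([1.0, 0.0, 2.0])
aug = augment(x, domain=1, num_domains=2)  # length (2 + 1) * 3 = 9
```

The neural generalization in the paper is analogous: instead of literal copies, the representation concatenates the hidden states of a global LSTM and the matching domain-specific LSTM.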
