
Domain Generalization for Text Classification with Memory-Based Supervised Contrastive Learning

2022-10-01 · COLING 2022 · Code Available

Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng

Abstract

While there is much research on cross-domain text classification, most existing approaches focus on one-to-one or many-to-one domain adaptation. In this paper, we tackle the more challenging task of domain generalization, in which domain-invariant representations are learned from multiple source domains, without access to any data from the target domains, and classification decisions are then made on test documents in unseen target domains. We propose a novel framework based on supervised contrastive learning with a memory-saving queue. In this way, we explicitly encourage examples of the same class to be closer and examples of different classes to be further apart in the embedding space. We have conducted extensive experiments on two Amazon review sentiment datasets, and one rumour detection dataset. Experimental results show that our domain generalization method consistently outperforms state-of-the-art domain adaptation methods.
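The core objective described above — pulling same-class examples together and pushing different-class examples apart, with a memory queue supplying extra comparison examples beyond the current batch — can be sketched as a supervised contrastive loss. The function below is an illustrative NumPy sketch, not the authors' implementation; the function name, arguments, and temperature value are assumptions for the example.

```python
import numpy as np

def supcon_loss_with_queue(emb, labels, queue_emb, queue_labels, tau=0.1):
    """Illustrative supervised contrastive loss over a batch plus a memory queue.

    emb          : (B, d) batch embeddings (anchors)
    labels       : (B,)   batch class labels
    queue_emb    : (Q, d) embeddings stored in the memory queue
    queue_labels : (Q,)   their class labels
    tau          : temperature (value chosen for illustration)
    """
    # L2-normalize so dot products are cosine similarities
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    q = queue_emb / np.linalg.norm(queue_emb, axis=1, keepdims=True)
    all_emb = np.concatenate([emb, q])
    all_labels = np.concatenate([labels, queue_labels])

    losses = []
    for i in range(len(emb)):
        sims = all_emb @ emb[i] / tau
        # exclude the anchor itself from its own contrast set
        mask = np.ones(len(all_emb), dtype=bool)
        mask[i] = False
        # positives: same class, drawn from both the batch and the queue
        pos = mask & (all_labels == labels[i])
        if not pos.any():
            continue
        log_denom = np.log(np.exp(sims[mask]).sum())
        # average negative log-probability over all positives
        losses.append(-(sims[pos] - log_denom).mean())
    return float(np.mean(losses))
```

Minimizing this loss raises the similarity of same-class pairs relative to all others, which is exactly the "closer / further apart" behaviour the abstract describes; the queue lets each anchor see many more positives and negatives than a single mini-batch contains.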
