Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers

2021-03-26 · ACL (RepL4NLP) 2021 · Code Available

Kamil Bujel, Helen Yannakoudakis, Marek Rei

Abstract

We investigate how sentence-level transformers can be modified into effective sequence labelers at the token level without any direct supervision. Existing approaches to zero-shot sequence labeling do not perform well when applied to transformer-based architectures. Because transformers contain multiple layers of multi-head self-attention, information in the sentence is distributed across many tokens, negatively affecting zero-shot token-level performance. We find that a soft attention module that explicitly encourages sharpness of attention weights can significantly outperform existing methods.
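To make the mechanism concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' released code): a linear scorer assigns each token a weight in (0, 1), the weights pool the transformer's token states into a sentence vector for classification, and an auxiliary loss pushes the smallest weight toward 0 and the largest toward the binary sentence label, encouraging sharp rather than diffuse attention. The class and function names and the exact form of the loss are assumptions for illustration only.

```python
# Minimal sketch, NOT the authors' implementation: a soft attention
# head whose attention weights double as zero-shot token labels.
import torch
import torch.nn as nn


class SoftAttentionHead(nn.Module):
    """Pools transformer token states with soft attention for sentence
    classification; the per-token weights serve as token predictions."""

    def __init__(self, hidden_dim: int, num_classes: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)        # unnormalized token score
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_states, mask):
        # token_states: (batch, seq_len, hidden_dim) from a transformer
        # mask: (batch, seq_len), 1 for real tokens, 0 for padding
        scores = self.scorer(token_states).squeeze(-1)        # (batch, seq_len)
        token_probs = torch.sigmoid(scores) * mask            # in (0, 1), padding zeroed
        # Normalize the weights into a distribution for pooling.
        attn = token_probs / token_probs.sum(dim=1, keepdim=True).clamp(min=1e-9)
        sentence_vec = torch.bmm(attn.unsqueeze(1), token_states).squeeze(1)
        return self.classifier(sentence_vec), token_probs


def sharpness_loss(token_probs, mask, sentence_label):
    # Hypothetical auxiliary loss: push the smallest token weight toward 0
    # and the largest toward the binary sentence label (0 or 1), so the
    # attention distribution becomes sharp (near-binary) rather than diffuse.
    max_w = token_probs.masked_fill(mask == 0, float("-inf")).max(dim=1).values
    min_w = token_probs.masked_fill(mask == 0, float("inf")).min(dim=1).values
    return (min_w ** 2 + (max_w - sentence_label) ** 2).mean()
```

In a setup like this, the model is trained only on sentence-level labels; at inference time, `token_probs` can be thresholded to obtain token-level predictions with no token-level supervision.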
