
Paragraph-based Transformer Pretraining for Multi-Sentence Inference

2022-01-16 · ACL ARR January 2022

Anonymous


Abstract

Inference tasks such as answer sentence selection (AS2) and fact verification are typically solved by fine-tuning transformer-based models as individual sentence-pair classifiers. Recent studies show that these tasks benefit from modeling dependencies across multiple candidate sentences jointly. In this paper, we first show that popular pretrained transformers perform poorly when fine-tuned on multi-candidate inference tasks. We then propose a new pretraining objective that models paragraph-level semantics across multiple input sentences. Our evaluation on three AS2 datasets and one fact-verification dataset demonstrates the superiority of our pretrained joint models over standard pretrained transformers for multi-candidate inference tasks.
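The contrast the abstract draws, between scoring each (question, candidate) pair independently and encoding all candidates in one input so the model can attend across them, can be illustrated with a minimal input-formatting sketch. The question, candidates, and BERT-style special tokens here are hypothetical; the actual pretraining objective and model architecture are described in the paper itself.

```python
# Hypothetical AS2 example: one question, several candidate answer sentences.
question = "Who wrote Hamlet?"
candidates = [
    "Hamlet was written by William Shakespeare.",
    "Hamlet is a tragedy set in Denmark.",
    "Shakespeare was born in Stratford-upon-Avon.",
]

# Pairwise setup (standard fine-tuning): one independent input per candidate,
# so the classifier never sees the other candidates when scoring one of them.
pairwise_inputs = [f"[CLS] {question} [SEP] {c} [SEP]" for c in candidates]

# Joint setup (multi-candidate inference): all candidates packed into a single
# input, allowing cross-candidate attention inside the transformer.
joint_input = "[CLS] " + question + " [SEP] " + " [SEP] ".join(candidates) + " [SEP]"

print(len(pairwise_inputs))  # one input per candidate
print(joint_input.count("[SEP]"))  # question + candidates share one sequence
```

This only shows how the inputs differ; the paper's contribution is a pretraining objective that prepares the transformer for the joint, paragraph-level setting rather than the pairwise one.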
