
Relation Extraction with Explanation

2020-05-28 · ACL 2020

Hamed Shahbazi, Xiaoli Z. Fern, Reza Ghaeini, Prasad Tadepalli


Abstract

Recent neural models for relation extraction with distant supervision alleviate the impact of irrelevant sentences in a bag by learning importance weights for the sentences. Efforts thus far have focused on improving extraction accuracy, but little is known about their explainability. In this work, we annotate a test set with ground-truth sentence-level explanations to evaluate the quality of explanations afforded by the relation extraction models. We demonstrate that replacing the entity mentions in the sentences with their fine-grained entity types not only enhances extraction accuracy but also improves explanation. We also propose to automatically generate "distractor" sentences to augment the bags and train the model to ignore the distractors. Evaluations on the widely used FB-NYT dataset show that our methods achieve new state-of-the-art accuracy while improving model explainability.
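The entity-type substitution described in the abstract can be sketched roughly as follows. This is an illustrative assumption, not the authors' actual implementation: the helper name, the angle-bracket type markers, and the fine-grained type labels are all hypothetical.

```python
# Hypothetical sketch: replace entity mentions in a sentence with
# fine-grained entity types before feeding it to the extraction model.
# Type labels and format are illustrative assumptions.

def replace_mentions(sentence: str, mentions: dict) -> str:
    """Replace each entity mention with its fine-grained type label."""
    for mention, fine_type in mentions.items():
        sentence = sentence.replace(mention, f"<{fine_type}>")
    return sentence

sent = "Barack Obama was born in Honolulu."
mentions = {"Barack Obama": "person/politician", "Honolulu": "location/city"}
print(replace_mentions(sent, mentions))
# → <person/politician> was born in <location/city>.
```

Abstracting away surface forms in this way is intended to keep the model from memorizing specific entity pairs, encouraging it to rely on the relational context instead.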
