
Posterior-regularized REINFORCE for Instance Selection in Distant Supervision

2019-04-17 · NAACL 2019 · Code Available

Qi Zhang, Siliang Tang, Xiang Ren, Fei Wu, ShiLiang Pu, Yueting Zhuang

Abstract

This paper presents a new way to improve the efficiency of REINFORCE training, applied to instance selection in distant supervision. Modeling instance selection within a bag as a sequential decision process, a reinforcement learning agent is trained to determine whether each instance is valuable and to construct a new bag with fewer noisy instances. However, unbiased methods such as REINFORCE usually take a long time to train. This paper therefore adopts posterior regularization (PR) to integrate domain-specific rules into REINFORCE-based instance selection. Experimental results show that this method markedly improves both the performance of the relation classifier trained on the cleaned distant-supervision dataset and the efficiency of REINFORCE training.
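The selection process the abstract describes can be sketched as a small REINFORCE loop: a stochastic policy samples keep/drop actions for every instance in a bag, receives a delayed bag-level reward, and updates its parameters with the score-function gradient against a running baseline. This is a minimal illustrative sketch only — the toy features, the reward (fraction of kept instances that are truly valuable, standing in for the relation classifier's signal), and all hyperparameters are assumptions, not the authors' implementation, and it omits the paper's posterior-regularization term.

```python
# Hedged sketch of REINFORCE instance selection over a bag.
# Everything here (features, reward, learning rate) is a toy assumption.
import numpy as np

rng = np.random.default_rng(0)
dim = 4
w = np.zeros(dim)   # parameters of a logistic keep/drop policy
lr = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_bag(n=8):
    # Toy bag: "valuable" instances (label 1) have +1 mean features,
    # noisy ones (label 0) have -1 mean. A stand-in for real sentence features.
    labels = rng.integers(0, 2, n)
    feats = rng.normal(0.0, 1.0, (n, dim)) + (2 * labels[:, None] - 1)
    return feats, labels

def episode(feats, labels):
    """One episode: sample keep/drop per instance, score the cleaned bag."""
    p = sigmoid(feats @ w)                       # keep probability
    a = (rng.random(len(p)) < p).astype(float)   # sampled actions
    # Delayed reward: purity of the kept instances (proxy for the
    # downstream relation classifier's performance on the cleaned bag).
    reward = labels[a == 1].mean() if a.sum() > 0 else 0.0
    # Score-function gradient of the log-likelihood of the sampled actions.
    grad = ((a - p)[:, None] * feats).sum(axis=0)
    return reward, grad

baseline = 0.0
for step in range(300):
    feats, labels = sample_bag()
    reward, grad = episode(feats, labels)
    baseline = 0.9 * baseline + 0.1 * reward     # variance-reduction baseline
    w += lr * (reward - baseline) * grad
```

After training, the policy should assign higher keep probabilities to valuable instances than to noisy ones; the baseline subtraction is the standard variance-reduction trick, which the paper's PR term further accelerates by constraining the policy's posterior with domain rules.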
