
Nearly-Unsupervised Hashcode Representations for Biomedical Relation Extraction

2019-11-01 · IJCNLP 2019

Sahil Garg, Aram Galstyan, Greg Ver Steeg, Guillermo Cecchi


Abstract

Recently, kernelized locality-sensitive hashcodes have been successfully employed as representations of natural language text, proving especially relevant to biomedical relation extraction tasks. In this paper, we propose to optimize the hashcode representations in a nearly unsupervised manner, using only the data points themselves, not their class labels, for learning. The optimized hashcode representations are then fed to a supervised classifier, following the prior work. This nearly unsupervised approach allows fine-grained optimization of each hash function, which is particularly suitable for building hashcode representations that generalize from a training set to a test set. We empirically evaluate the proposed approach on biomedical relation extraction tasks, obtaining significant accuracy improvements over state-of-the-art supervised and semi-supervised approaches.
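To make the idea concrete, here is a minimal sketch of a kernelized locality-sensitive hashing scheme of the general kind the abstract refers to: each hash bit thresholds a random weighted combination of kernel similarities to a small set of reference points drawn from the (unlabeled) data, and the resulting binary codes can then be fed to any supervised classifier. This is an illustrative assumption, not the paper's actual method; the choice of an RBF kernel, random weights, and median thresholds (a common unsupervised balancing heuristic) are all stand-ins for the paper's fine-grained per-hash-function optimization.

```python
import numpy as np

def rbf_kernel(X, refs, gamma=1.0):
    # Pairwise RBF similarities between rows of X and the reference points.
    d2 = ((X[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelizedLSH:
    """Hypothetical kernelized LSH: bit j of x is 1 iff
    sum_i W[i, j] * k(ref_i, x) exceeds a threshold b[j]."""

    def __init__(self, n_bits=16, n_refs=32, gamma=1.0, seed=0):
        self.n_bits, self.n_refs, self.gamma = n_bits, n_refs, gamma
        self.rng = np.random.default_rng(seed)

    def fit(self, X):
        # Unsupervised: uses only the data points, never their labels.
        idx = self.rng.choice(len(X), size=self.n_refs, replace=False)
        self.refs = X[idx]
        K = rbf_kernel(X, self.refs, self.gamma)
        self.W = self.rng.standard_normal((self.n_refs, self.n_bits))
        # Median thresholds keep each bit roughly balanced on the data.
        self.b = np.median(K @ self.W, axis=0)
        return self

    def transform(self, X):
        K = rbf_kernel(X, self.refs, self.gamma)
        return (K @ self.W > self.b).astype(np.uint8)
```

A downstream classifier would then be trained on `transform(X_train)` with the labels, matching the two-stage setup described in the abstract (unsupervised representation learning, then supervised classification).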
