Translating Embeddings in Document for Modeling Multi-relational Graphs
Anonymous
Abstract
Document-level relation extraction (RE) aims at extracting heterogeneous relational graphs over the entities in a document, a task that has been handled effectively with graph neural networks (GNNs) and pre-trained language models (PLMs). However, a crucial problem is that most GNNs adopt a task-independent pseudo graph convolution, which causes over-parameterization. In this paper, we argue that these excessive parameters are unnecessary, and propose a novel lightweight model named TransGCN. Specifically, we obtain the representations of entities in a document through a PLM encoder and construct our transmission-based graph convolutional network (GCN) on top of them. Unlike previous methods that require storing the parameters of GNNs, our transmission-based GCN performs message passing constrained by transmitting scores, which can be calculated by knowledge graph embedding models. In this way, we reduce the number of parameters by about half compared to the SOTA model. We conduct experiments on DocRED, a large-scale human-annotated document RE dataset. The results show that we outperform the state-of-the-art model with only about half the parameters.
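The core idea of parameter-free message passing weighted by knowledge-graph-embedding scores can be sketched as follows. This is a minimal illustration, not the authors' implementation: the TransE-style scoring function, the softmax normalization per target node, and the residual update are all assumptions, and in practice the entity representations would come from a PLM encoder rather than random vectors.

```python
import numpy as np

def transe_score(h, r, t):
    # TransE plausibility: higher (less negative) means h + r is close to t
    return -np.linalg.norm(h + r - t)

def transmission_gcn_layer(X, edges, rel_emb):
    # X: (n, d) entity representations (e.g., from a PLM encoder)
    # edges: list of (source, relation_id, target) directed edges
    # rel_emb: (num_relations, d) relation embeddings
    n, d = X.shape
    # Transmitting score for each edge, computed by a KG embedding model
    # instead of learned GNN weight matrices.
    scores = np.array([transe_score(X[i], rel_emb[r], X[j])
                       for i, r, j in edges])
    out = X.copy()
    for j in range(n):
        incoming = [k for k, (_, _, t) in enumerate(edges) if t == j]
        if not incoming:
            continue
        # Normalize the transmitting scores of incoming edges with softmax.
        w = np.exp(scores[incoming] - scores[incoming].max())
        w /= w.sum()
        # Aggregate source messages weighted by transmitting scores;
        # no per-layer parameters are stored.
        out[j] = X[j] + sum(wk * X[edges[k][0]]
                            for wk, k in zip(w, incoming))
    return out
```

The layer has no trainable parameters of its own, which is the source of the parameter reduction the abstract claims: only the encoder and the (shared) relation embeddings need to be stored.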