SOTAVerified

GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation

2022-04-13 · COLING 2022 · Code Available

Anthony Colas, Mehrdad Alvandipour, Daisy Zhe Wang


Abstract

Recent improvements in KG-to-text generation are due to additional auxiliary pre-training tasks designed to boost the performance of the downstream fine-tuning task. These tasks require extensive computational resources while yielding only marginal improvements. Here, we demonstrate that by fusing graph-aware elements into existing pre-trained language models, we are able to outperform state-of-the-art models and close the gap imposed by additional pre-training tasks. We do so by proposing a mask structure to capture neighborhood information and a novel type encoder that adds a bias to the graph-attention weights depending on the connection type. Experiments on two KG-to-text benchmark datasets show our models are competitive while involving fewer parameters and no additional pre-training tasks. By formulating the problem as a framework, we can interchange the various proposed components and begin interpreting KG-to-text generative models based on the topological and type information found in a graph.
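The two graph-aware components the abstract describes — a neighborhood mask over the attention pattern and a per-connection-type bias added to the attention weights — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the dict-based `gamma` bias table, and the dense adjacency/type matrices are illustrative assumptions, and a real model would use learned tensors and batched matrix operations:

```python
import math

def type_biased_attention(q, k, adj, conn_type, gamma):
    """Single-head graph-attention weights with a connection-type bias (sketch).

    q, k      : per-node query/key vectors (lists of floats, equal dimension)
    adj       : adj[i][j] == 1 if node j is in node i's neighborhood (the mask)
    conn_type : conn_type[i][j] is the connection type between nodes i and j
    gamma     : hypothetical learned bias per connection type (plain dict here)
    """
    d = len(q[0])
    weights = []
    for i in range(len(q)):
        scores = []
        for j in range(len(k)):
            if adj[i][j]:
                # Scaled dot-product score plus a bias chosen by connection type.
                dot = sum(a * b for a, b in zip(q[i], k[j]))
                scores.append(dot / math.sqrt(d) + gamma[conn_type[i][j]])
            else:
                # Neighborhood mask: non-neighbors get -inf, i.e. zero after softmax.
                scores.append(float("-inf"))
        # Numerically stable softmax over each node's (masked, biased) scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights.append([e / z for e in exps])
    return weights
```

Under this sketch, the mask restricts each node's attention to its graph neighborhood, while `gamma` shifts attention toward or away from particular connection types; each row of the returned matrix sums to 1 and masked entries are exactly 0.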

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| EventNarrative | JointGT | BLEU | 31.19 | — | Unverified |
| EventNarrative | T5 | BLEU | 12.8 | — | Unverified |
| EventNarrative | GAP - Me,r+γ | BLEU | 35.08 | — | Unverified |
| EventNarrative | GAP - Me,re | BLEU | 34.02 | — | Unverified |
| EventNarrative | BART | BLEU | 31.38 | — | Unverified |
| WebNLG 2.0 (Unconstrained) | GAP - Me,r+γ | BLEU | 66.2 | — | Unverified |
| WebNLG 2.0 (Unconstrained) | GAP - Me,re | ROUGE | 76.22 | — | Unverified |
| WebNLG 2.0 (Unconstrained) | JointGT (BART) - w/ JointGTPretrain | BLEU | 65.92 | — | Unverified |
| WebNLG 2.0 (Unconstrained) | JointGT (BART) - w/ BARTPretrain | BLEU | 64.6 | — | Unverified |
| WebNLG 2.0 (Unconstrained) | KGPT w/o pretrain | BLEU | 62.3 | — | Unverified |
| WebNLG 2.0 (Unconstrained) | GCN | BLEU | 60.8 | — | Unverified |

Reproductions