Enhancing SPARQL Generation by Triplet-order-sensitive Pre-training

2024-10-08

Chang Su, Jiexing Qi, He Yan, Kai Zou, Zhouhan Lin


Abstract

Semantic parsing that translates natural language queries to SPARQL is of great importance for Knowledge Graph Question Answering (KGQA) systems. Although pre-trained language models like T5 have achieved significant success on the Text-to-SPARQL task, their generated outputs still exhibit notable errors specific to the SPARQL language, such as triplet flips. To address this challenge and further improve performance, we propose an additional pre-training stage with a new objective, Triplet Order Correction (TOC), along with the commonly used Masked Language Modeling (MLM), to collectively enhance the model's sensitivity to triplet order and SPARQL syntax. Our method achieves state-of-the-art performance on three widely used benchmarks.
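To make the idea of a triplet-order objective concrete, below is a minimal sketch of how TOC-style training examples might be constructed. The abstract does not specify the paper's actual corruption scheme, label format, or hyperparameters, so the function name `corrupt_triplets`, the `flip_prob` value, and the binary per-triplet labels are all illustrative assumptions, not the authors' implementation.

```python
import random

def corrupt_triplets(triplets, flip_prob=0.3, rng=None):
    """Randomly swap subject and object in some (s, p, o) triplets.

    Hypothetical TOC data construction: returns the corrupted triplets
    plus a 0/1 label per triplet (1 = flipped). A model could be trained
    to predict these labels or to restore the original order, making it
    sensitive to triplet direction in SPARQL patterns.
    """
    rng = rng or random.Random()
    corrupted, labels = [], []
    for subj, pred, obj in triplets:
        if rng.random() < flip_prob:
            corrupted.append((obj, pred, subj))  # triplet flip
            labels.append(1)
        else:
            corrupted.append((subj, pred, obj))
            labels.append(0)
    return corrupted, labels

# Example: triple patterns from a SPARQL WHERE clause.
patterns = [
    ("?film", "dbo:director", "dbr:Christopher_Nolan"),
    ("?film", "dbo:releaseDate", "?date"),
]
noisy, flip_labels = corrupt_triplets(patterns, flip_prob=0.5,
                                      rng=random.Random(0))
print(noisy, flip_labels)
```

In this sketch, a flipped triplet such as `(dbr:Christopher_Nolan, dbo:director, ?film)` is exactly the kind of order error the abstract describes; training the model to detect or undo such flips, alongside MLM, is one plausible reading of how the TOC objective targets them.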
