Robust NL-to-Cypher Translation for KBQA: Harnessing Large Language Model with Chain of Prompts
Guandong Feng, Guoliang Zhu, Shengze Shi, Yue Sun, Zhongyi Fan, Sulin Gao, Jun Hu
Abstract
Knowledge Base Question Answering (KBQA) is a significant task in natural language processing that aims to retrieve answers from structured knowledge bases in response to natural language questions. Translating natural language to Cypher (NL2Cypher) is crucial for accurately querying knowledge bases, yet research in this area remains limited and existing results are often unsatisfactory. Our work explores the convergence of advanced natural language processing techniques with KBQA, focusing on the automated generation of Cypher queries from natural language questions. By leveraging the capabilities of large language models (LLMs), our approach bridges the gap between textual questions and structured knowledge representations. The proposed methodology shows promising results in accurately formulating Cypher queries: in the CCKS2023 Foreign Military Unmanned Systems Knowledge Graph Reasoning Question-Answering Evaluation Task, our method achieved an F1 score of 0.94269 on the final test set.
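To make the NL2Cypher setting concrete, the sketch below shows one common way such a pipeline is prompted: the graph schema and a few question-to-Cypher exemplars are packed into a prompt, and the LLM is asked to complete the Cypher for a new question. This is a minimal illustration of the general technique, not the paper's actual chain-of-prompts design; the function name, schema format, and exemplars are hypothetical.

```python
def build_nl2cypher_prompt(schema, question, examples=()):
    """Build a schema-aware few-shot prompt for NL-to-Cypher translation.

    schema: dict mapping node labels to lists of property names (hypothetical
            format; real systems would also describe relationship types).
    examples: optional (question, cypher) exemplar pairs for few-shot prompting.
    """
    lines = [
        "You are an expert at translating natural language questions "
        "into Cypher queries.",
        "Graph schema:",
    ]
    # Describe each node label and its properties so the LLM can ground
    # generated queries in the actual graph structure.
    for label, props in schema.items():
        lines.append(f"  (:{label} {{{', '.join(props)}}})")
    # Few-shot exemplars demonstrate the expected output format.
    for q, cy in examples:
        lines.append(f"Q: {q}\nCypher: {cy}")
    # The model is expected to complete the Cypher after the final "Cypher:".
    lines.append(f"Q: {question}\nCypher:")
    return "\n".join(lines)


if __name__ == "__main__":
    prompt = build_nl2cypher_prompt(
        schema={"Drone": ["name", "range_km"]},
        question="Which drones have a range over 500 km?",
        examples=[("List all drone names.",
                   "MATCH (d:Drone) RETURN d.name")],
    )
    print(prompt)
```

The resulting prompt string would then be sent to an LLM, and the completed Cypher executed against the knowledge base (e.g. Neo4j) to retrieve the answer.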