
Beyond Chains: Bridging Large Language Models and Knowledge Bases in Complex Question Answering

2025-05-20

Yihua Zhu, Qianying Liu, Akiko Aizawa, Hidetoshi Shimodaira


Abstract

Knowledge Base Question Answering (KBQA) aims to answer natural language questions using structured knowledge from knowledge bases (KBs). While LLM-only approaches offer generalization, they suffer from outdated knowledge, hallucinations, and a lack of transparency. Chain-based KG-RAG methods address these issues by incorporating external KBs, but they are limited to simple chain-structured questions because they lack planning and logical structuring. Inspired by semantic parsing methods, we propose PDRR, a four-stage framework consisting of Predict, Decompose, Retrieve, and Reason. Our method first predicts the question type and decomposes the question into structured triples. It then retrieves relevant information from KBs and guides the LLM, acting as an agent, to reason over and complete the decomposed triples. Experimental results demonstrate that PDRR consistently outperforms existing methods across various LLM backbones and achieves superior performance on both chain-structured and non-chain complex questions.
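The four stages described above can be sketched as a minimal pipeline. Everything below is an illustrative assumption, not the authors' implementation: the toy KB, the rule-based stand-ins for LLM calls, and all function names are hypothetical, chosen only to show how Predict, Decompose, Retrieve, and Reason compose.

```python
# Hedged sketch of the PDRR loop (Predict -> Decompose -> Retrieve -> Reason).
# The toy KB and rule-based heuristics stand in for LLM prompts; they are
# illustrative assumptions, not the paper's actual method.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Triple:
    head: str
    relation: str
    tail: Optional[str] = None  # None = slot the reasoner must fill

def predict(question: str) -> str:
    """Stage 1: predict the question type (chain vs. non-chain).
    A real system would prompt an LLM; this heuristic is a placeholder."""
    return "non-chain" if " and " in question else "chain"

def decompose(question: str) -> list:
    """Stage 2: decompose the question into structured triples.
    Hard-codes one '<relation> of <entity>' pattern for illustration."""
    # e.g. "Who is the director of Inception?" -> (Inception, director, ?)
    if " of " in question:
        rel_part, ent = question.rstrip("?").split(" of ", 1)
        return [Triple(head=ent.strip(), relation=rel_part.split()[-1])]
    return []

def retrieve(triples, kb):
    """Stage 3: fetch candidate facts for each triple from the KB."""
    return {t.head: kb.get((t.head, t.relation), []) for t in triples}

def reason(triples, candidates):
    """Stage 4: the LLM agent would rank candidates and complete each
    triple; here we naively take the first candidate."""
    for t in triples:
        cands = candidates.get(t.head, [])
        if cands:
            t.tail = cands[0]
    return [t.tail for t in triples if t.tail is not None]

def pdrr(question, kb):
    qtype = predict(question)
    triples = decompose(question)
    return qtype, reason(triples, retrieve(triples, kb))

kb = {("Inception", "director"): ["Christopher Nolan"]}
print(pdrr("Who is the director of Inception?", kb))
```

In the full framework, each heuristic would be replaced by an LLM prompt, and the Reason stage would iterate, letting the agent verify and complete triples against retrieved evidence rather than picking the first match.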
