Do KG-augmented Models Leverage Knowledge as Humans Do?

2022-01-17 · ICLR Blog Track 2022

Anonymous

Abstract

Knowledge Graphs (KGs) can help neural-symbolic models improve performance on various knowledge-intensive tasks, such as recommendation and question answering. Concretely, neural reasoning over KGs may "explain" which information is relevant for inference. However, as the old saying goes, "seeing is not believing." It is natural to ask: do KG-augmented models really behave as we expect? This post presents a historical perspective on KG-augmented models and discusses recent work raising this question. Interestingly, empirical results show that perturbed KGs can maintain downstream performance, which challenges common assumptions about the reasoning ability of KG-augmented models. We believe this topic is important for neural-symbolic reasoning and can guide future work on designing KG-augmented models.