
Great Truths are Always Simple: A Rather Simple Knowledge Encoder for Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models

2022-01-16 · ACL ARR January 2022

Anonymous


Abstract

Commonsense reasoning in natural language is a desired capacity of artificial intelligence systems. To solve complex commonsense reasoning tasks, a typical approach is to enhance pre-trained language models (PTMs) with a knowledge-aware graph neural network (GNN) encoder that leverages commonsense knowledge graphs (CSKGs). Despite their effectiveness, these approaches are built on heavy architectures and cannot clearly explain how external knowledge resources improve the reasoning capacity of PTMs. Considering this issue, we conduct a deep empirical analysis and find that it is indeed relation features from CSKGs (rather than node features) that mainly contribute to the performance improvement of PTMs. Based on this finding, we design a simple MLP-based knowledge encoder that utilizes statistical relation paths as features. Extensive experiments on five benchmarks demonstrate the effectiveness of our approach, which also largely reduces the parameters needed for encoding CSKGs.
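The "simple MLP-based knowledge encoder" described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact design: the featurization scheme (counting relation-type paths between question and answer concepts), the layer sizes, and the fusion step are all assumptions.

```python
import numpy as np

def mlp_knowledge_encoder(path_counts, W1, b1, W2, b2):
    """Map statistical relation-path features to a knowledge vector
    via a two-layer MLP (assumed architecture for illustration)."""
    h = np.maximum(path_counts @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                          # knowledge vector

rng = np.random.default_rng(0)
num_path_types = 1482   # e.g., relation paths of length 1 and 2 (illustrative)
hidden, out = 128, 100  # assumed layer sizes

W1 = rng.standard_normal((num_path_types, hidden)) * 0.01
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, out)) * 0.01
b2 = np.zeros(out)

# A batch of 4 question-answer pairs, each summarized by counts of how
# often each relation-type path connects its concepts in the CSKG.
path_counts = rng.random((4, num_path_types))
knowledge_vecs = mlp_knowledge_encoder(path_counts, W1, b1, W2, b2)
print(knowledge_vecs.shape)  # (4, 100)
```

The resulting knowledge vector would then be combined with the PTM's sentence representation (e.g., by concatenation before the answer-scoring head); unlike a GNN encoder, no per-node message passing or node embeddings are needed, which is where the parameter savings come from.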
