The quasi-semantic competence of LLMs: a case study on the part-whole relation

2025-04-03

Mattia Proietti, Alessandro Lenci

Abstract

Understanding the extent and depth of the semantic competence of Large Language Models (LLMs) is at the center of the current scientific agenda in Artificial Intelligence (AI) and Computational Linguistics (CL). We contribute to this endeavor by investigating their knowledge of the part-whole relation, a.k.a. meronymy, which plays a crucial role in lexical organization but is significantly understudied. We used data from ConceptNet relations (Speer et al., 2016) and human-generated semantic feature norms (McRae et al., 2005) to explore the abilities of LLMs to deal with part-whole relations. We employed several methods based on three levels of analysis: (i) behavioral testing via prompting, where we directly queried the models on their knowledge of meronymy; (ii) sentence probability scoring, where we tested the models' ability to discriminate between correct (real) and incorrect (asymmetric counterfactual) part-whole relations; and (iii) concept representation analysis in vector space, where we demonstrated the linear organization of the part-whole concept in the embedding and unembedding spaces. These analyses present a complex picture, revealing that LLMs' knowledge of this relation is only partial. They exhibit just a "quasi-semantic" competence and still fall short of capturing deep inferential properties.
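As a rough illustration of the second level of analysis described in the abstract, the sketch below scores a correct meronymic statement against its asymmetric counterfactual by comparing the total log-probability a causal language model assigns to each sentence. This is a minimal sketch under stated assumptions, not the authors' code: the model choice (gpt2), the sentence template, and the concept pair are all illustrative.

```python
# Minimal sketch of sentence probability scoring for part-whole relations.
# Assumptions: gpt2 as the scored model; the sentence pair is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of the log-probabilities the model assigns to each token."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift so each position predicts the next token.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()

# Real meronymic statement vs. its asymmetric counterfactual.
real = sentence_logprob("A wheel is a part of a car.")
counterfactual = sentence_logprob("A car is a part of a wheel.")
print(f"real: {real:.2f}  counterfactual: {counterfactual:.2f}")
print("model prefers the real part-whole direction:", real > counterfactual)
```

Under this setup, the discrimination test reduces to checking, over many concept pairs, whether the real statement receives a higher probability than its reversed counterfactual.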
