
Quantifying In-Context Reasoning Effects and Memorization Effects in LLMs

2024-05-20

Siyu Lou, Yuntian Chen, Xiaodan Liang, Liang Lin, Quanshi Zhang


Abstract

In this study, we propose an axiomatic system to define and quantify the precise memorization effects and in-context reasoning effects used by a large language model (LLM) for language generation. These effects are formulated as non-linear interactions between tokens/words encoded by the LLM. Specifically, the axiomatic system enables us to categorize memorization effects into foundational memorization effects and chaotic memorization effects, and to further classify in-context reasoning effects into enhanced inference patterns, eliminated inference patterns, and reversed inference patterns. Moreover, the decomposed effects satisfy the sparsity property and the universal matching property, which mathematically guarantee that the LLM's confidence score can be faithfully decomposed into memorization effects and in-context reasoning effects. Experiments show that the clear disentanglement of memorization effects and in-context reasoning effects enables a straightforward examination of the detailed inference patterns encoded by LLMs.
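The abstract does not spell out the paper's exact definitions, but interaction effects with a "universal matching" guarantee are commonly formalized as Harsanyi interactions: each subset S of input tokens receives an effect I(S) computed by Möbius inversion of a value function v(T) (e.g. the model's confidence score on a masked input keeping only tokens in T), and these effects sum exactly to the score on the full input. The sketch below illustrates that decomposition with a hypothetical toy value function standing in for an LLM call; `v`, the pairwise bonus, and the token count are all assumptions, not the paper's setup.

```python
from itertools import combinations

def subsets(tokens):
    """All subsets of an index tuple, including the empty set."""
    for r in range(len(tokens) + 1):
        yield from combinations(tokens, r)

def harsanyi_interactions(v, n):
    """I(S) = sum over T subset of S of (-1)^(|S|-|T|) * v(T), for every S."""
    effects = {}
    for S in subsets(tuple(range(n))):
        effects[S] = sum((-1) ** (len(S) - len(T)) * v(T)
                         for T in subsets(S))
    return effects

# Hypothetical "confidence score" on a masked input keeping only tokens in T.
# The 0.5 bonus when tokens 0 and 1 co-occur is a stand-in for a non-linear
# interaction the decomposition should isolate.
def v(T):
    pair_bonus = 0.5 if 0 in T and 1 in T else 0.0
    return 0.1 + 0.2 * len(T) + pair_bonus

n = 3
effects = harsanyi_interactions(v, n)

# Universal matching: the effects sum exactly to the full-input score v(N).
total = sum(effects.values())
full = v(tuple(range(n)))
assert abs(total - full) < 1e-9
# The injected pairwise interaction is recovered on exactly {0, 1}.
assert abs(effects[(0, 1)] - 0.5) < 1e-9
```

Only the pair {0, 1} receives a non-trivial interaction effect here, which mirrors the sparsity property the abstract mentions: most subsets carry (near-)zero effect, so the score decomposes into a small number of salient patterns.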
