
Long-context Language Models Cannot Retrieve Without Sufficient Steps

2024-10-06

Yijiong Yu, Ma Xiufa, Fang Jianwei, Zhi Xu, Su Guangyao, Wang Jiancheng, Yongfeng Huang, Zhixiao Qi, Wei Wang, Weifeng Liu, Ran Chen, Ji Pei

Code Available


Abstract

Long-context language models (LCLMs), characterized by their extensive context windows, are becoming popular. However, although they are nearly perfect at standard long-context retrieval tasks, we find that they are not good at all types of retrieval. Specifically, we identify two basic cases, "multi-matching retrieval" and "logic-based retrieval", which lie beyond LCLMs' ability boundary under normal settings. We then find that these cases can be well addressed given a sufficient number of reasoning steps, guided by specific CoT prompts, but this may cost too much time. We therefore argue that current LCLMs have no perfect solution for all types of retrieval tasks. Our work reveals some novel properties of retrieval tasks and LCLMs, showing that long-context handling still has a long way to go.
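The two failure cases can be made concrete with synthetic tasks. Below is a minimal, illustrative sketch of what "multi-matching" and "logic-based" retrieval might look like in a key-value haystack format; the function names, key format, and task construction are assumptions for illustration, not the paper's actual benchmark.

```python
import random

def multi_matching_task(n_pairs=100, n_matches=5, seed=0):
    """Multi-matching retrieval: several keys share one target value and the
    model must retrieve ALL of them (illustrative format, not the paper's)."""
    rng = random.Random(seed)
    keys = [f"key-{i}" for i in range(n_pairs)]
    match_keys = set(rng.sample(keys, n_matches))
    lines = []
    for k in keys:
        value = "alpha" if k in match_keys else f"val-{rng.randint(0, 10**6)}"
        lines.append(f"{k}: {value}")
    rng.shuffle(lines)
    question = "List every key whose value is 'alpha'."
    return "\n".join(lines), question, sorted(match_keys)

def logic_based_task(n_pairs=100, seed=0):
    """Logic-based retrieval: the target is defined by a condition over values
    (here, the maximum), so no single literal match identifies it."""
    rng = random.Random(seed)
    pairs = {f"key-{i}": rng.randint(0, 10**6) for i in range(n_pairs)}
    answer = max(pairs, key=pairs.get)
    lines = [f"{k}: {v}" for k, v in pairs.items()]
    rng.shuffle(lines)
    question = "Which key has the largest value?"
    return "\n".join(lines), question, answer

context, question, gold = multi_matching_task()
# Every gold key appears with the shared value somewhere in the haystack.
assert all(f"{k}: alpha" in context for k in gold)
```

Note the structural difference: the first task has several needles, so a correct answer must visit every match, while the second has no literal needle at all and requires comparing values across the whole context. This is consistent with the abstract's claim that such tasks demand a sufficient number of reasoning steps rather than a single retrieval.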
