SOTAVerified

Long-Context Understanding

Papers

Showing 1–10 of 81 papers

| Title | Status | Hype |
|---|---|---|
| Ref-Long: Benchmarking the Long-context Referencing Capability of Long-context Language Models | Code | 0 |
| Cache Me If You Can: How Many KVs Do You Need for Effective Long-Context LMs? | Code | 1 |
| PaceLLM: Brain-Inspired Large Language Models for Long-Context Understanding | — | 0 |
| DAM: Dynamic Attention Mask for Long-Context Large Language Model Inference Acceleration | Code | 1 |
| MesaNet: Sequence Modeling by Locally Optimal Test-Time Training | Code | 0 |
| ATLAS: Learning to Optimally Memorize the Context at Test Time | — | 0 |
| SpecExtend: A Drop-in Enhancement for Speculative Decoding of Long Sequences | Code | 0 |
| Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression | Code | 1 |
| MiniLongBench: The Low-cost Long Context Understanding Benchmark for Large Language Models | Code | 1 |
| Beyond Needle(s) in the Embodied Haystack: Environment, Architecture, and Training Considerations for Long Context Reasoning | — | 0 |
Page 1 of 9

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4-Turbo-1106 | 1k | 74 | — | Unverified |
| 2 | GPT-4-Turbo-0125 | 1k | 73.5 | — | Unverified |
| 3 | Claude-2 | 1k | 65 | — | Unverified |
| 4 | GPT-3.5-Turbo-1106 | 1k | 61.5 | — | Unverified |
| 5 | InternLM2-7b | 1k | 58.6 | — | Unverified |
| 6 | Vicuna-13b-v1.5-16k | 1k | 53.4 | — | Unverified |
| 7 | ChatGLM3-6b-32k | 1k | 39.8 | — | Unverified |
| 8 | Vicuna-7b-v1.5-16k | 1k | 37 | — | Unverified |
| 9 | LongChat-7b-v1.5-32k | 1k | 32.4 | — | Unverified |
| 10 | ChatGLM2-6b-32k | 1k | 31.2 | — | Unverified |