SOTAVerified

Long-Context Understanding

Papers

Showing 1–10 of 81 papers

| Title | Status | Hype |
| --- | --- | --- |
| Ref-Long: Benchmarking the Long-context Referencing Capability of Long-context Language Models | Code | 0 |
| Cache Me If You Can: How Many KVs Do You Need for Effective Long-Context LMs? | Code | 1 |
| PaceLLM: Brain-Inspired Large Language Models for Long-Context Understanding | — | 0 |
| DAM: Dynamic Attention Mask for Long-Context Large Language Model Inference Acceleration | Code | 1 |
| MesaNet: Sequence Modeling by Locally Optimal Test-Time Training | Code | 0 |
| ATLAS: Learning to Optimally Memorize the Context at Test Time | — | 0 |
| SpecExtend: A Drop-in Enhancement for Speculative Decoding of Long Sequences | Code | 0 |
| Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression | Code | 1 |
| MiniLongBench: The Low-cost Long Context Understanding Benchmark for Large Language Models | Code | 1 |
| Beyond Needle(s) in the Embodied Haystack: Environment, Architecture, and Training Considerations for Long Context Reasoning | — | 0 |
Page 1 of 9

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GALI (Llama3-8b-ins-4k-to-16k) | Average Score | 46.22 | — | Unverified |
| 2 | GALI (Llama3-8b-ins-8k-to-32k) | Average Score | 45.38 | — | Unverified |
| 3 | GALI (Llama3-8b-ins-8k-to-16k) | Average Score | 45.17 | — | Unverified |