
Long-range modeling

A new task for testing the long-sequence modeling capabilities and efficiency of language models.

Image credit: SCROLLS: Standardized CompaRison Over Long Language Sequences

Papers

Showing 21–30 of 95 papers

Title | Status | Hype
LongT5: Efficient Text-To-Text Transformer for Long Sequences | Code | 1
Adapting Pretrained Text-to-Text Models for Long Text Sequences | Code | 1
Efficient Long-Text Understanding with Short-Text Models | Code | 1
CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling | Code | 1
Efficiently Modeling Long Sequences with Structured State Spaces | Code | 1
Image Super-Resolution With Non-Local Sparse Attention | Code | 1
Long Range Propagation on Continuous-Time Dynamic Graphs | Code | 1
Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator | Code | 1
DSANet: Dynamic Segment Aggregation Network for Video-Level Representation Learning | Code | 1
A Simple LLM Framework for Long-Range Video Question-Answering | Code | 1
Page 3 of 10

No leaderboard results yet.