SOTAVerified

Long-range modeling

A new task for testing the long-sequence modeling capabilities and efficiency of language models.

Image credit: SCROLLS: Standardized CompaRison Over Long Language Sequences

Papers

Showing 41–50 of 95 papers

| Title | Status | Hype |
|---|---|---|
| U-Net vs Transformer: Is U-Net Outdated in Medical Image Registration? | Code | 1 |
| Efficient Long-Text Understanding with Short-Text Models | Code | 1 |
| Weakly Supervised Object Localization via Transformer with Implicit Spatial Calibration | Code | 1 |
| ChordMixer: A Scalable Neural Attention Model for Sequences with Different Lengths | Code | 1 |
| UL2: Unifying Language Learning Paradigms | Code | 1 |
| Paramixer: Parameterizing Mixing Links in Sparse Factors Works Better than Dot-Product Self-Attention | Code | 1 |
| SCROLLS: Standardized CompaRison Over Long Language Sequences | Code | 1 |
| Classification of Long Sequential Data using Circular Dilated Convolutional Neural Networks | Code | 1 |
| LongT5: Efficient Text-To-Text Transformer for Long Sequences | Code | 1 |
| Efficiently Modeling Long Sequences with Structured State Spaces | Code | 1 |
Page 5 of 10

No leaderboard results yet.