SOTAVerified

Long-range modeling

A new task for testing the long-sequence modeling capabilities and efficiency of language models.

Image credit: SCROLLS: Standardized CompaRison Over Long Language Sequences

Papers

Showing 26–50 of 95 papers

Title | Status | Hype
Hierarchical Separable Video Transformer for Snapshot Compressive Imaging | Code | 1
Long Range Propagation on Continuous-Time Dynamic Graphs | Code | 1
Spatio-Spectral Graph Neural Networks | Code | 1
A Simple LLM Framework for Long-Range Video Question-Answering | Code | 1
Recurrent Distance Filtering for Graph Representation Learning | Code | 1
ViTEraser: Harnessing the Power of Vision Transformers for Scene Text Removal with SegMIM Pretraining | Code | 1
Sparse Modular Activation for Efficient Sequence Modeling | Code | 1
The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks | Code | 1
Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation | Code | 1
Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator | Code | 1
T-former: An Efficient Transformer for Image Inpainting | Code | 1
What Makes Convolutional Models Great on Long Sequence Modeling? | Code | 1
CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling | Code | 1
Multi-scale Attention Network for Single Image Super-Resolution | Code | 1
Adapting Pretrained Text-to-Text Models for Long Text Sequences | Code | 1
U-Net vs Transformer: Is U-Net Outdated in Medical Image Registration? | Code | 1
Efficient Long-Text Understanding with Short-Text Models | Code | 1
Weakly Supervised Object Localization via Transformer with Implicit Spatial Calibration | Code | 1
ChordMixer: A Scalable Neural Attention Model for Sequences with Different Lengths | Code | 1
UL2: Unifying Language Learning Paradigms | Code | 1
Paramixer: Parameterizing Mixing Links in Sparse Factors Works Better than Dot-Product Self-Attention | Code | 1
SCROLLS: Standardized CompaRison Over Long Language Sequences | Code | 1
Classification of Long Sequential Data using Circular Dilated Convolutional Neural Networks | Code | 1
LongT5: Efficient Text-To-Text Transformer for Long Sequences | Code | 1
Efficiently Modeling Long Sequences with Structured State Spaces | Code | 1
Page 2 of 4

No leaderboard results yet.