SOTAVerified

Long-range modeling

A new task for testing the long-sequence modeling capabilities and efficiency of language models.

Image credit: SCROLLS: Standardized CompaRison Over Long Language Sequences

Papers

Showing 71–80 of 95 papers

| Title | Status | Hype |
| --- | --- | --- |
| Dyadformer: A Multi-modal Transformer for Long-Range Modeling of Dyadic Interactions | | 0 |
| Is Long Range Sequential Modeling Necessary For Colorectal Tumor Segmentation? | | 0 |
| S7: Selective and Simplified State Space Layers for Sequence Modeling | | 0 |
| ZipIR: Latent Pyramid Diffusion Transformer for High-Resolution Image Restoration | | 0 |
| Diagonal State Spaces are as Effective as Structured State Spaces | | 0 |
| CoLT5: Faster Long-Range Transformers with Conditional Computation | | 0 |
| Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences | | 0 |
| Long-Range Modeling of Source Code Files with eWASH: Extended Window Access by Syntax Hierarchy | | 0 |
| Shuffle Mamba: State Space Models with Random Shuffle for Multi-Modal Image Fusion | | 0 |
| Pose Magic: Efficient and Temporally Consistent Human Pose Estimation with a Hybrid Mamba-GCN Network | | 0 |
Page 8 of 10

No leaderboard results yet.