SOTAVerified

Inference Optimization

Papers

Showing 1–10 of 56 papers

| Title | Status | Hype |
| --- | --- | --- |
| Sub-MoE: Efficient Mixture-of-Expert LLMs Compression via Subspace Expert Merging | Code | 0 |
| The Foundation Cracks: A Comprehensive Study on Bugs and Testing Practices in LLM Libraries | | 0 |
| Brevity is the soul of sustainability: Characterizing LLM response lengths | Code | 0 |
| DSMentor: Enhancing Data Science Agents with Curriculum Learning and Online Knowledge Accumulation | | 0 |
| Faster MoE LLM Inference for Extremely Large Models | | 0 |
| SimpleAR: Pushing the Frontier of Autoregressive Visual Generation through Pretraining, SFT, and RL | Code | 3 |
| Optimizing LLM Inference: Fluid-Guided Online Scheduling with Memory Constraints | Code | 4 |
| The 1st Solution for 4th PVUW MeViS Challenge: Unleashing the Potential of Large Multimodal Models for Referring Video Segmentation | Code | 5 |
| Energy-Efficient Transformer Inference: Optimization Strategies for Time Series Classification | | 0 |
| Hybrid Offline-online Scheduling Method for Large Language Model Inference Optimization | | 0 |

No leaderboard results yet.