
Project or Job Scheduling

Papers

Showing 1–25 of 3104 papers

| Title | Status | Hype |
|---|---|---|
| FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving | Code | 9 |
| PowerInfer-2: Fast Large Language Model Inference on a Smartphone | Code | 9 |
| Steering Language Models with Game-Theoretic Solvers | Code | 9 |
| FastSwitch: Optimizing Context Switching Efficiency in Fairness-aware Large Language Model Serving | Code | 7 |
| The Road Less Scheduled | Code | 7 |
| Colossal-Auto: Unified Automation of Parallelization and Activation Checkpoint for Large-scale Models | Code | 7 |
| AssetOpsBench: Benchmarking AI Agents for Task Automation in Industrial Asset Operations and Maintenance | Code | 5 |
| FlowTok: Flowing Seamlessly Across Text and Image Tokens | Code | 5 |
| MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models | Code | 5 |
| Optimizing LLM Inference: Fluid-Guided Online Scheduling with Memory Constraints | Code | 4 |
| One Step Diffusion via Shortcut Models | Code | 4 |
| PixelsDB: Serverless and NL-Aided Data Analytics with Flexible Service Levels and Prices | Code | 4 |
| Vidur: A Large-Scale Simulation Framework For LLM Inference | Code | 4 |
| ServerlessLLM: Low-Latency Serverless Inference for Large Language Models | Code | 4 |
| Orion-14B: Open-source Multilingual Large Language Models | Code | 4 |
| FedML Parrot: A Scalable Federated Learning System via Heterogeneity-aware Scheduling on Sequential and Hierarchical Training | Code | 4 |
| Vine Copulas as Differentiable Computational Graphs | Code | 3 |
| FlashDMoE: Fast Distributed MoE in a Single Kernel | Code | 3 |
| Efficiently Serving LLM Reasoning Programs with Certaindex | Code | 3 |
| A Survey on Large Language Model Acceleration based on KV Cache Management | Code | 3 |
| A Survey on Inference Optimization Techniques for Mixture of Experts Models | Code | 3 |
| Planning in Strawberry Fields: Evaluating and Improving the Planning and Scheduling Capabilities of LRM o1 | Code | 3 |
| LayerKV: Optimizing Large Language Model Serving with Layer-wise KV Cache Management | Code | 3 |
| FlashGS: Efficient 3D Gaussian Splatting for Large-scale and High-resolution Rendering | Code | 3 |
| Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve | Code | 3 |
Page 1 of 125

No leaderboard results yet.