| Easy and Efficient Transformer: Scalable Inference Solution For large NLP model | Apr 26, 2021 | Decoder, GPU | Code Available | 1 |
| Sub-MoE: Efficient Mixture-of-Expert LLMs Compression via Subspace Expert Merging | Jun 29, 2025 | Inference Optimization, Mixture-of-Experts | Code Available | 0 |
| The Foundation Cracks: A Comprehensive Study on Bugs and Testing Practices in LLM Libraries | Jun 14, 2025 | Bug Fixing, Inference Optimization | Unverified | 0 |
| Brevity is the soul of sustainability: Characterizing LLM response lengths | Jun 10, 2025 | Decoder, Inference Optimization | Code Available | 0 |
| DSMentor: Enhancing Data Science Agents with Curriculum Learning and Online Knowledge Accumulation | May 20, 2025 | In-Context Learning, Inference Optimization | Unverified | 0 |
| Faster MoE LLM Inference for Extremely Large Models | May 6, 2025 | Inference Optimization, Mixture-of-Experts | Unverified | 0 |
| Energy-Efficient Transformer Inference: Optimization Strategies for Time Series Classification | Feb 23, 2025 | Classification, Inference Optimization | Unverified | 0 |
| Hybrid Offline-online Scheduling Method for Large Language Model Inference Optimization | Feb 14, 2025 | GSM8K, Inference Optimization | Unverified | 0 |
| DVFS-Aware DNN Inference on GPUs: Latency Modeling and Performance Analysis | Feb 10, 2025 | CPU, Inference Optimization | Unverified | 0 |
| Hellinger-Kantorovich Gradient Flows: Global Exponential Decay of Entropy Functionals | Jan 28, 2025 | Inference Optimization | Unverified | 0 |