| I-BERT: Integer-only BERT Quantization | Jan 5, 2021 | GPU, Natural Language Inference | Code Available | 2 |
| Accelerating Transformer Pre-training with 2:4 Sparsity | Apr 2, 2024 | GPU | Code Available | 2 |
| HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading | Feb 18, 2025 | Computational Efficiency, CPU | Code Available | 2 |
| Hardware-Aware Parallel Prompt Decoding for Memory-Efficient Acceleration of LLM Inference | May 28, 2024 | GPU, Text Generation | Code Available | 2 |
| Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow | Jun 3, 2024 | GPU, Language Modeling | Code Available | 2 |
| Characterization of Large Language Model Development in the Datacenter | Mar 12, 2024 | GPU, Language Modeling | Code Available | 2 |
| H_2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models | Jun 24, 2023 | GPU | Code Available | 2 |
| Habitat 2.0: Training Home Assistants to Rearrange their Habitat | Jun 28, 2021 | Deep Reinforcement Learning, GPU | Code Available | 2 |
| Grouping First, Attending Smartly: Training-Free Acceleration for Diffusion Transformers | May 20, 2025 | GPU, Video Generation | Code Available | 2 |
| Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient | Nov 26, 2024 | GPU, Image Generation | Code Available | 2 |