| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Fine-tuning Quantized Neural Networks with Zeroth-order Optimization | May 19, 2025 | GPU, Quantization | Code Available |
| LightRetriever: A LLM-based Hybrid Retrieval Architecture with 1000x Faster Query Inference | May 18, 2025 | GPU, Retrieval | Code Available |
| Tiny QA Benchmark++: Ultra-Lightweight, Synthetic Multilingual Dataset Generation & Smoke-Tests for Continuous LLM Evaluation | May 17, 2025 | Dataset Generation, GPU | Code Available |
| Flash Invariant Point Attention | May 16, 2025 | GPU | Code Available |
| SpecOffload: Unlocking Latent GPU Capacity for LLM Inference on Resource-Constrained Devices | May 15, 2025 | CPU, GPU | Code Available |
| FlashMLA-ETAP: Efficient Transpose Attention Pipeline for Accelerating MLA Inference on NVIDIA H20 GPUs | May 13, 2025 | GPU | Code Available |
| JaxRobotarium: Training and Deploying Multi-Robot Policies in 10 Minutes | May 10, 2025 | Benchmarking, GPU | Code Available |
| Fast Differentiable Modal Simulation of Non-linear Strings, Membranes, and Plates | May 9, 2025 | Audio Synthesis, CPU | Code Available |
| Mesh-Learner: Texturing Mesh with Spherical Harmonics | Apr 28, 2025 | 3D Reconstruction, CPU | Code Available |
| Taming the Titans: A Survey of Efficient LLM Inference Serving | Apr 28, 2025 | GPU, Miscellaneous | Code Available |