| Simple and Fast Distillation of Diffusion Models | Sep 29, 2024 | GPU, Image Generation | Code Available | 3 |
| 3DGS-LM: Faster Gaussian-Splatting Optimization with Levenberg-Marquardt | Sep 19, 2024 | 3DGS, GPU | Code Available | 3 |
| LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via a Hybrid Architecture | Sep 4, 2024 | GPU, Mamba | Code Available | 3 |
| LinFusion: 1 GPU, 1 Minute, 16K Image | Sep 3, 2024 | 16k, Causal Inference | Code Available | 3 |
| InstanSeg: an embedding-based instance segmentation algorithm optimized for accurate, efficient and portable cell segmentation | Aug 28, 2024 | Cell Segmentation, GPU | Code Available | 3 |
| OctFusion: Octree-based Diffusion Models for 3D Shape Generation | Aug 27, 2024 | 3D Generation, 3D Shape Generation | Code Available | 3 |
| The Mamba in the Llama: Distilling and Accelerating Hybrid Models | Aug 27, 2024 | GPU, Language Modeling | Code Available | 3 |
| Accelerating Goal-Conditioned RL Algorithms and Research | Aug 20, 2024 | GPU, reinforcement-learning | Code Available | 3 |
| ABQ-LLM: Arbitrary-Bit Quantized Inference Acceleration for Large Language Models | Aug 16, 2024 | GPU, Model Compression | Code Available | 3 |
| LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale | Aug 10, 2024 | GPU, Language Modelling | Code Available | 3 |