| Title | Date | Topics | Code | Score |
|---|---|---|---|---|
| The 1st Solution for 4th PVUW MeViS Challenge: Unleashing the Potential of Large Multimodal Models for Referring Video Segmentation | Apr 7, 2025 | Inference Optimization, Referring Video Object Segmentation | Code Available | 5 |
| Optimizing LLM Inference: Fluid-Guided Online Scheduling with Memory Constraints | Apr 15, 2025 | GPU, Inference Optimization | Code Available | 4 |
| SimpleAR: Pushing the Frontier of Autoregressive Visual Generation through Pretraining, SFT, and RL | Apr 15, 2025 | Inference Optimization | Code Available | 3 |
| A Survey on Inference Optimization Techniques for Mixture of Experts Models | Dec 18, 2024 | Computational Efficiency, Distributed Computing | Code Available | 3 |
| Inference Performance Optimization for Large Language Models on CPUs | Jul 10, 2024 | CPU, GPU | Code Available | 3 |
| CycleBNN: Cyclic Precision Training in Binary Neural Networks | Sep 28, 2024 | Inference Optimization | Code Available | 2 |
| Painterly Image Harmonization using Diffusion Model | Aug 4, 2023 | Generative Adversarial Network, Image Harmonization | Code Available | 1 |
| Adaptive Deep Neural Network Inference Optimization with EENet | Jan 15, 2023 | Inference Optimization, Scheduling | Code Available | 1 |
| ADJUST: A Dictionary-Based Joint Reconstruction and Unmixing Method for Spectral Tomography | Dec 21, 2021 | 3D Reconstruction, Computed Tomography (CT) | Code Available | 1 |
| A Novel 1D State Space for Efficient Music Rhythmic Analysis | Nov 1, 2021 | Inference Optimization, Online Beat Tracking | Code Available | 1 |
| Easy and Efficient Transformer: Scalable Inference Solution For large NLP model | Apr 26, 2021 | Decoder, GPU | Code Available | 1 |
| Sub-MoE: Efficient Mixture-of-Expert LLMs Compression via Subspace Expert Merging | Jun 29, 2025 | Inference Optimization, Mixture-of-Experts | Code Available | 0 |
| The Foundation Cracks: A Comprehensive Study on Bugs and Testing Practices in LLM Libraries | Jun 14, 2025 | Bug fixing, Inference Optimization | Unverified | 0 |
| Brevity is the soul of sustainability: Characterizing LLM response lengths | Jun 10, 2025 | Decoder, Inference Optimization | Code Available | 0 |
| DSMentor: Enhancing Data Science Agents with Curriculum Learning and Online Knowledge Accumulation | May 20, 2025 | In-Context Learning, Inference Optimization | Unverified | 0 |
| Faster MoE LLM Inference for Extremely Large Models | May 6, 2025 | Inference Optimization, Mixture-of-Experts | Unverified | 0 |
| Energy-Efficient Transformer Inference: Optimization Strategies for Time Series Classification | Feb 23, 2025 | Classification, Inference Optimization | Unverified | 0 |
| Hybrid Offline-online Scheduling Method for Large Language Model Inference Optimization | Feb 14, 2025 | GSM8K, Inference Optimization | Unverified | 0 |
| DVFS-Aware DNN Inference on GPUs: Latency Modeling and Performance Analysis | Feb 10, 2025 | CPU, Inference Optimization | Unverified | 0 |
| Hellinger-Kantorovich Gradient Flows: Global Exponential Decay of Entropy Functionals | Jan 28, 2025 | Inference Optimization | Unverified | 0 |
| FluidML: Fast and Memory Efficient Inference Optimization | Nov 14, 2024 | Autonomous Vehicles, Inference Optimization | Unverified | 0 |
| A Temporal Linear Network for Time Series Forecasting | Oct 28, 2024 | Computational Efficiency, Inference Optimization | Code Available | 0 |
| LLM-Rank: A Graph Theoretical Approach to Pruning Large Language Models | Oct 17, 2024 | Inference Optimization, Network Pruning | Code Available | 0 |
| EdgeRL: Reinforcement Learning-driven Deep Learning Model Inference Optimization at Edge | Oct 16, 2024 | Deep Learning, Inference Optimization | Unverified | 0 |
| Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning | Sep 2, 2024 | Inference Optimization, Language Modeling | Unverified | 0 |