SOTAVerified

Mixture-of-Experts

Papers

Showing 991–1000 of 1312 papers

Title | Status | Hype
Task-Based MoE for Multitask Multilingual Machine Translation | — | 0
SwapMoE: Serving Off-the-shelf MoE-based Large Language Models with Tunable Memory Budget | — | 0
EVE: Efficient Vision-Language Pre-training with Masked Prediction and Modality-Aware MoE | — | 0
Beyond Sharing: Conflict-Aware Multivariate Time Series Anomaly Detection | Code | 0
FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs | — | 0
Experts Weights Averaging: A New General Training Scheme for Vision Transformers | — | 0
A Novel Temporal Multi-Gate Mixture-of-Experts Approach for Vehicle Trajectory and Driving Intention Prediction | — | 0
Uncertainty-Encoded Multi-Modal Fusion for Robust Object Detection in Autonomous Driving | — | 0
Domain-Agnostic Neural Architecture for Class Incremental Continual Learning in Document Processing Platform | Code | 0
Bidirectional Attention as a Mixture of Continuous Word Experts | Code | 0
Page 100 of 132

No leaderboard results yet.