SOTAVerified

Mixture-of-Experts

Papers

Showing 1051–1100 of 1312 papers

Title | Status | Hype
FaVChat: Unlocking Fine-Grained Facial Video Understanding with Multimodal Large Language Models | | 0
FEAMOE: Fair, Explainable and Adaptive Mixture of Experts | | 0
Federated learning using mixture of experts | | 0
Federated Mixture of Experts | | 0
FedMerge: Federated Personalization via Model Merging | | 0
FedMoE-DA: Federated Mixture of Experts via Domain Aware Fine-grained Aggregation | | 0
FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts | | 0
Learning to Specialize: Joint Gating-Expert Training for Adaptive MoEs in Decentralized Settings | | 0
Filtered not Mixed: Stochastic Filtering-Based Online Gating for Mixture of Large Language Models | | 0
Finding Fantastic Experts in MoEs: A Unified Study for Expert Dropping Strategies and Observations | | 0
FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs | | 0
FinTeamExperts: Role Specialized MOEs For Financial Analysis | | 0
Fixing MoE Over-Fitting on Low-Resource Languages in Multilingual Machine Translation | | 0
Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models | | 0
FlexMoE: Scaling Large-scale Sparse Pre-trained Model Training via Dynamic Device Placement | | 0
FloE: On-the-Fly MoE Inference on Memory-constrained GPU | | 0
fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts Serving | | 0
FMT: A Multimodal Pneumonia Detection Model Based on Stacking MOE Framework | | 0
ForceVLA: Enhancing VLA Models with a Force-aware MoE for Contact-rich Manipulation | | 0
Free Agent in Agent-Based Mixture-of-Experts Generative AI Framework | | 0
FreqMoE: Dynamic Frequency Enhancement for Neural PDE Solvers | | 0
Fresh-CL: Feature Realignment through Experts on Hypersphere in Continual Learning | | 0
From Google Gemini to OpenAI Q* (Q-Star): A Survey of Reshaping the Generative Artificial Intelligence (AI) Research Landscape | | 0
FSMoE: A Flexible and Scalable Training System for Sparse Mixture-of-Experts Models | | 0
Full-Precision Free Binary Graph Neural Networks | | 0
Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs | | 0
Functional mixture-of-experts for classification | | 0
FuseMoE: Mixture-of-Experts Transformers for Fleximodal Fusion | | 0
FuxiMT: Sparsifying Large Language Models for Chinese-Centric Multilingual Machine Translation | | 0
Galaxy Walker: Geometry-aware VLMs For Galaxy-scale Understanding | | 0
Gated Ensemble of Spatio-temporal Mixture of Experts for Multi-task Learning in Ride-hailing System | | 0
Gating Dropout: Communication-efficient Regularization for Sparsely Activated Transformers | | 0
GEMNET: Effective Gated Gazetteer Representations for Recognizing Complex Entities in Low-context Input | | 0
Generalizable Person Re-identification with Relevance-aware Mixture of Experts | | 0
Generalization Error Analysis for Sparse Mixture-of-Experts: A Preliminary Study | | 0
Generalizing Multimodal Variational Methods to Sets | | 0
Generator Assisted Mixture of Experts For Feature Acquisition in Batch | | 0
GeRM: A Generalist Robotic Model with Mixture-of-experts for Quadruped Robot | | 0
GETS: Ensemble Temperature Scaling for Calibration in Graph Neural Networks | | 0
GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture | | 0
GLA in MediaEval 2018 Emotional Impact of Movies Task | | 0
GLaM: Efficient Scaling of Language Models with Mixture-of-Experts | | 0
GM-MoE: Low-Light Enhancement with Gated-Mechanism Mixture-of-Experts | | 0
GradPower: Powering Gradients for Faster Language Model Pre-Training | | 0
Graph Mixture of Experts and Memory-augmented Routers for Multivariate Time Series Anomaly Detection | | 0
GRAPHMOE: Amplifying Cognitive Depth of Mixture-of-Experts Network via Introducing Self-Rethinking Mechanism | | 0
GRIN: GRadient-INformed MoE | | 0
HAECcity: Open-Vocabulary Scene Understanding of City-Scale Point Clouds with Superpoint Graph Clustering | | 0
Half-Space Feature Learning in Neural Networks | | 0
Hard Mixtures of Experts for Large Scale Weakly Supervised Vision | | 0
Page 22 of 27
