SOTAVerified

Mixture-of-Experts

Papers

Showing 851–875 of 1312 papers

Title | Status | Hype
k-Winners-Take-All Ensemble Neural Network | Code | 0
Fast Inference of Mixture-of-Experts Language Models with Offloading | Code | 4
Efficient Deweather Mixture-of-Experts with Uncertainty-aware Feature-wise Linear Modulation | — | 0
Agent4Ranking: Semantic Robust Ranking via Personalized Query Rewriting Using Multi-agent LLM | — | 0
SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling | Code | 3
FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing | Code | 1
Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning | Code | 2
Generator Assisted Mixture of Experts For Feature Acquisition in Batch | — | 0
Mixture of Cluster-conditional LoRA Experts for Vision-language Instruction Tuning | — | 0
From Google Gemini to OpenAI Q* (Q-Star): A Survey of Reshaping the Generative Artificial Intelligence (AI) Research Landscape | — | 0
When Parameter-efficient Tuning Meets General-purpose Vision-language Models | Code | 1
LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin | Code | 2
Online Action Recognition for Human Risk Prediction with Anticipated Haptic Alert via Wearables | Code | 0
Training of Neural Networks with Uncertain Data: A Mixture of Experts Approach | — | 0
SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention | Code | 1
Parameter Efficient Adaptation for Image Restoration with Heterogeneous Mixture-of-Experts | Code | 1
HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts | Code | 1
Mixture-of-Linear-Experts for Long-term Time Series Forecasting | Code | 1
GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts | Code | 1
MoE-AMC: Enhancing Automatic Modulation Classification Performance Using Mixture-of-Experts | — | 0
MoEC: Mixture of Experts Implicit Neural Compression | — | 0
Language-driven All-in-one Adverse Weather Removal | — | 0
Omni-SMoLA: Boosting Generalist Multimodal Models with Soft Mixture of Low-rank Experts | — | 0
HOMOE: A Memory-Based and Composition-Aware Framework for Zero-Shot Learning with Hopfield Network and Soft Mixture of Experts | — | 0
Efficient Model Agnostic Approach for Implicit Neural Representation Based Arbitrary-Scale Image Super-Resolution | — | 0
Page 35 of 53

No leaderboard results yet.