| Title | Date | Tags | Code Status | Count |
| --- | --- | --- | --- | --- |
| GETS: Ensemble Temperature Scaling for Calibration in Graph Neural Networks | Oct 12, 2024 | Mixture-of-Experts | Unverified | 0 |
| Retraining-Free Merging of Sparse MoE via Hierarchical Clustering | Oct 11, 2024 | Clustering, Language Modeling | Code Available | 1 |
| Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts | Oct 10, 2024 | Mixture-of-Experts | Code Available | 2 |
| Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training | Oct 10, 2024 | Mixture-of-Experts, Visual Question Answering | Unverified | 0 |
| More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing | Oct 10, 2024 | Image Classification | Code Available | 0 |
| Efficient Dictionary Learning with Switch Sparse Autoencoders | Oct 10, 2024 | Dictionary Learning, Mixture-of-Experts | Code Available | 1 |
| Upcycling Large Language Models into Mixture of Experts | Oct 10, 2024 | Mixture-of-Experts, MMLU | Unverified | 0 |
| MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts | Oct 9, 2024 | GPU, Mixture-of-Experts | Code Available | 4 |
| Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs | Oct 9, 2024 | Common Sense Reasoning, Mixture-of-Experts | Unverified | 0 |
| Toward generalizable learning of all (linear) first-order methods via memory augmented Transformers | Oct 8, 2024 | Mixture-of-Experts | Unverified | 0 |
| Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models | Oct 8, 2024 | Mixture-of-Experts | Unverified | 0 |
| Aria: An Open Multimodal Native Mixture-of-Experts Model | Oct 8, 2024 | Instruction Following, Mixture-of-Experts | Code Available | 5 |
| Probing the Robustness of Theory of Mind in Large Language Models | Oct 8, 2024 | Mixture-of-Experts | Unverified | 0 |
| MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More | Oct 8, 2024 | Mixture-of-Experts, Quantization | Code Available | 2 |
| Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild | Oct 7, 2024 | Benchmarking, Mixture-of-Experts | Code Available | 1 |
| Multimodal Fusion Strategies for Mapping Biophysical Landscape Features | Oct 7, 2024 | Mixture-of-Experts | Code Available | 0 |
| Realizing Video Summarization from the Path of Language-based Semantic Understanding | Oct 6, 2024 | Mixture-of-Experts, Video Generation | Unverified | 0 |
| A Dynamic Approach to Stock Price Prediction: Comparing RNN and Mixture of Experts Models Across Different Volatility Profiles | Oct 4, 2024 | Mixture-of-Experts, Stock Price Prediction | Unverified | 0 |
| Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding with LLMs | Oct 4, 2024 | Contrastive Learning, Denoising | Unverified | 0 |
| MLP-KAN: Unifying Deep Representation and Function Learning | Oct 3, 2024 | Kolmogorov-Arnold Networks, Mixture-of-Experts | Code Available | 0 |
| On Expert Estimation in Hierarchical Mixture of Experts: Beyond Softmax Gating Functions | Oct 3, 2024 | Image Classification | Unverified | 0 |
| Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices | Oct 3, 2024 | Mixture-of-Experts | Code Available | 1 |
| Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping | Oct 3, 2024 | GPU, Mixture-of-Experts | Unverified | 0 |
| Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts | Oct 3, 2024 | Mixture-of-Experts, Parameter Estimation | Code Available | 0 |
| Neutral residues: revisiting adapters for model extension | Oct 3, 2024 | Domain Adaptation, Language Modeling | Unverified | 0 |
| EC-DIT: Scaling Diffusion Transformers with Adaptive Expert-Choice Routing | Oct 2, 2024 | Image Generation, Mixture-of-Experts | Unverified | 0 |
| The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs | Oct 2, 2024 | Benchmarking, Hallucination | Unverified | 0 |
| Open-RAG: Enhanced Retrieval-Augmented Reasoning with Open-Source Large Language Models | Oct 2, 2024 | Mixture-of-Experts, Navigate | Code Available | 2 |
| Upcycling Instruction Tuning from Dense to Mixture-of-Experts via Parameter Merging | Oct 2, 2024 | Diversity, Mixture-of-Experts | Unverified | 0 |
| MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards | Oct 1, 2024 | GPU, Mixture-of-Experts | Unverified | 0 |
| UniAdapt: A Universal Adapter for Knowledge Calibration | Oct 1, 2024 | Mixture-of-Experts, Model Editing | Unverified | 0 |
| Robust Traffic Forecasting against Spatial Shift over Years | Oct 1, 2024 | Attribute, Mixture-of-Experts | Code Available | 0 |
| MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning | Sep 30, 2024 | Mixture-of-Experts, Optical Character Recognition (OCR) | Unverified | 0 |
| IDEA: An Inverse Domain Expert Adaptation Based Active DNN IP Protection Method | Sep 29, 2024 | Domain Adaptation, Mixture-of-Experts | Unverified | 0 |
| CLIP-MoE: Towards Building Mixture of Experts for CLIP with Diversified Multiplet Upcycling | Sep 28, 2024 | Image Classification | Code Available | 2 |
| SciDFM: A Large Language Model with Mixture-of-Experts for Science | Sep 27, 2024 | Language Modeling | Unverified | 0 |
| A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction | Sep 26, 2024 | Mixture-of-Experts, Prediction | Code Available | 1 |
| Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE | Sep 26, 2024 | Image Classification | Code Available | 1 |
| Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts | Sep 24, 2024 | Computational Efficiency, Mixture-of-Experts | Code Available | 4 |
| Toward Mixture-of-Experts Enabled Trustworthy Semantic Communication for 6G Networks | Sep 24, 2024 | Mixture-of-Experts, Semantic Communication | Unverified | 0 |
| Leveraging Mixture of Experts for Improved Speech Deepfake Detection | Sep 24, 2024 | DeepFake Detection, Face Swapping | Unverified | 0 |
| Boosting Code-Switching ASR with Mixture of Experts Enhanced Speech-Conditioned LLM | Sep 24, 2024 | Automatic Speech Recognition (ASR) | Unverified | 0 |
| Multi-Modal Generative AI: Multi-modal LLM, Diffusion and Beyond | Sep 23, 2024 | Language Modeling, Large Language Model | Unverified | 0 |
| A Gated Residual Kolmogorov-Arnold Networks for Mixtures of Experts | Sep 23, 2024 | Kolmogorov-Arnold Networks, Mixture-of-Experts | Code Available | 0 |
| Routing in Sparsely-gated Language Models responds to Context | Sep 21, 2024 | Decoder, Mixture-of-Experts | Unverified | 0 |
| Multi-omics data integration for early diagnosis of hepatocellular carcinoma (HCC) using machine learning | Sep 20, 2024 | Data Integration, Mixture-of-Experts | Unverified | 0 |
| On-Device Collaborative Language Modeling via a Mixture of Generalists and Specialists | Sep 20, 2024 | Federated Learning, Language Modeling | Code Available | 0 |
| Robust Audiovisual Speech Recognition Models with Mixture-of-Experts | Sep 19, 2024 | Mixture-of-Experts, Robust Speech Recognition | Unverified | 0 |
| Mixture of Diverse Size Experts | Sep 18, 2024 | Mixture-of-Experts | Unverified | 0 |
| GRIN: GRadient-INformed MoE | Sep 18, 2024 | HellaSwag, HumanEval | Unverified | 0 |