| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Stealing User Prompts from Mixture of Experts | Oct 30, 2024 | Mixture-of-Experts | Unverified |
| Efficient and Interpretable Grammatical Error Correction with Mixture of Experts | Oct 30, 2024 | Grammatical Error Correction, Mixture-of-Experts | Code Available |
| MALoRA: Mixture of Asymmetric Low-Rank Adaptation for Enhanced Multi-Task Learning | Oct 30, 2024 | Computational Efficiency, Mixture-of-Experts | Unverified |
| ProMoE: Fast MoE-based LLM Serving using Proactive Caching | Oct 29, 2024 | GPU, Mixture-of-Experts | Unverified |
| Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging | Oct 29, 2024 | Mixture-of-Experts, Multi-Task Learning | Unverified |
| Neural Experts: Mixture of Experts for Implicit Neural Representations | Oct 29, 2024 | Image Reconstruction, Mixture-of-Experts | Unverified |
| Efficient Mixture-of-Expert for Video-based Driver State and Physiological Multi-task Estimation in Conditional Autonomous Driving | Oct 28, 2024 | Autonomous Driving, Mixture-of-Experts | Unverified |
| FinTeamExperts: Role Specialized MOEs For Financial Analysis | Oct 28, 2024 | Financial Analysis, Mixture-of-Experts | Unverified |
| Hierarchical Mixture of Experts: Generalizable Learning for High-Level Synthesis | Oct 25, 2024 | High-Level Synthesis, Mixture-of-Experts | Code Available |
| MoMQ: Mixture-of-Experts Enhances Multi-Dialect Query Generation across Relational and Non-Relational Databases | Oct 24, 2024 | Mixture-of-Experts | Unverified |
| Mixture of Parrots: Experts improve memorization more than reasoning | Oct 24, 2024 | Math, Memorization | Unverified |
| ExpertFlow: Optimized Expert Activation and Token Allocation for Efficient Mixture-of-Experts Inference | Oct 23, 2024 | Computational Efficiency, CPU | Unverified |
| Robust and Explainable Depression Identification from Speech Using Vowel-Based Ensemble Learning Approaches | Oct 23, 2024 | Ensemble Learning, Mixture-of-Experts | Unverified |
| MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning | Oct 23, 2024 | Math, Mixture-of-Experts | Unverified |
| Faster Language Models with Better Multi-Token Prediction Using Tensor Decomposition | Oct 23, 2024 | Code Generation, Mixture-of-Experts | Unverified |
| Optimizing Mixture-of-Experts Inference Time Combining Model Deployment and Communication Scheduling | Oct 22, 2024 | GPU | Unverified |
| ViMoE: An Empirical Study of Designing Vision Mixture-of-Experts | Oct 21, 2024 | Image Classification | Unverified |
| CartesianMoE: Boosting Knowledge Sharing among Experts via Cartesian Product Routing in Mixture-of-Experts | Oct 21, 2024 | Mixture-of-Experts | Code Available |
| MENTOR: Mixture-of-Experts Network with Task-Oriented Perturbation for Visual Reinforcement Learning | Oct 19, 2024 | Deep Reinforcement Learning, Mixture-of-Experts | Unverified |
| Enhancing Generalization in Sparse Mixture of Experts Models: The Case for Increased Expert Activation in Compositional Tasks | Oct 17, 2024 | Mixture-of-Experts | Unverified |
| Understanding Expert Structures on Minimax Parameter Estimation in Contaminated Mixture of Experts | Oct 16, 2024 | Mixture-of-Experts, Parameter Estimation | Unverified |
| On the Risk of Evidence Pollution for Malicious Social Text Detection in the Era of LLMs | Oct 16, 2024 | Mixture-of-Experts, Text Detection | Unverified |
| EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference | Oct 16, 2024 | Computational Efficiency, Large Language Model | Unverified |
| MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router | Oct 15, 2024 | Knowledge Distillation, Language Modeling | Unverified |
| Transformer Layer Injection: A Novel Approach for Efficient Upscaling of Large Language Models | Oct 15, 2024 | Mixture-of-Experts | Unverified |
| Quadratic Gating Functions in Mixture of Experts: A Statistical Insight | Oct 15, 2024 | Computational Efficiency, Mixture-of-Experts | Unverified |
| Scalable Multi-Domain Adaptation of Language Models using Modular Experts | Oct 14, 2024 | Domain Adaptation, General Knowledge | Unverified |
| Learning to Ground VLMs without Forgetting | Oct 14, 2024 | Decoder, Language Modelling | Unverified |
| Ada-K Routing: Boosting the Efficiency of MoE-based LLMs | Oct 14, 2024 | Computational Efficiency, Mixture-of-Experts | Unverified |
| ContextWIN: Whittle Index Based Mixture-of-Experts Neural Model For Restless Bandits Via Deep RL | Oct 13, 2024 | Decision Making, Mixture-of-Experts | Unverified |
| MoIN: Mixture of Introvert Experts to Upcycle an LLM | Oct 13, 2024 | GPU, Language Modeling | Unverified |
| GETS: Ensemble Temperature Scaling for Calibration in Graph Neural Networks | Oct 12, 2024 | Mixture-of-Experts | Unverified |
| AT-MoE: Adaptive Task-planning Mixture of Experts via LoRA Approach | Oct 12, 2024 | Mixture-of-Experts, Task Planning | Unverified |
| Upcycling Large Language Models into Mixture of Experts | Oct 10, 2024 | Mixture-of-Experts, MMLU | Unverified |
| More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing | Oct 10, 2024 | Image Classification | Code Available |
| Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training | Oct 10, 2024 | Mixture-of-Experts, Visual Question Answering | Unverified |
| Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs | Oct 9, 2024 | Common Sense Reasoning, Mixture-of-Experts | Unverified |
| Toward generalizable learning of all (linear) first-order methods via memory augmented Transformers | Oct 8, 2024 | Mixture-of-Experts | Unverified |
| Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models | Oct 8, 2024 | Mixture-of-Experts | Unverified |
| Probing the Robustness of Theory of Mind in Large Language Models | Oct 8, 2024 | Mixture-of-Experts | Unverified |
| Multimodal Fusion Strategies for Mapping Biophysical Landscape Features | Oct 7, 2024 | Mixture-of-Experts | Code Available |
| Realizing Video Summarization from the Path of Language-based Semantic Understanding | Oct 6, 2024 | Mixture-of-Experts, Video Generation | Unverified |
| Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding with LLMs | Oct 4, 2024 | Contrastive Learning, Denoising | Unverified |
| A Dynamic Approach to Stock Price Prediction: Comparing RNN and Mixture of Experts Models Across Different Volatility Profiles | Oct 4, 2024 | Mixture-of-Experts, Stock Price Prediction | Unverified |
| On Expert Estimation in Hierarchical Mixture of Experts: Beyond Softmax Gating Functions | Oct 3, 2024 | Image Classification | Unverified |
| Neutral residues: revisiting adapters for model extension | Oct 3, 2024 | Domain Adaptation, Language Modelling | Unverified |
| Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping | Oct 3, 2024 | GPU, Mixture-of-Experts | Unverified |
| Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts | Oct 3, 2024 | Mixture-of-Experts, Parameter Estimation | Code Available |
| MLP-KAN: Unifying Deep Representation and Function Learning | Oct 3, 2024 | Kolmogorov-Arnold Networks, Mixture-of-Experts | Code Available |
| The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs | Oct 2, 2024 | Benchmarking, Hallucination | Unverified |