| GETS: Ensemble Temperature Scaling for Calibration in Graph Neural Networks | Oct 12, 2024 | Mixture-of-Experts | Unverified | 0 |
| Retraining-Free Merging of Sparse MoE via Hierarchical Clustering | Oct 11, 2024 | Clustering, Language Modeling | Code Available | 1 |
| Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts | Oct 10, 2024 | Mixture-of-Experts | Code Available | 2 |
| More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing | Oct 10, 2024 | Image Classification | Code Available | 0 |
| Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training | Oct 10, 2024 | Mixture-of-Experts, Visual Question Answering | Unverified | 0 |
| Upcycling Large Language Models into Mixture of Experts | Oct 10, 2024 | Mixture-of-Experts, MMLU | Unverified | 0 |
| Efficient Dictionary Learning with Switch Sparse Autoencoders | Oct 10, 2024 | Dictionary Learning, Mixture-of-Experts | Code Available | 1 |
| MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts | Oct 9, 2024 | GPU, Mixture-of-Experts | Code Available | 4 |
| Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs | Oct 9, 2024 | Common Sense Reasoning, Mixture-of-Experts | Unverified | 0 |
| Toward generalizable learning of all (linear) first-order methods via memory augmented Transformers | Oct 8, 2024 | Mixture-of-Experts | Unverified | 0 |
| MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More | Oct 8, 2024 | Mixture-of-Experts, Quantization | Code Available | 2 |
| Aria: An Open Multimodal Native Mixture-of-Experts Model | Oct 8, 2024 | Instruction Following, Mixture-of-Experts | Code Available | 5 |
| Probing the Robustness of Theory of Mind in Large Language Models | Oct 8, 2024 | Mixture-of-Experts | Unverified | 0 |
| Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models | Oct 8, 2024 | Mixture-of-Experts | Unverified | 0 |
| Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild | Oct 7, 2024 | Benchmarking, Mixture-of-Experts | Code Available | 1 |
| Multimodal Fusion Strategies for Mapping Biophysical Landscape Features | Oct 7, 2024 | Mixture-of-Experts | Code Available | 0 |
| Realizing Video Summarization from the Path of Language-based Semantic Understanding | Oct 6, 2024 | Mixture-of-Experts, Video Generation | Unverified | 0 |
| A Dynamic Approach to Stock Price Prediction: Comparing RNN and Mixture of Experts Models Across Different Volatility Profiles | Oct 4, 2024 | Mixture-of-Experts, Stock Price Prediction | Unverified | 0 |
| Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding with LLMs | Oct 4, 2024 | Contrastive Learning, Denoising | Unverified | 0 |
| On Expert Estimation in Hierarchical Mixture of Experts: Beyond Softmax Gating Functions | Oct 3, 2024 | Image Classification | Unverified | 0 |
| MLP-KAN: Unifying Deep Representation and Function Learning | Oct 3, 2024 | Kolmogorov-Arnold Networks, Mixture-of-Experts | Code Available | 0 |
| Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices | Oct 3, 2024 | Mixture-of-Experts | Code Available | 1 |
| Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping | Oct 3, 2024 | GPU, Mixture-of-Experts | Unverified | 0 |
| Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts | Oct 3, 2024 | Mixture-of-Experts, Parameter Estimation | Code Available | 0 |
| Neutral residues: revisiting adapters for model extension | Oct 3, 2024 | Domain Adaptation, Language Modelling | Unverified | 0 |