| MoIN: Mixture of Introvert Experts to Upcycle an LLM | Oct 13, 2024 | GPU, Language Modeling | Unverified | 0 |
| GETS: Ensemble Temperature Scaling for Calibration in Graph Neural Networks | Oct 12, 2024 | Mixture-of-Experts | Unverified | 0 |
| AT-MoE: Adaptive Task-planning Mixture of Experts via LoRA Approach | Oct 12, 2024 | Mixture-of-Experts, Task Planning | Unverified | 0 |
| Upcycling Large Language Models into Mixture of Experts | Oct 10, 2024 | Mixture-of-Experts, MMLU | Unverified | 0 |
| More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing | Oct 10, 2024 | Image Classification | Code Available | 0 |
| Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training | Oct 10, 2024 | Mixture-of-Experts, Visual Question Answering | Unverified | 0 |
| Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs | Oct 9, 2024 | Common Sense Reasoning, Mixture-of-Experts | Unverified | 0 |
| Toward generalizable learning of all (linear) first-order methods via memory augmented Transformers | Oct 8, 2024 | Mixture-of-Experts | Unverified | 0 |
| Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models | Oct 8, 2024 | Mixture-of-Experts | Unverified | 0 |
| Probing the Robustness of Theory of Mind in Large Language Models | Oct 8, 2024 | Mixture-of-Experts | Unverified | 0 |