| Title | Date | Tasks | Code Status | Code Links |
| --- | --- | --- | --- | --- |
| MoEC: Mixture of Experts Implicit Neural Compression | Dec 3, 2023 | Data Compression, Mixture-of-Experts | Unverified | 0 |
| Language-driven All-in-one Adverse Weather Removal | Dec 3, 2023 | All, Diversity | Unverified | 0 |
| Omni-SMoLA: Boosting Generalist Multimodal Models with Soft Mixture of Low-rank Experts | Dec 1, 2023 | Chart Question Answering, Document AI | Unverified | 0 |
| HOMOE: A Memory-Based and Composition-Aware Framework for Zero-Shot Learning with Hopfield Network and Soft Mixture of Experts | Nov 23, 2023 | Compositional Zero-Shot Learning, Mixture-of-Experts | Unverified | 0 |
| Efficient Model Agnostic Approach for Implicit Neural Representation Based Arbitrary-Scale Image Super-Resolution | Nov 20, 2023 | Computational Efficiency, Decoder | Unverified | 0 |
| Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts | Nov 19, 2023 | Diversity, Mixture-of-Experts | Code Available | 1 |
| Memory Augmented Language Models through Mixture of Word Experts | Nov 15, 2023 | Mixture-of-Experts | Unverified | 0 |
| Intentional Biases in LLM Responses | Nov 11, 2023 | Language Modelling | Unverified | 0 |
| DAMEX: Dataset-aware Mixture-of-Experts for visual understanding of mixture-of-datasets | Nov 8, 2023 | Mixture-of-Experts, Object Detection | Code Available | 1 |
| CAME: Competitively Learning a Mixture-of-Experts Model for First-stage Retrieval | Nov 6, 2023 | Mixture-of-Experts, Retrieval | Unverified | 0 |