DeMansia: Mamba Never Forgets Any Tokens
Ricky Fang
Abstract
This paper examines the mathematical foundations of the transformer architecture, highlighting its limitations in handling long sequences. We review the prerequisite models Mamba, Vision Mamba (ViM), and LV-ViT, which pave the way for our proposed architecture, DeMansia. DeMansia integrates state space models with token labeling to improve image classification performance while avoiding the computational cost of traditional transformers. We describe the architecture, benchmark it, and compare it against contemporary models to demonstrate DeMansia's effectiveness. The implementation of this paper is available on GitHub at https://github.com/catalpaaa/DeMansia.
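To ground the discussion, here is a minimal, illustrative sketch of the discrete linear state space recurrence that Mamba-style models build on: a hidden state is updated as h_t = A h_{t-1} + B x_t and read out as y_t = C h_t, giving cost linear in sequence length. The dimensions, matrices, and the `ssm_scan` helper below are hypothetical toy choices for exposition, not DeMansia's actual parameterization (which uses input-dependent, selective parameters).

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Sequentially scan a 1-D input sequence x through a linear SSM.

    Implements h_t = A h_{t-1} + B * x_t, y_t = C . h_t.
    Runs in O(sequence length), unlike quadratic self-attention.
    """
    d_state = A.shape[0]
    h = np.zeros(d_state)      # hidden state summarizes the entire history
    ys = []
    for x_t in x:
        h = A @ h + B * x_t    # state update
        ys.append(C @ h)       # readout
    return np.array(ys)

rng = np.random.default_rng(0)
d_state = 4
A = np.eye(d_state) * 0.9                # stable toy transition matrix
B = rng.standard_normal(d_state)         # toy input projection
C = rng.standard_normal(d_state)         # toy output projection

y = ssm_scan(rng.standard_normal(16), A, B, C)
print(y.shape)  # (16,)
```

The fixed (A, B, C) here correspond to a classical time-invariant SSM; Mamba's contribution is making these parameters functions of the input, which the abstract's cited prerequisites cover in detail.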