
DeMansia: Mamba Never Forgets Any Tokens

2024-08-04

Ricky Fang


Abstract

This paper examines the mathematical foundations of transformer architectures, highlighting their limitations, particularly in handling long sequences. We review the prerequisite models, Mamba, Vision Mamba (ViM), and LV-ViT, that pave the way for our proposed architecture, DeMansia. DeMansia integrates state space models with token labeling techniques to enhance performance in image classification, efficiently addressing the computational challenges posed by traditional transformers. We present DeMansia's architecture, benchmark its performance, and compare it with contemporary models to demonstrate its effectiveness. The implementation of this paper is available on GitHub at https://github.com/catalpaaa/DeMansia.
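The abstract pairs a Mamba-style (state space) backbone with LV-ViT's token labeling objective, which supervises every patch token with its own class target in addition to the image-level label. Below is a minimal sketch of that pairing, assuming a simple stand-in token mixer in place of real Mamba blocks; all names here (TokenMixer, DeMansiaSketch, token_labeling_loss, lam) are hypothetical illustrations, not taken from the paper or its repository.

```python
# Hypothetical sketch of token labeling over a stand-in backbone.
# The real DeMansia uses Mamba (state space) blocks where TokenMixer appears.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenMixer(nn.Module):
    """Stand-in for a Mamba/ViM block; here just a residual MLP over tokens."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (B, N, D)
        return x + self.proj(self.norm(x))

class DeMansiaSketch(nn.Module):
    def __init__(self, dim=192, depth=4, num_classes=1000):
        super().__init__()
        self.patch_embed = nn.Linear(3 * 16 * 16, dim)   # flattened 16x16 patches
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.blocks = nn.Sequential(*[TokenMixer(dim) for _ in range(depth)])
        self.cls_head = nn.Linear(dim, num_classes)      # image-level head
        self.aux_head = nn.Linear(dim, num_classes)      # per-token head

    def forward(self, patches):                  # patches: (B, N, 3*16*16)
        x = self.patch_embed(patches)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = self.blocks(torch.cat([cls, x], dim=1))
        return self.cls_head(x[:, 0]), self.aux_head(x[:, 1:])

def token_labeling_loss(cls_logits, tok_logits, label, token_labels, lam=0.5):
    """LV-ViT-style objective: image-level CE plus dense per-token CE."""
    cls_loss = F.cross_entropy(cls_logits, label)
    tok_loss = F.cross_entropy(tok_logits.flatten(0, 1), token_labels.flatten())
    return cls_loss + lam * tok_loss

# Usage with random data: 2 images, 14x14 = 196 patches each.
model = DeMansiaSketch()
cls_logits, tok_logits = model(torch.randn(2, 196, 3 * 16 * 16))
loss = token_labeling_loss(cls_logits, tok_logits,
                           torch.randint(0, 1000, (2,)),
                           torch.randint(0, 1000, (2, 196)))
```

The auxiliary per-token loss is what distinguishes token labeling from plain classification: every position receives supervision, not only the class token, which is the LV-ViT technique the abstract says DeMansia adopts.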
