
Vision Mamba for Classification of Breast Ultrasound Images

2024-07-04 · Code Available

Ali Nasiri-Sarvi, Mahdi S. Hosseini, Hassan Rivaz


Abstract

Mamba-based models, such as VMamba and Vim, are a recent family of vision encoders that offer promising performance improvements in many computer vision tasks. This paper compares Mamba-based models with traditional Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) on the BUSI breast ultrasound dataset and the Breast Ultrasound B dataset. Our evaluation, which includes multiple experimental runs and statistical significance analysis, demonstrates that some of the Mamba-based architectures often outperform CNN and ViT models with statistically significant results. For example, on the B dataset, the best Mamba-based models achieve a 1.98% higher average AUC and a 5.0% higher average accuracy than the best non-Mamba-based model in this study. These Mamba-based models effectively capture long-range dependencies while maintaining some inductive biases, making them suitable for applications with limited data. The code is available at https://github.com/anasiri/BU-Mamba
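The abstract mentions multiple experimental runs and statistical significance analysis. One common way to test whether a metric difference across repeated runs is significant is a two-sided permutation test on the mean difference. The sketch below is a minimal, stdlib-only illustration of that idea; the AUC values and the choice of test are hypothetical assumptions for illustration, not taken from the paper.

```python
import random
import statistics


def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in mean scores.

    Repeatedly shuffles the pooled scores into two groups of the
    original sizes and counts how often the shuffled mean difference
    is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_diff = abs(statistics.mean(pooled[:len(a)]) -
                        statistics.mean(pooled[len(a):]))
        if perm_diff >= observed:
            count += 1
    return count / n_perm


# Hypothetical per-run AUC scores for two models (NOT from the paper).
mamba_auc = [0.912, 0.905, 0.918, 0.909, 0.915]
vit_auc = [0.881, 0.874, 0.889, 0.870, 0.883]

p = permutation_test(mamba_auc, vit_auc)
print(f"p-value: {p:.4f}")
```

With two clearly separated groups like these, the permutation test yields a small p-value, supporting a claim of statistical significance; in practice a paired test may be preferable when both models are evaluated on the same data splits.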
