
How Lightweight Can A Vision Transformer Be

2024-07-25

Jen Hong Tan


Abstract

In this paper, we explore a strategy that uses Mixture-of-Experts (MoE) to streamline, rather than augment, vision transformers. Each expert in an MoE layer is a SwiGLU feedforward network, in which the V and W2 projections are shared across the layer. No complex attention or convolutional mechanisms are employed. Depth-wise scaling progressively reduces the size of the hidden layer, and the number of experts is increased in stages. Grouped query attention is used. We studied the proposed approach with and without pre-training on small datasets and investigated whether transfer learning works at this scale. We found that the architecture is competitive even at a size of 0.67M parameters.
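The shared-weight expert design described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the top-k softmax router, the single-token input, and all dimensions are assumptions, and the abstract only specifies that each expert is a SwiGLU feedforward network whose V and W2 projections are shared across the layer.

```python
import numpy as np

def silu(x):
    # SiLU activation used in SwiGLU: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

class SharedSwiGLUMoE:
    """Hypothetical sketch of an MoE layer whose experts are SwiGLU FFNs
    with V and W2 shared across the layer; only the gating projection W1
    is per-expert. Router and dimensions are illustrative assumptions."""

    def __init__(self, d_model, d_hidden, n_experts, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.02
        # Per-expert gate projection (the only expert-specific weights here)
        self.W1 = rng.normal(0.0, s, (n_experts, d_hidden, d_model))
        # Shared across all experts in the layer
        self.V = rng.normal(0.0, s, (d_hidden, d_model))
        self.W2 = rng.normal(0.0, s, (d_model, d_hidden))
        # Simple linear router (an assumption; the paper may differ)
        self.Wg = rng.normal(0.0, s, (n_experts, d_model))
        self.top_k = top_k

    def __call__(self, x):
        # x: (d_model,) — a single token, for simplicity
        logits = self.Wg @ x
        top = np.argsort(logits)[-self.top_k :]        # top-k experts
        w = np.exp(logits[top] - logits[top].max())    # softmax over top-k
        w /= w.sum()
        vx = self.V @ x                                # shared V: computed once
        out = np.zeros_like(x)
        for wi, i in zip(w, top):
            # SwiGLU expert: W2 @ (silu(W1_i x) * (V x)), with W2 and V shared
            out += wi * (self.W2 @ (silu(self.W1[i] @ x) * vx))
        return out
```

Because V and W2 are shared, adding an expert only adds one W1 matrix (plus a router row), which is how this layout keeps the parameter count low as the number of experts grows.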
