Segformer++: Efficient Token-Merging Strategies for High-Resolution Semantic Segmentation

2024-05-23

Daniel Kienzle, Marco Kantonis, Robin Schön, Rainer Lienhart

Abstract

Utilizing transformer architectures for semantic segmentation of high-resolution images is hindered by attention's quadratic computational complexity in the number of tokens. A solution to this challenge is to decrease the number of tokens through token merging, which has yielded remarkable improvements in inference speed, training efficiency, and memory utilization for image classification tasks. In this paper, we explore various token merging strategies within the framework of the Segformer architecture and perform experiments on multiple semantic segmentation and human pose estimation datasets. Notably, without re-training the model, we achieve, for example, an inference acceleration of 61% on the Cityscapes dataset while maintaining mIoU performance. Consequently, this paper facilitates the deployment of transformer-based architectures on resource-constrained devices and in real-time applications.
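To illustrate the core idea, here is a minimal sketch of similarity-based token merging. This is not the authors' exact algorithm (the paper evaluates several strategies); it simply shows the general mechanism in the spirit of ToMe-style merging: repeatedly average the most cosine-similar pair of tokens, so that N tokens shrink to N − r and the attention cost drops from O(N²) to O((N − r)²). All function and variable names are illustrative.

```python
import numpy as np

def merge_tokens(tokens, r):
    """Reduce the token count by r via greedy similarity-based merging.

    Illustrative sketch only: at each step, find the pair of tokens with
    the highest cosine similarity and replace them with their average.
    Real implementations (e.g. bipartite soft matching) avoid this O(N^2)
    pairwise search, but the effect on the token count is the same.
    """
    tokens = [np.asarray(t, dtype=np.float64) for t in tokens]
    for _ in range(r):
        best = None  # (similarity, index_i, index_j)
        for i in range(len(tokens)):
            for j in range(i + 1, len(tokens)):
                a, b = tokens[i], tokens[j]
                sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
                if best is None or sim > best[0]:
                    best = (sim, i, j)
        _, i, j = best
        merged = (tokens[i] + tokens[j]) / 2.0
        # Drop the merged pair, append their average.
        tokens = [t for k, t in enumerate(tokens) if k not in (i, j)]
        tokens.append(merged)
    return np.stack(tokens)

# Example: three tokens, two of which are identical; merging with r=1
# collapses the identical pair, leaving two tokens for attention.
toks = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
reduced = merge_tokens(toks, r=1)
```

In the full pipeline, the merged tokens would pass through the (now cheaper) attention layers and be unmerged or upsampled before the dense segmentation head, which is what lets the speed-up come without re-training.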
