SOTAVerified

Channelized Axial Attention for Semantic Segmentation -- Considering Channel Relation within Spatial Attention for Semantic Segmentation

2021-01-19

Ye Huang, Di Kang, Wenjing Jia, Xiangjian He, Liu Liu

Code Available

Abstract

Spatial and channel attentions, modelling the semantic interdependencies in the spatial and channel dimensions respectively, have recently been widely used for semantic segmentation. However, computing spatial and channel attentions separately sometimes causes errors, especially in difficult cases. In this paper, we propose Channelized Axial Attention (CAA) to seamlessly integrate channel attention and spatial attention into a single operation with negligible computation overhead. Specifically, we break down the dot-product operation of the spatial attention into two parts and insert channel relation in between, allowing for independently optimized channel attention at each spatial location. We further develop grouped vectorization, which allows our model to run with very little memory consumption without slowing down the running speed. Comparative experiments conducted on multiple benchmark datasets, including Cityscapes, PASCAL Context, and COCO-Stuff, demonstrate that our CAA outperforms many state-of-the-art segmentation models (including dual attention) on all tested datasets.
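The idea described in the abstract (splitting the spatial-attention dot product and inserting a channel relation between its two halves, with grouped vectorization to bound memory) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the sigmoid gate here is a hypothetical stand-in for the paper's learned channel-attention module, and `group_size` only illustrates the grouped-vectorization trick.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channelized_attention(q, k, v, group_size=4):
    """q, k, v: (N, C) arrays of N flattened spatial positions x C channels."""
    n, c = v.shape
    # First half of the dot product: spatial attention map (N, N)
    a = softmax(q @ k.T / np.sqrt(c), axis=-1)
    out = np.empty_like(v)
    # Grouped vectorization: process query positions in groups so the
    # pre-sum intermediate is only (group_size, N, C) instead of (N, N, C)
    for s in range(0, n, group_size):
        e = s + group_size
        # Pre-sum products: the attention map applied to V *before*
        # summing over spatial positions i
        t = a[s:e, :, None] * v[None, :, :]          # (g, N, C)
        # Channel relation inserted between the two halves: a simple
        # per-position sigmoid gate over channels (illustrative only)
        gate = 1.0 / (1.0 + np.exp(-t.mean(axis=1, keepdims=True)))
        # Second half of the dot product: sum over spatial positions
        out[s:e] = (t * gate).sum(axis=1)
    return out
```

Because the gate is computed independently for each query position, the grouped loop gives the same result as processing all positions at once; only the peak memory of the pre-sum intermediate changes.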

Benchmark Results

Dataset         | Model                                   | Metric           | Claimed | Verified | Status
Cityscapes test | CAA (ResNet-101)                        | Mean IoU (class) | 82.6    | -        | Unverified
COCO-Stuff test | CAA (EfficientNet-B7)                   | mIoU             | 45.4    | -        | Unverified
COCO-Stuff test | CAA (ResNet-101)                        | mIoU             | 41.2    | -        | Unverified
PASCAL Context  | CAA + Simple decoder (EfficientNet-B7)  | mIoU             | 60.5    | -        | Unverified
PASCAL Context  | CAA (EfficientNet-B7)                   | mIoU             | 60.1    | -        | Unverified
PASCAL Context  | CAA (ResNet-101)                        | mIoU             | 55      | -        | Unverified

Reproductions