
LViT: Language meets Vision Transformer in Medical Image Segmentation

2022-06-29 · Code Available

Zihan Li, Yunxiang Li, Qingde Li, Puyang Wang, Dazhou Guo, Le Lu, Dakai Jin, You Zhang, Qingqi Hong


Abstract

Deep learning has been widely used in medical image segmentation and other tasks. However, the performance of existing medical image segmentation models is limited by the difficulty of obtaining sufficient high-quality labeled data, owing to the prohibitive cost of data annotation. To alleviate this limitation, we propose LViT (Language meets Vision Transformer), a new text-augmented medical image segmentation model. In LViT, medical text annotations are incorporated to compensate for quality deficiencies in the image data. In addition, the text information guides the generation of higher-quality pseudo labels in semi-supervised learning. We also propose an Exponential Pseudo-label Iteration mechanism (EPI) to help the Pixel-Level Attention Module (PLAM) preserve local image features in the semi-supervised LViT setting. In our model, the LV (Language-Vision) loss is designed to supervise the training of unlabeled images using text information directly. For evaluation, we construct three multimodal medical segmentation datasets (image + text) containing X-ray and CT images. Experimental results show that the proposed LViT achieves superior segmentation performance in both fully-supervised and semi-supervised settings. The code and datasets are available at https://github.com/HUANGLIZI/LViT.
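The EPI mechanism is only named in the abstract. As a rough sketch, assuming it amounts to an exponential-moving-average update of the pseudo labels across training iterations (`epi_update` and `beta` are illustrative names, not taken from the source):

```python
import numpy as np

def epi_update(prev_pseudo: np.ndarray, new_pred: np.ndarray, beta: float = 0.9) -> np.ndarray:
    """One EPI-style step (assumed form): exponentially smooth the running
    pseudo label with the model's latest prediction, so no single noisy
    iteration dominates the label used for unlabeled images."""
    return beta * prev_pseudo + (1.0 - beta) * new_pred

# Toy example: the pseudo label drifts gradually toward the new prediction.
pseudo = np.zeros((2, 2))          # previous pseudo label (probability map)
pred = np.ones((2, 2))             # current model prediction
pseudo = epi_update(pseudo, pred)  # -> all entries 0.1
```

With `beta` close to 1, the pseudo label changes slowly, which matches the stated goal of stabilizing semi-supervised training.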


Benchmark Results

| Dataset | Model      | Metric | Claimed | Verified | Status     |
|---------|------------|--------|---------|----------|------------|
| MoNuSeg | LViT-L     | F1     | 81.01   |          | Unverified |
| MoNuSeg | LViT-LW    | F1     | 80.66   |          | Unverified |
| MoNuSeg | UCTransNet | F1     | 79.87   |          | Unverified |
| MoNuSeg | GTUNet     | F1     | 79.26   |          | Unverified |
| MoNuSeg | UNet++     | F1     | 77.01   |          | Unverified |
