SOTAVerified

Pre-Trained LLM is a Semantic-Aware and Generalizable Segmentation Booster

2025-06-22

Fenghe Tang, Wenxin Ma, ZhiYang He, Xiaodong Tao, Zihang Jiang, S. Kevin Zhou


Abstract

With the advancement of Large Language Models (LLMs) in natural language processing, this paper presents an intriguing finding: a frozen pre-trained LLM layer can process visual tokens for medical image segmentation tasks. Specifically, we propose a simple hybrid structure that integrates a pre-trained, frozen LLM layer within a CNN encoder-decoder segmentation framework (LLM4Seg). Surprisingly, this design improves segmentation performance with a minimal increase in trainable parameters across various modalities, including ultrasound, dermoscopy, polypscopy, and CT scans. Our in-depth analysis reveals the potential of transferring the LLM's semantic awareness to enhance segmentation, offering both improved global understanding and better local modeling. The improvement proves robust across different LLMs, as validated with LLaMA and DeepSeek.
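To make the hybrid design concrete, here is a minimal numpy sketch of the data flow the abstract describes: CNN features are flattened into visual tokens, passed through a frozen transformer-style layer, and reshaped back for the decoder. Every name, shape, and layer here (`encoder`, `frozen_llm_layer`, `decoder`, `llm4seg_forward`, the hidden size) is an illustrative assumption, not the authors' implementation; a real LLM layer would contain attention and an MLP, and only the CNN parts would receive gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 64  # stand-in for the LLM layer's hidden dimension (assumption)

# "Frozen" LLM layer stand-in: fixed weights that would never be updated
# during segmentation training.
W_frozen = rng.standard_normal((HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)

def encoder(image):
    """Toy CNN encoder: 2x2 average pooling, then lift each spatial
    position to a HIDDEN-dim feature vector (illustrative only)."""
    h, w = image.shape
    pooled = image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.stack([pooled] * HIDDEN, axis=-1)   # (h/2, w/2, HIDDEN)

def frozen_llm_layer(tokens):
    """Visual tokens pass through the frozen layer, simplified here to a
    fixed linear map with a residual connection."""
    return tokens + np.maximum(tokens @ W_frozen, 0.0)

def decoder(feat):
    """Toy decoder: collapse channels to per-pixel logits, upsample 2x."""
    logits = feat.mean(axis=-1)               # (h/2, w/2)
    return np.kron(logits, np.ones((2, 2)))   # (h, w)

def llm4seg_forward(image):
    feat = encoder(image)                     # CNN features
    h, w, c = feat.shape
    tokens = feat.reshape(h * w, c)           # flatten to visual tokens
    tokens = frozen_llm_layer(tokens)         # frozen pre-trained layer
    feat = tokens.reshape(h, w, c)            # back to a feature map
    return decoder(feat)                      # segmentation logits

image = rng.standard_normal((8, 8))
mask_logits = llm4seg_forward(image)
print(mask_logits.shape)  # (8, 8): logits at the input resolution
```

The key property this sketch illustrates is that the inserted layer adds no trainable parameters: since `W_frozen` is fixed, only the encoder and decoder would be optimized, matching the paper's claim of a minimal increase in trainable parameters.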
