
Large Language Models Can Achieve Explainable and Training-Free One-shot HRRP ATR

2025-06-03

Lingfeng Chen, Panhe Hu, Zhiliang Pan, Qi Liu, Zhen Liu


Abstract

This letter introduces a pioneering, training-free, and explainable framework for High-Resolution Range Profile (HRRP) automatic target recognition (ATR) that leverages large-scale pre-trained Large Language Models (LLMs). Diverging from conventional methods that require extensive task-specific training or fine-tuning, our approach converts one-dimensional HRRP signals into textual scattering-center representations. Prompts are designed to align the LLM's semantic space for ATR via few-shot in-context learning, effectively leveraging its vast pre-existing knowledge without any parameter updates. We make our code publicly available to foster research into LLMs for HRRP ATR.
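The abstract outlines a two-step pipeline: extract scattering centers from a 1-D HRRP profile, render them as text, and assemble a few-shot in-context-learning prompt. The paper's exact encoding is not given here, so the sketch below is a minimal illustration under stated assumptions: scattering centers are approximated as the strongest local maxima of the magnitude profile, and the prompt format (`hrrp_to_text`, `build_icl_prompt`, the "range bin / amplitude" phrasing) is hypothetical.

```python
import numpy as np

def hrrp_to_text(profile, top_k=5):
    """Convert a 1-D HRRP magnitude profile into a textual
    scattering-center description (illustrative encoding; the
    paper's actual representation may differ)."""
    profile = np.asarray(profile, dtype=float)
    # Treat local maxima as candidate scattering centers.
    idx = np.where(
        (profile[1:-1] > profile[:-2]) & (profile[1:-1] > profile[2:])
    )[0] + 1
    # Keep the top_k strongest peaks, then order them by range bin.
    strongest = idx[np.argsort(profile[idx])[::-1][:top_k]]
    centers = sorted((int(i), float(profile[i])) for i in strongest)
    return "; ".join(f"range bin {i}, amplitude {a:.2f}" for i, a in centers)

def build_icl_prompt(examples, query_text):
    """Assemble a few-shot in-context-learning prompt: labeled
    (one-shot) examples followed by the unlabeled query."""
    lines = ["Each line describes a radar target by its HRRP scattering centers."]
    for text, label in examples:
        lines.append(f"Scattering centers: {text} -> Target: {label}")
    lines.append(f"Scattering centers: {query_text} -> Target:")
    return "\n".join(lines)
```

In this sketch the prompt would be sent to a frozen LLM, whose completion after the final `Target:` serves as the predicted class, so no parameters are ever updated.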
