
Visual Alignment of Medical Vision-Language Models for Grounded Radiology Report Generation

2026-03-13

Sarosij Bose, Ravi K. Rajendran, Biplob Debnath, Konstantinos Karydis, Amit K. Roy-Chowdhury, Srimat Chakradhar


Abstract

Radiology Report Generation (RRG) is a critical step toward automating healthcare workflows, facilitating accurate patient assessments, and reducing the workload of medical professionals. Despite recent progress in Large Medical Vision-Language Models (Med-VLMs), generating radiology reports that are both visually grounded and clinically accurate remains a significant challenge. Existing approaches often rely on large labeled corpora for pre-training, costly task-specific preference data, or retrieval-based knowledge. However, these strategies do not adequately mitigate hallucinations arising from poor cross-modal alignment between visual and linguistic representations. To address these limitations, we propose VALOR: Visual Alignment of Medical Vision-Language Models for GrOunded Radiology Report Generation, which tackles visual hallucinations through two complementary reasoning stages: (1) Clinically Informed Textual Reasoning guides the model with verifiable natural-language and clinical-metric rewards to produce semantically complete reports with precise medical terminology. (2) Self-Supervised Visual Reasoning leverages a frozen domain expert to compute image-text similarity scores between the input chest X-ray and generated candidates, converting these into rank-normalized advantages that explicitly steer the policy toward visually grounded outputs, requiring no preference pairs, retrieval databases, or additional annotations. Extensive experiments on multiple benchmarks demonstrate that VALOR substantially improves both generation quality and clinical accuracy while keeping reports visually grounded, achieving significant performance gains over state-of-the-art medical report generation methods.
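The rank-normalization step described in stage (2) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name and the linear mapping of ranks to the interval [-1, 1] are assumptions, and tie-breaking among equal similarity scores is left to the sort order.

```python
import numpy as np

def rank_normalized_advantages(similarity_scores):
    """Convert per-candidate image-text similarity scores (e.g. from a
    frozen domain-expert scorer) into rank-normalized advantages.

    Hypothetical sketch: ranks are mapped linearly to [-1, 1], so the
    most visually grounded candidate gets +1, the least gets -1, and
    the mean advantage over candidates is zero.
    """
    scores = np.asarray(similarity_scores, dtype=float)
    k = len(scores)
    if k == 1:
        # A single candidate carries no ranking signal.
        return np.zeros(1)
    # Double argsort yields each candidate's rank (0 = lowest score).
    ranks = scores.argsort().argsort()
    return 2.0 * ranks / (k - 1) - 1.0
```

Because the advantages depend only on the ordering of candidates, the scheme is invariant to the scale of the expert's similarity scores, which avoids reward-magnitude tuning across scorers.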