SOTAVerified

Learning to Upscale 3D Segmentations in Neuroimaging

2025-11-22

Xiaoling Hu, Peirong Liu, Dina Zemlyanker, Jonathan Williams Ramirez, Oula Puonti, Juan Eugenio Iglesias


Abstract

Obtaining high-resolution (HR) segmentations from coarse annotations is a pervasive challenge in computer vision. Applications include inferring pixel-level segmentations from token-level labels in vision transformers, upsampling coarse masks to full resolution, and transferring annotations from legacy low-resolution (LR) datasets to modern HR imagery. These challenges are especially acute in 3D neuroimaging, where manual labeling is costly and resolutions continually increase. We propose a scalable framework that generalizes across resolutions and domains by regressing signed distance maps, enabling smooth, boundary-aware supervision. Crucially, our model predicts one class at a time, which substantially reduces memory usage during training and inference (critical for large 3D volumes) and naturally supports generalization to unseen classes. Generalization is further improved through training on synthetic, domain-randomized data. We validate our approach on ultra-high-resolution (UHR) human brain MRI (~100 μm), where most existing methods operate at 1 mm resolution. Our framework effectively upsamples such standard-resolution segmentations to UHR detail. Results on synthetic and real data demonstrate superior scalability and generalization compared to conventional segmentation methods. Code is available at: https://github.com/HuXiaoling/Learn2Upscale.
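The abstract describes supervision via regression of signed distance maps rather than hard labels. As a minimal sketch of that idea (not the authors' implementation), the snippet below converts a binary 3D mask into a signed Euclidean distance map with SciPy, using the common convention of positive values inside the structure and negative outside; the sign convention and distance metric in the paper may differ.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Signed Euclidean distance map for a binary 3D mask.

    Convention (an assumption, not taken from the paper):
    positive inside the object, negative outside.
    """
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)    # > 0 for voxels inside the object
    outside = distance_transform_edt(~mask)  # > 0 for voxels outside the object
    return inside - outside

# Toy example: a cube in a small volume, standing in for one
# anatomical class (the paper predicts one class at a time).
vol = np.zeros((16, 16, 16), dtype=bool)
vol[4:12, 4:12, 4:12] = True
sdm = signed_distance_map(vol)
```

Regressing such smooth, boundary-aware targets (one class per pass) is what lets the model keep memory low on large 3D volumes and recover sub-voxel boundary detail when upscaling.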
