
Self-supervised Learning of Dense Hierarchical Representations for Medical Image Segmentation

2024-01-12 · Code Available

Eytan Kats, Jochen G. Hirsch, Mattias P. Heinrich

Abstract

This paper presents a self-supervised framework for learning voxel-wise coarse-to-fine representations tailored for dense downstream tasks. Our approach stems from the observation that existing methods for hierarchical representation learning tend to prioritize global features over local features due to inherent architectural bias. To address this challenge, we devise a training strategy that balances the contributions of features from multiple scales, ensuring that the learned representations capture both coarse and fine-grained details. Our strategy incorporates three improvements: (1) local data augmentations, (2) a hierarchically balanced architecture, and (3) a hybrid contrastive-restorative loss function. We evaluate our method on CT and MRI data and demonstrate that our new approach is particularly beneficial for fine-tuning with limited annotated data and consistently outperforms the baseline counterpart in linear evaluation settings.
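The abstract's third component, a hybrid contrastive-restorative loss, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' implementation: it combines an InfoNCE-style contrastive term over paired voxel embeddings from two augmented views with a voxel-wise mean-squared restoration term, mixed by a hypothetical weight `lam`. All function names, shapes, and the weighting scheme are illustrative.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Contrastive (InfoNCE-style) term over paired voxel embeddings.

    z_a, z_b: (N, D) L2-normalized embeddings of corresponding voxels
    under two augmented views; row i of z_a matches row i of z_b.
    (Illustrative, not the paper's exact formulation.)
    """
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives lie on the diagonal

def hybrid_loss(z_a, z_b, recon, target, lam=0.5):
    """Hybrid contrastive-restorative objective (assumed weighted sum):
    contrastive alignment of voxel features plus voxel-wise MSE restoration
    of the masked/corrupted input. `lam` is a hypothetical mixing weight.
    """
    restorative = np.mean((recon - target) ** 2)
    return info_nce(z_a, z_b) + lam * restorative
```

In a multi-scale setting, a term like this would be computed per decoder level and the per-scale losses balanced so that fine-grained (local) features contribute as much to the objective as coarse (global) ones.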
