3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation
Özgün Çiçek, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox, Olaf Ronneberger
Code Implementations
- github.com/black0017/MedicalZooPytorch (pytorch) ★ 1,910
- github.com/aschethor/Teaching_Incompressible_Fluid_Dynamics_to_3D_CNNs (pytorch) ★ 59
- github.com/chenyizi086/wu.2023.sigspatial (pytorch) ★ 15
- github.com/Shrajan/IB_U_Nets (pytorch) ★ 6
- github.com/Shrajan/AAAI-2022 (pytorch) ★ 5
- github.com/lyqcom/3d-unet (mindspore) ★ 1
- github.com/gvtulder/elasticdeform (tf) ★ 0
- github.com/jiajun169/mindspore_models/tree/main/Unet3d (mindspore) ★ 0
- github.com/kilgore92/PyTorch-UNet (pytorch) ★ 0
- github.com/qsyao/cuda_spatial_deform (no framework listed) ★ 0
Abstract
This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from Ronneberger et al. by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.
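The abstract highlights on-the-fly elastic deformations as the key data-augmentation step during training. A minimal NumPy/SciPy sketch of this idea (not the authors' implementation — the function name, `sigma`, and `alpha` parameters are illustrative assumptions) draws a random displacement field per axis, smooths it with a Gaussian, and resamples the volume at the displaced coordinates:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(volume, sigma=4.0, alpha=8.0, rng=None):
    """Apply a random smooth elastic deformation to a 3D volume.

    Illustrative sketch: a uniform random displacement field is drawn
    per spatial axis, smoothed with a Gaussian (sigma controls how
    smooth the deformation is) and scaled by alpha, then the volume is
    linearly resampled at the displaced coordinates.
    """
    rng = np.random.default_rng(rng)
    shape = volume.shape
    # One smoothed random displacement field per spatial axis.
    displacements = [
        gaussian_filter(rng.uniform(-1.0, 1.0, shape), sigma) * alpha
        for _ in shape
    ]
    grid = np.meshgrid(*[np.arange(n) for n in shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, displacements)]
    # Linear interpolation (order=1); reflect at the volume border.
    return map_coordinates(volume, coords, order=1, mode="reflect")

# Example: deform a 32^3 volume in place of a fixed augmentation set.
deformed = elastic_deform(np.random.default_rng(0).random((32, 32, 32)), rng=0)
```

Because the deformation is sampled fresh for every training iteration, the network effectively never sees the same warped volume twice, which is what makes training from sparse annotation data-efficient.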
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ScanNetV2 | UNet-Backbone | mAP @ 50 | 31.9 | — | Unverified |
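The architectural change the abstract describes — replacing every 2D operation in the original u-net with its 3D counterpart — can be sketched as a building block. This is an illustrative PyTorch sketch, not the authors' code: the class name `DoubleConv3d` is an assumption, and it uses padded convolutions (common in reimplementations), whereas the paper's batch normalization before each ReLU is kept:

```python
import torch
import torch.nn as nn

class DoubleConv3d(nn.Module):
    """Two 3x3x3 conv + batch-norm + ReLU stages: the 3D counterpart
    of the original u-net's repeated 3x3 2D convolutions.

    Note: padding=1 keeps the spatial size (a common reimplementation
    choice); the paper itself uses unpadded convolutions.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Downsampling likewise swaps MaxPool2d for MaxPool3d; the decoder
# would swap ConvTranspose2d for ConvTranspose3d in the same way.
encoder_stage = nn.Sequential(DoubleConv3d(1, 32), nn.MaxPool3d(2))
```

Stacking such stages with skip connections between encoder and decoder, exactly as in the 2D u-net, yields the full 3D architecture.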