TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation
Vladimir Iglovikov, Alexey Shvets
Code
- github.com/ternaus/TernausNet (official, in paper; PyTorch, ★ 0)
- github.com/tarolangner/ukb_segmentation (PyTorch, ★ 18)
- github.com/intsco/am-segmentation (PyTorch, ★ 4)
- github.com/od-crypto/aerial (PyTorch, ★ 0)
- github.com/yassineAlouini/airbus_ship_detection (★ 0)
- github.com/yxinjiang/Unet-for-foreground-segmentation (PyTorch, ★ 0)
- github.com/trupewate/lung_segmentation_tutorial (PyTorch, ★ 0)
- github.com/IzPerfect/CT_Image_Segmentation (★ 0)
- github.com/ternaus/TernausNetV2 (PyTorch, ★ 0)
- github.com/MatusChladek/Semantic-Tissue-Segmentation (PyTorch, ★ 0)
Abstract
Pixel-wise image segmentation is a demanding task in computer vision. Classical U-Net architectures, composed of an encoder and a decoder, are very popular for segmentation of medical images, satellite images, etc. Typically, a neural network initialized with weights from a network pre-trained on a large dataset such as ImageNet performs better than one trained from scratch on a small dataset. In some practical applications, particularly in medicine and traffic safety, the accuracy of the models is of utmost importance. In this paper, we demonstrate how a U-Net-type architecture can be improved by using a pre-trained encoder. We compare three weight initialization schemes: LeCun uniform, an encoder with weights from VGG11, and the full network trained on the Carvana dataset. This architecture was part of the winning solution (1st out of 735 teams) in the Kaggle Carvana Image Masking Challenge. Our code and the corresponding pre-trained weights are publicly available at https://github.com/ternaus/TernausNet.