A guide to convolution arithmetic for deep learning
2016-03-23
Vincent Dumoulin, Francesco Visin
Official code: github.com/vdumoulin/conv_arithmetic
Abstract
We introduce a guide to help deep learning practitioners understand and manipulate convolutional neural network architectures. The guide clarifies the relationship between various properties (input shape, kernel shape, zero padding, strides and output shape) of convolutional, pooling and transposed convolutional layers, as well as the relationship between convolutional and transposed convolutional layers. Relationships are derived for various cases, and are illustrated in order to make them intuitive.
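The core relationships the guide derives can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's code: the function names and parameter letters (`i` for input size, `k` for kernel size, `s` for stride, `p` for zero padding) follow the paper's notation, and the formulas shown are the standard output-shape relations for convolution and transposed convolution along one spatial axis.

```python
import math

def conv_output_size(i: int, k: int, s: int = 1, p: int = 0) -> int:
    """Output size of a convolution along one axis:
    o = floor((i + 2p - k) / s) + 1."""
    return math.floor((i + 2 * p - k) / s) + 1

def transposed_conv_output_size(i: int, k: int, s: int = 1, p: int = 0) -> int:
    """Output size of a transposed convolution along one axis
    (no output padding): o = s * (i - 1) + k - 2p."""
    return s * (i - 1) + k - 2 * p

# A 5x5 input, 3x3 kernel, stride 2, padding 1 gives a 3x3 output;
# the transposed convolution with the same settings maps 3 back to 5.
print(conv_output_size(5, 3, s=2, p=1))             # → 3
print(transposed_conv_output_size(3, 3, s=2, p=1))  # → 5
```

Note the symmetry: with matching hyperparameters, the transposed convolution recovers the spatial shape (though not the values) that the forward convolution consumed, which is the relationship between the two layer types that the guide makes precise.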