ImageNet Classification with Deep Convolutional Neural Networks
Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
Code
- worksheets.codalab.org/worksheets/0xfafccca55b584e6eb1cf71979ad8e778 (official) ★ 0
- github.com/pytorch/vision (pytorch) ★ 17,584
- github.com/open-mmlab/mmpose (pytorch) ★ 7,439
- github.com/PaddlePaddle/PaddleClas (paddle) ★ 5,788
- github.com/Mayurji/Image-Classification-PyTorch (pytorch) ★ 219
- github.com/MindSpore-paper-code-3/code6/tree/main/Alexnet (mindspore) ★ 0
- github.com/2023-MindSpore-1/ms-code-86 (mindspore) ★ 0
- gitlab.com/birder/birder (pytorch) ★ 0
- github.com/code-implementation1/Code2/tree/main/Alexnet (mindspore) ★ 0
- github.com/dansuh17/alexnet-pytorch (pytorch) ★ 0
Abstract
We trained a large, deep convolutional neural network to classify the 1.3 million high-resolution images in the LSVRC-2010 ImageNet training set into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 39.7% and 18.9%, which are considerably better than the previous state-of-the-art results. The neural network, which has 60 million parameters and 500,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and two globally connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of convolutional nets. To reduce overfitting in the globally connected layers, we employed a new regularization method that proved to be very effective.
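The "60 million parameters" figure can be sanity-checked by summing weights and biases layer by layer. The sketch below uses the layer shapes of the widely cited NIPS-2012 AlexNet (grouped convolutions reflecting the two-GPU split); these shapes are an assumption for illustration, since this abstract describes a slightly different variant with two globally connected layers and 500,000 neurons.

```python
# Parameter-count sketch for canonical AlexNet-style layer shapes.
# NOTE: shapes are assumed from the well-known NIPS-2012 variant, not
# taken verbatim from this abstract's earlier architecture.

def conv_params(k, c_in, c_out, groups=1):
    """Weights + biases for a k x k convolution (optionally grouped)."""
    return k * k * (c_in // groups) * c_out + c_out

def fc_params(n_in, n_out):
    """Weights + biases for a fully connected layer."""
    return n_in * n_out + n_out

layers = {
    "conv1": conv_params(11, 3, 96),
    "conv2": conv_params(5, 96, 256, groups=2),   # split across two GPUs
    "conv3": conv_params(3, 256, 384),
    "conv4": conv_params(3, 384, 384, groups=2),
    "conv5": conv_params(3, 384, 256, groups=2),
    "fc6":   fc_params(6 * 6 * 256, 4096),        # 6x6x256 feature map in
    "fc7":   fc_params(4096, 4096),
    "fc8":   fc_params(4096, 1000),               # final 1000-way softmax
}

total = sum(layers.values())
for name, n in layers.items():
    print(f"{name}: {n:,}")
print(f"total: {total:,}")  # roughly 61 million, consistent with ~60M
```

Note that the two largest fully connected layers account for the vast majority of the parameters, which is why the paper's regularization effort targets the globally connected layers specifically.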