Exploring the Hidden Dimension in Accelerating Convolutional Neural Networks

ICLR 2018 · 2018-01-01

Zhihao Jia, Sina Lin, Charles R. Qi, Alex Aiken

Abstract

DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks. DeePa optimizes parallelism at the granularity of each individual layer in the network. We present an elimination-based algorithm that finds an optimal parallelism configuration for every layer. Our evaluation shows that DeePa achieves up to 6.5× speedup compared to state-of-the-art deep learning frameworks and reduces data transfers by up to 23×.
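The abstract describes choosing an optimal parallelism configuration per layer. As a minimal sketch of this idea (not DeePa's actual elimination algorithm, which operates on general computation graphs), the following dynamic program picks a configuration for each layer of a linear chain so that total compute plus inter-layer data-transfer cost is minimized; all names and cost functions here are illustrative assumptions.

```python
# Hedged sketch: per-layer parallelism configuration search on a linear
# chain of layers, minimizing compute + data-transfer cost. This is an
# illustrative dynamic program, not DeePa's graph-elimination algorithm.

def optimal_configs(layers, configs, compute_cost, transfer_cost):
    """layers: list of layer names; configs: candidate parallelism configs.
    compute_cost(layer, cfg) -> float; transfer_cost(cfg_a, cfg_b) -> float
    (cost of repartitioning activations between adjacent layers).
    Returns (total_cost, per-layer config assignment)."""
    # best[cfg] = (cheapest cost of the prefix ending in cfg, config path)
    best = {c: (compute_cost(layers[0], c), [c]) for c in configs}
    for layer in layers[1:]:
        new_best = {}
        for c in configs:
            # Cheapest way to reach this layer with configuration c.
            cost, path = min(
                (prev_cost + transfer_cost(p, c), prev_path)
                for p, (prev_cost, prev_path) in best.items()
            )
            new_best[c] = (cost + compute_cost(layer, c), path + [c])
        best = new_best
    return min(best.values())
```

With uniform costs favoring one configuration, the search simply keeps that configuration everywhere to avoid repartitioning; the interesting cases arise when, as the paper observes, different layers prefer different parallelism dimensions.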
