
Parallelized Training of Restricted Boltzmann Machines using Markov-Chain Monte Carlo Methods

2019-10-14

Pei Yang, Srinivas Varadharajan, Lucas A. Wilson, Don D. Smith II, John A. Lockman III, Vineet Gundecha, Quy Ta


Abstract

The Restricted Boltzmann Machine (RBM) is a generative stochastic neural network that can be applied to the collaborative filtering techniques used by recommendation systems. The prediction accuracy of the RBM model is usually better than that of other models for recommendation systems. However, training the RBM model involves Markov-Chain Monte Carlo (MCMC) methods, which are computationally expensive. In this paper, we successfully apply distributed parallel training using the Horovod framework to improve the training time of the RBM model. Our tests show that distributed training of the RBM model has good scaling efficiency. We also show that this approach reduces the training time to a little over 12 minutes on 64 CPU nodes, compared to 5 hours on a single CPU node. This makes RBM models more practically applicable in recommendation systems.
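The MCMC cost the abstract refers to comes from the negative phase of RBM training, which is typically approximated by a short Gibbs chain (contrastive divergence, CD-k). As an illustration only, not the paper's implementation, here is a minimal pure-Python CD-1 sketch; all names and hyperparameters (learning rate, layer sizes) are assumptions for the example:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class RBM:
    """Toy binary RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden):
        # Small random weights; zero biases for both layers.
        self.W = [[random.gauss(0.0, 0.1) for _ in range(n_hidden)]
                  for _ in range(n_visible)]
        self.b_v = [0.0] * n_visible
        self.b_h = [0.0] * n_hidden

    def hidden_probs(self, v):
        return [sigmoid(self.b_h[j] +
                        sum(v[i] * self.W[i][j] for i in range(len(v))))
                for j in range(len(self.b_h))]

    def visible_probs(self, h):
        return [sigmoid(self.b_v[i] +
                        sum(h[j] * self.W[i][j] for j in range(len(h))))
                for i in range(len(self.b_v))]

    @staticmethod
    def sample(probs):
        # Bernoulli sampling: this is the stochastic (MCMC) part.
        return [1.0 if random.random() < p else 0.0 for p in probs]

    def cd1_update(self, v0, lr=0.1):
        # Positive phase: hidden activations driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = self.sample(ph0)
        # Negative phase: one Gibbs step (visible reconstruction, then hidden).
        pv1 = self.visible_probs(h0)
        v1 = self.sample(pv1)
        ph1 = self.hidden_probs(v1)
        # Gradient ascent on the CD-1 approximation of the log-likelihood.
        for i in range(len(v0)):
            for j in range(len(ph0)):
                self.W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
        for i in range(len(v0)):
            self.b_v[i] += lr * (v0[i] - v1[i])
        for j in range(len(ph0)):
            self.b_h[j] += lr * (ph0[j] - ph1[j])
        # Squared reconstruction error, a common training diagnostic.
        return sum((a - b) ** 2 for a, b in zip(v0, pv1))

# Example: fit two complementary binary patterns.
rbm = RBM(n_visible=6, n_hidden=3)
data = [[1.0, 1.0, 1.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]]
for epoch in range(200):
    err = sum(rbm.cd1_update(v) for v in data)
```

In the paper's setting, Horovod would wrap updates like `cd1_update` with an allreduce that averages gradients across workers; that dependency is omitted here so the sketch stays self-contained.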
