Solving the Rubik's Cube Without Human Knowledge
Stephen McAleer, Forest Agostinelli, Alexander Shmakov, Pierre Baldi
Code Available
- github.com/Dalkio/RL_rubiks (framework: none, ★ 0)
- github.com/JasperBusschers/multi-objective-Rubik-s-cube (framework: pytorch, ★ 0)
- github.com/nathangrinsztajn/rubiks_cube (framework: none, ★ 0)
- github.com/mkovalski/cs4995_cube (framework: tf, ★ 0)
- github.com/nascarsayan/dl-rubiks-autodidactic-solver (framework: tf, ★ 0)
- github.com/KeatonMueller/rl-cube (framework: pytorch, ★ 0)
- github.com/robbiejones96/RubiksSolver (framework: tf, ★ 0)
- github.com/AashrayAnand/rubiks-cube-reinforcement-learning (framework: none, ★ 0)
- github.com/kaletap/deep-cube-rl (framework: pytorch, ★ 0)
Abstract
A generally intelligent agent must be able to teach itself how to solve problems in complex domains with minimal human supervision. Recently, deep reinforcement learning algorithms combined with self-play have achieved superhuman proficiency in Go, Chess, and Shogi without human data or domain knowledge. In those environments, a reward is always received at the end of the game; in many combinatorial optimization environments, however, rewards are sparse and episodes are not guaranteed to terminate. We introduce Autodidactic Iteration: a novel reinforcement learning algorithm that teaches itself how to solve the Rubik's Cube with no human assistance. Our algorithm solves 100% of randomly scrambled cubes with a median solve length of 30 moves -- less than or equal to that of solvers that employ human domain knowledge.
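The core idea behind Autodidactic Iteration, as the abstract describes it, is that the agent generates its own training data by scrambling cubes outward from the solved state and labeling each visited state with a one-step bootstrapped value target. The sketch below is a minimal toy illustration of that target-generation loop, not the paper's implementation: the real Rubik's Cube environment and value network are replaced by a hypothetical one-dimensional "distance from solved" state and a lookup table, and the reward scheme (+1 at the solved state, -1 elsewhere) is an assumption for this toy.

```python
import random

random.seed(0)  # for reproducibility of this toy run

# Toy stand-in environment (NOT the real Rubik's Cube): the state is an
# integer distance from solved; the two actions step closer or farther.
SOLVED = 0
ACTIONS = (-1, +1)

def apply_move(state, action):
    return max(0, state + action)

def is_solved(state):
    return state == SOLVED

# value[s] stands in for the learned value network's output at state s.
value = {}

def estimate(state):
    return 1.0 if is_solved(state) else value.get(state, 0.0)

def adi_targets(scramble_depth):
    """Scramble outward from the solved state and label each visited
    state with the best one-step bootstrapped value over its children."""
    state, samples = SOLVED, []
    for _ in range(scramble_depth):
        state = apply_move(state, random.choice(ACTIONS))
        target = max(
            (1.0 if is_solved(apply_move(state, a)) else -1.0)
            + estimate(apply_move(state, a))
            for a in ACTIONS
        )
        samples.append((state, target))
    return samples

# Repeated sweeps drive the table toward a fixed point; a real
# implementation would instead take a gradient step on a network.
for _ in range(200):
    for s, t in adi_targets(scramble_depth=5):
        value[s] = t
```

After training, the table decreases monotonically with distance from solved, which is exactly what lets a search procedure follow the value estimate back toward the goal.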