
Enhancing Analogical Reasoning in the Abstraction and Reasoning Corpus via Model-Based RL

2024-08-27

Jihwan Lee, Woochang Sim, Sejin Kim, Sundong Kim


Abstract

This paper demonstrates that model-based reinforcement learning (model-based RL) is a suitable approach for analogical reasoning tasks. We hypothesize that model-based RL can solve such tasks more efficiently by constructing internal models of the environment. To test this, we compared DreamerV3, a model-based RL method, with Proximal Policy Optimization (PPO), a model-free RL method, on Abstraction and Reasoning Corpus (ARC) tasks. Our results indicate that model-based RL not only outperforms model-free RL in learning and generalizing from single tasks but also shows significant advantages in reasoning across similar tasks.
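To make the experimental setup concrete, the sketch below shows one way a single ARC task could be framed as an episodic RL environment, as a precursor to training an agent such as DreamerV3 or PPO on it. This is an illustrative assumption, not the paper's actual code: the class name, action encoding, and sparse terminal reward are all hypothetical choices.

```python
# Hypothetical sketch: framing one ARC task as an RL environment.
# The agent recolors grid cells one action at a time, aiming to
# reproduce the task's target output grid. All names and the reward
# scheme are illustrative assumptions, not the paper's implementation.

class ARCGridEnv:
    """Toy episodic environment over a single ARC input/output pair."""

    def __init__(self, input_grid, target_grid, num_colors=10):
        self.input_grid = [row[:] for row in input_grid]
        self.target = target_grid
        self.h = len(input_grid)
        self.w = len(input_grid[0])
        self.num_colors = num_colors  # ARC uses 10 colors (0-9)
        self.reset()

    def reset(self):
        # Start each episode from the task's input grid.
        self.grid = [row[:] for row in self.input_grid]
        self.steps = 0
        return [row[:] for row in self.grid]  # observation copy

    def step(self, action):
        # A flat integer action encodes (cell index, color).
        cell, color = divmod(action, self.num_colors)
        r, c = divmod(cell, self.w)
        self.grid[r][c] = color
        self.steps += 1
        solved = self.grid == self.target
        done = solved or self.steps >= self.h * self.w
        reward = 1.0 if solved else 0.0  # sparse reward on exact match
        return [row[:] for row in self.grid], reward, done


# Usage: a 2x2 task whose solution is painting the top-left cell color 1.
env = ARCGridEnv(input_grid=[[0, 0], [0, 0]], target_grid=[[1, 0], [0, 0]])
obs = env.reset()
# Action for (row=0, col=0, color=1): (0 * w + 0) * num_colors + 1 = 1
obs, reward, done = env.step(1)  # solves the task in one step
```

Under a sparse-reward formulation like this, a model-based agent's learned world model can simulate the effect of candidate edits before acting, which is one intuition for why it might generalize better than a model-free baseline.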
