
Cooperative multi-agent reinforcement learning for high-dimensional nonequilibrium control

2021-11-12 · Code Available

Shriram Chennakesavalu, Grant M. Rotskoff

Abstract

Experimental advances enabling high-resolution external control create new opportunities to produce materials with exotic properties. In this work, we investigate how a multi-agent reinforcement learning approach can be used to design external control protocols for self-assembly. We find that a fully decentralized approach performs remarkably well even with a "coarse" level of external control. More importantly, we see that a partially decentralized approach, in which we include information about the local environment, allows us to better control our system toward some target distribution. We explain this by analyzing our approach as a partially observed Markov decision process. With a partially decentralized approach, the agent is able to act more presciently, both by preventing the formation of undesirable structures and by better stabilizing target structures, as compared to a fully decentralized approach.
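The distinction the abstract draws between the two observation schemes can be illustrated with a toy sketch (this is not the authors' code; the ring topology, placeholder policy, and toy dynamics are assumptions for illustration). Each "agent" controls one local degree of freedom; in the fully decentralized case it observes only its own state, while in the partially decentralized case it also observes its local environment, here modeled as its two neighbors.

```python
import random

def fully_decentralized_obs(i, states):
    """Fully decentralized: an agent observes only its own state."""
    return (states[i],)

def partially_decentralized_obs(i, states):
    """Partially decentralized: the agent also sees its local
    environment (here, its two neighbors on a ring)."""
    n = len(states)
    return (states[i], states[(i - 1) % n], states[(i + 1) % n])

def policy(obs, target=0.5):
    """Placeholder policy: nudge the controlled state up or down
    depending on the mean of whatever the agent observes."""
    return 0.1 if sum(obs) / len(obs) < target else -0.1

def rollout(states, obs_fn, steps=20):
    """Each agent acts independently on its own observation."""
    for _ in range(steps):
        actions = [policy(obs_fn(i, states)) for i in range(len(states))]
        states = [min(1.0, max(0.0, s + a)) for s, a in zip(states, actions)]
    return states

random.seed(0)
init = [random.random() for _ in range(8)]
final = rollout(init, fully_decentralized_obs)
# With purely local observations, each agent settles into a band
# around the target on its own.
converged = all(abs(s - 0.5) <= 0.11 for s in final)
print(converged)
```

Swapping `fully_decentralized_obs` for `partially_decentralized_obs` couples each agent's action to its neighborhood, which is the mechanism the abstract credits with more prescient control.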
