Bayesian Inverse Reinforcement Learning for Collective Animal Movement

2020-09-08 · Code Available

Toryn L. J. Schafer, Christopher K. Wikle, Mevin B. Hooten

Abstract

Agent-based methods allow for defining simple rules that generate complex group behaviors. The governing rules of such models are typically set a priori, and parameters are tuned from observed behavior trajectories. Instead of making simplifying assumptions across all anticipated scenarios, inverse reinforcement learning provides inference on the short-term (local) rules governing long-term behavior policies by using properties of a Markov decision process. We use the computationally efficient linearly-solvable Markov decision process to learn the local rules governing collective movement in a simulation of the self-propelled particle (SPP) model and in a data application to a captive guppy population. The estimation of the behavioral decision costs is done in a Bayesian framework with basis function smoothing. We recover the true costs in the SPP simulation and find that the guppies value collective movement more than targeted movement toward shelter.
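The computational appeal of the linearly-solvable MDP is that the exponentiated value function (the "desirability" z(x) = exp(-v(x))) satisfies a linear fixed-point equation under the passive dynamics, so it can be found without nonlinear Bellman backups. The sketch below illustrates this on a toy 1-D chain; the state costs `q`, the random-walk passive dynamics `P`, and the power-iteration solve are illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np

# Toy linearly-solvable MDP: a 1-D chain where the agent pays a
# per-step state cost q(x) and the last state is an absorbing goal.
# The desirability z(x) = exp(-v(x)) satisfies the LINEAR equation
#   z = exp(-q) * (P @ z)
# under the passive (uncontrolled) dynamics P, which we solve by
# simple power iteration.

n = 6                        # states 0..5; state 5 is the absorbing goal
q = np.full(n, 0.5)          # assumed per-step state cost
q[-1] = 0.0                  # no cost at the goal

# Passive dynamics: unbiased random walk with a reflecting left edge.
P = np.zeros((n, n))
for s in range(n - 1):
    P[s, max(s - 1, 0)] += 0.5
    P[s, s + 1] += 0.5
P[-1, -1] = 1.0              # goal is absorbing

# Power iteration on the linear fixed point.
z = np.ones(n)
for _ in range(5000):
    z_new = np.exp(-q) * (P @ z)
    z_new[-1] = 1.0          # boundary condition at the absorbing goal
    if np.max(np.abs(z_new - z)) < 1e-12:
        z = z_new
        break
    z = z_new

v = -np.log(z)               # value function; the optimal policy tilts P toward high z
print(np.round(v, 3))
```

In the inverse problem the paper addresses, the costs `q` are unknown and are inferred from observed trajectories in a Bayesian framework, rather than specified as above.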
