LUCIDGames: Online Unscented Inverse Dynamic Games for Adaptive Trajectory Prediction and Planning
Simon Le Cleac’h, Mac Schwager and Zachary Manchester
Code: https://github.com/RoboticExplorationLab/LUCIDGames.jl
Abstract
Existing game-theoretic planning methods assume that the robot knows the objective functions of the other agents a priori while, in practical scenarios, this is rarely the case. This paper introduces LUCIDGames, an inverse optimal control algorithm that is able to estimate the other agents' objective functions in real time, and incorporate those estimates online into a receding-horizon game-theoretic planner. LUCIDGames solves the inverse optimal control problem by recasting it in a recursive parameter-estimation framework. LUCIDGames uses an unscented Kalman filter (UKF) to iteratively update a Bayesian estimate of the other agents' cost function parameters, improving that estimate online as more data is gathered from the other agents' observed trajectories. The planner then takes account of the uncertainty in the Bayesian parameter estimates of other agents by planning a trajectory for the robot subject to uncertainty ellipse constraints. The algorithm assumes no explicit communication or coordination between the robot and the other agents in the environment. An MPC implementation of LUCIDGames demonstrates real-time performance on complex autonomous driving scenarios with an update frequency of 40 Hz. Empirical results demonstrate that LUCIDGames improves the robot's performance over existing game-theoretic and traditional MPC planning approaches. Our implementation of LUCIDGames is available at https://github.com/RoboticExplorationLab/LUCIDGames.jl.
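To make the recursive parameter-estimation idea concrete, the sketch below shows one generic UKF measurement update for a static parameter vector. This is a minimal illustration under stated assumptions, not the paper's implementation: here the observation model `h` is a stand-in for the game solver that maps another agent's cost parameters to a predicted trajectory (the paper differentiates through the game's equilibrium), and the function name, random-walk parameter dynamics, and noise covariances `Q` and `R` are all hypothetical choices.

```python
import numpy as np

def ukf_param_update(mu, P, y, h, R, Q, alpha=0.1, beta=2.0, kappa=0.0):
    """One UKF update of a cost-parameter estimate (illustrative sketch).

    mu, P : prior mean and covariance of the parameter vector theta
    y     : observed trajectory features of the other agent (stacked vector)
    h     : observation model theta -> predicted features; stands in for
            solving the dynamic game given theta
    R, Q  : observation-noise and parameter process-noise covariances
    """
    n = mu.size
    lam = alpha**2 * (n + kappa) - n
    P_pred = P + Q                                 # random-walk prior on theta
    L = np.linalg.cholesky((n + lam) * P_pred)
    sigmas = np.vstack([mu, mu + L.T, mu - L.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    Y = np.array([h(s) for s in sigmas])           # propagate through model
    y_hat = wm @ Y                                 # predicted observation
    dY = Y - y_hat
    dX = sigmas - mu
    S_yy = dY.T @ (wc[:, None] * dY) + R           # innovation covariance
    P_xy = dX.T @ (wc[:, None] * dY)               # cross covariance
    K = P_xy @ np.linalg.inv(S_yy)                 # Kalman gain
    mu_new = mu + K @ (y - y_hat)                  # updated parameter mean
    P_new = P_pred - K @ S_yy @ K.T                # updated covariance
    return mu_new, P_new
```

Repeating this update as new observations arrive shrinks the posterior covariance `P`, which is what the planner then uses to form its uncertainty ellipse constraints.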