
Robustness Evaluation of Offline Reinforcement Learning for Robot Control Against Action Perturbations

2024-12-25

Shingo Ayabe, Takuto Otomo, Hiroshi Kera, Kazuhiko Kawamoto



Abstract

Offline reinforcement learning, which learns a policy solely from pre-collected datasets without environmental interaction, has attracted growing attention. Like traditional online deep reinforcement learning, this approach is particularly promising for robot control applications. Nevertheless, its robustness against real-world challenges, such as joint actuator faults in robots, remains a critical concern. This study evaluates the robustness of existing offline reinforcement learning methods on legged-robot locomotion tasks from OpenAI Gym, using average episodic reward as the metric. To evaluate robustness, we simulate actuator failures by injecting both random perturbations and adversarial perturbations, the latter representing worst-case scenarios, into the joint torque signals. Our experiments show that existing offline reinforcement learning methods are significantly vulnerable to these action perturbations, and more so than online reinforcement learning methods, highlighting the need for more robust approaches in this field.
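The abstract names two perturbation models applied to joint torques: random noise and a worst-case adversarial shift. A minimal sketch of both is below. The function names, the normalized torque range [-1, 1], the noise scale epsilon, and the FGSM-style sign step against a critic's action gradient are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def random_perturbation(action, epsilon, rng):
    """Simulate a noisy actuator: uniform torque noise in [-epsilon, epsilon]."""
    noise = rng.uniform(-epsilon, epsilon, size=action.shape)
    return np.clip(action + noise, -1.0, 1.0)

def adversarial_perturbation(action, q_grad, epsilon):
    """Worst-case fault sketch: step against the gradient of a critic Q(s, a)
    with respect to the action, so the perturbed torque lowers the estimated
    return (an FGSM-style sign step; assumed, not the paper's exact attack)."""
    return np.clip(action - epsilon * np.sign(q_grad), -1.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in critic: Q peaks at a_star, gradient is -2 * (a - a_star).
    a_star = np.array([0.5, -0.5, 0.0])
    q = lambda a: -np.sum((a - a_star) ** 2)
    a = np.zeros(3)  # nominal policy action (3 joint torques)
    a_rand = random_perturbation(a, 0.1, rng)
    a_adv = adversarial_perturbation(a, -2.0 * (a - a_star), 0.1)
    print(q(a), q(a_rand), q(a_adv))
```

In this toy run the adversarial perturbation degrades the critic value more than random noise of the same magnitude, which mirrors the abstract's framing of adversarial perturbations as worst-case actuator faults.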
