
Neural Operator based Reinforcement Learning for Control of first-order PDEs with Spatially-Varying State Delay

2025-01-30 · Code Available

Jiaqi Hu, Jie Qi, Jing Zhang


Abstract

Controlling distributed parameter systems affected by delays is challenging, particularly when the delays depend on spatial variables. Integrating analytical control theory with learning-based control in a unified scheme is an increasingly promising direction. In this paper, we address the stabilization of an unstable first-order hyperbolic PDE with spatially-varying delays by combining PDE backstepping control strategies with deep reinforcement learning (RL). To remove the assumption on the delay function required by the backstepping design, we propose a soft actor-critic (SAC) architecture that incorporates a DeepONet to approximate the backstepping controller. The DeepONet extracts features from the backstepping controller and feeds them into the policy network. In simulations, our algorithm outperforms both a baseline SAC without prior backstepping knowledge and the analytical backstepping controller.
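The abstract describes feeding DeepONet features into an SAC policy network. The sketch below illustrates the general shape of such an architecture, assuming a standard branch/trunk DeepONet over a sampled PDE state and a Gaussian SAC policy head; all layer sizes, names, and the feature-passing scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DeepONetFeatures(nn.Module):
    # Illustrative DeepONet: the branch net encodes the PDE state sampled
    # at m sensor points, the trunk net encodes query coordinates, and
    # their inner product yields per-point features for the policy.
    def __init__(self, m_sensors=32, p_basis=16, coord_dim=1):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(m_sensors, 64), nn.Tanh(), nn.Linear(64, p_basis))
        self.trunk = nn.Sequential(
            nn.Linear(coord_dim, 64), nn.Tanh(), nn.Linear(64, p_basis))

    def forward(self, u_samples, coords):
        # u_samples: (batch, m_sensors); coords: (batch, n_points, coord_dim)
        b = self.branch(u_samples)               # (batch, p_basis)
        t = self.trunk(coords)                   # (batch, n_points, p_basis)
        return torch.einsum('bp,bnp->bn', b, t)  # (batch, n_points)

class PolicyNet(nn.Module):
    # SAC-style Gaussian policy head consuming the DeepONet features.
    def __init__(self, n_points=16, act_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_points, 64), nn.ReLU(), nn.Linear(64, 2 * act_dim))

    def forward(self, feats):
        mu, log_std = self.net(feats).chunk(2, dim=-1)
        return mu, log_std.clamp(-5, 2)  # clamp for numerical stability

torch.manual_seed(0)
don = DeepONetFeatures()
pi = PolicyNet()
u = torch.randn(4, 32)        # PDE state u(x_i, t) sampled at 32 sensors
x = torch.rand(4, 16, 1)      # 16 query points per sample in [0, 1]
feats = don(u, x)             # (4, 16) backstepping-informed features
mu, log_std = pi(feats)       # (4, 1) action mean and log-std
```

In an actual SAC loop the action would be sampled via the reparameterization trick from `Normal(mu, exp(log_std))`, and twin Q-critics would consume the same features alongside the action.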
