Hierarchical RL-MPC for Demand Response Scheduling

2025-02-19

Maximilian Bloor, Ehecatl Antonio del Rio Chanona, Calvin Tsay


Abstract

This paper presents a hierarchical framework for demand response optimization in air separation units (ASUs) that combines reinforcement learning (RL) with linear model predictive control (LMPC). We investigate two control architectures: a direct RL approach and a control-informed methodology where an RL agent provides setpoints to a lower-level LMPC. The proposed RL-LMPC framework demonstrates improved sample efficiency during training and better constraint satisfaction compared to direct RL control. Using an industrial ASU case study, we show that our approach successfully manages operational constraints while optimizing electricity costs under time-varying pricing. Results indicate that the RL-LMPC architecture achieves comparable economic performance to direct RL while providing better robustness and requiring fewer training samples to converge. The framework offers a practical solution for implementing flexible operation strategies in process industries, bridging the gap between data-driven methods and traditional control approaches.
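The hierarchy described in the abstract has two layers: an RL agent that reacts to electricity prices by choosing an operating setpoint, and a lower-level LMPC that tracks that setpoint while enforcing operational constraints. A minimal sketch of this loop is below; the dynamics, policy, price signal, and all names (`rl_setpoint`, `lmpc_step`, the bounds) are illustrative assumptions, not the paper's actual model. The "LMPC" here is reduced to a one-step horizon with scalar dynamics x⁺ = x + u and an input bound |u| ≤ u_max, for which minimizing the tracking error (x + u − setpoint)² has a closed-form clipped solution.

```python
# Illustrative sketch of a hierarchical RL-LMPC loop (assumed toy model,
# not the paper's ASU dynamics or trained policy).

def rl_setpoint(price, low=0.6, high=1.0):
    """Stand-in for the RL policy: raise the production setpoint
    when electricity is cheap, lower it when it is expensive."""
    return high if price < 0.5 else low

def lmpc_step(x, setpoint, u_max=0.1):
    """One-step linear MPC for x+ = x + u with |u| <= u_max.
    Minimizing (x + u - setpoint)^2 over the box constraint
    gives the clipped closed-form solution."""
    return max(-u_max, min(u_max, setpoint - x))

def simulate(prices, x0=0.8):
    """Roll the two-layer loop forward over a price trajectory."""
    x, traj = x0, []
    for p in prices:
        sp = rl_setpoint(p)     # upper layer: setpoint from the RL policy
        u = lmpc_step(x, sp)    # lower layer: constrained tracking move
        x = x + u
        traj.append(x)
    return traj

# Expensive power for two steps, then cheap power for three.
traj = simulate([0.9, 0.9, 0.2, 0.2, 0.2])
```

The point of the sketch is the division of labor: the RL layer never acts on the plant directly, so rate and bound constraints are guaranteed by construction in the tracking layer, which is the robustness argument the abstract makes for RL-LMPC over direct RL control. In the paper's setting the tracking layer is a full linear MPC over a receding horizon rather than this one-step closed form.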
