
Reinforcement Learning Increases Wind Farm Power Production by Enabling Closed-Loop Collaborative Control

2025-06-25 · Code Available

Andrew Mole, Max Weissenbacher, Georgios Rigas, Sylvain Laizet


Abstract

Traditional wind farm control operates each turbine independently to maximize its individual power output. However, coordinated wake steering across the entire farm can substantially increase combined wind farm energy production. Although dynamic closed-loop control has proven effective in flow control applications, wind farm optimization has relied primarily on static, low-fidelity simulators that ignore critical turbulent flow dynamics. In this work, we present the first reinforcement learning (RL) controller integrated directly with high-fidelity large-eddy simulation (LES), enabling real-time response to atmospheric turbulence through collaborative, dynamic control strategies. Our RL controller achieves a 4.30% increase in wind farm power output compared to baseline operation, nearly doubling the 2.19% gain from static optimal yaw control obtained through Bayesian optimization. These results establish dynamic flow-responsive control as a transformative approach to wind farm optimization, with direct implications for accelerating renewable energy deployment toward net-zero targets.
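
The closed-loop pattern the abstract describes can be sketched as an agent-environment interaction: at each control step the policy observes flow measurements, adjusts turbine yaw angles within actuation limits, and receives the farm's total power as reward. The sketch below is purely illustrative and is not the authors' implementation; `ToyFarmEnv`, the turbine count, the yaw bounds, and the synthetic power proxy are all assumptions standing in for the LES coupling.

```python
import numpy as np

N_TURBINES = 4
YAW_LIMIT = 30.0   # degrees; illustrative actuation bound
YAW_STEP = 2.0     # max yaw change per control step (degrees)

class ToyFarmEnv:
    """Stand-in for the LES-coupled environment; observations are synthetic."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.yaw = np.zeros(N_TURBINES)

    def reset(self):
        self.yaw[:] = 0.0
        return self._observe()

    def step(self, action):
        # Action in [-1, 1]^N, scaled to a bounded yaw increment per step.
        self.yaw = np.clip(self.yaw + YAW_STEP * np.asarray(action),
                           -YAW_LIMIT, YAW_LIMIT)
        # Synthetic total-power proxy (a real reward would come from the LES).
        reward = float(np.sum(np.cos(np.radians(self.yaw - 10.0))))
        return self._observe(), reward

    def _observe(self):
        # Stand-in for turbulent inflow measurements plus current yaw state.
        wind = self.rng.normal(8.0, 0.5, N_TURBINES)
        return np.concatenate([wind, self.yaw])

env = ToyFarmEnv()
obs = env.reset()
total_reward = 0.0
policy_rng = np.random.default_rng(1)
for _ in range(10):
    # A trained RL policy would map obs -> action; random exploration here.
    action = policy_rng.uniform(-1.0, 1.0, N_TURBINES)
    obs, reward = env.step(action)
    total_reward += reward
```

The key structural point is the feedback loop itself: unlike static yaw set-points from an offline optimizer, the action at each step depends on the current (turbulent) flow observation.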
