
Deep reinforcement learning uncovers processes for separating azeotropic mixtures without prior knowledge

2023-10-10 · Code Available

Quirin Göttl, Jonathan Pirnay, Jakob Burger, Dominik G. Grimm


Abstract

Process synthesis in chemical engineering is a complex planning problem due to vast search spaces, continuous parameters, and the need for generalization. In recent years, deep reinforcement learning agents trained without prior knowledge have been shown to outperform humans in various complex planning problems. Existing work on reinforcement learning for flowsheet synthesis demonstrates promising concepts but focuses on narrow problems within a single chemical system, limiting its practicality. We present a general deep reinforcement learning approach for flowsheet synthesis and demonstrate the adaptability of a single agent to the general task of separating binary azeotropic mixtures. Without prior knowledge, the agent learns to craft near-optimal flowsheets for multiple chemical systems, accounting for different feed compositions and conceptual approaches. On average, it separates more than 99% of the involved materials into pure components, while autonomously discovering fundamental process engineering paradigms. This highlights the agent's planning flexibility and marks an encouraging step toward true generality.
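The abstract frames flowsheet synthesis as a sequential decision problem: an agent places unit operations step by step and is rewarded for the purity of the resulting streams. The sketch below is purely illustrative and not taken from the paper — the environment, action names ("enrich", "strip", "stop"), composition dynamics, and reward shape are all simplified assumptions — but it shows the general shape of such an MDP on a binary mixture, solved here with tabular Q-learning rather than the deep RL method the authors use.

```python
import random

class ToySeparationEnv:
    """Hypothetical toy environment (not the paper's): the state is the mole
    fraction of component A in the current stream, discretized into bins.
    Each 'unit operation' shifts the composition toward one pure component
    at a small cost; stopping is rewarded by the purity achieved."""
    N_BINS = 11                               # compositions 0.0, 0.1, ..., 1.0
    ACTIONS = ("enrich", "strip", "stop")     # illustrative unit operations

    def reset(self, x0=0.5):
        self.x = x0
        return self._bin()

    def _bin(self):
        return round(self.x * (self.N_BINS - 1))

    def step(self, action):
        if action == "stop":
            # Purity reward: 1.0 for a pure stream (x=0 or x=1), 0.0 at x=0.5.
            return self._bin(), 2 * abs(self.x - 0.5), True
        # Idealized separation step: shift composition by 0.1 per "stage".
        self.x = min(1.0, self.x + 0.2) if action == "enrich" else max(0.0, self.x - 0.2)
        return self._bin(), -0.05, False      # small cost per unit operation

def q_learn(env, episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(env.N_BINS) for a in env.ACTIONS}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if rng.random() < eps:
                a = rng.choice(env.ACTIONS)
            else:
                a = max(env.ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = env.step(a)
            target = r if done else r + gamma * max(q[(s2, act)] for act in env.ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q
```

After training, a greedy rollout from an equimolar feed drives the stream to one of the pure components before stopping, illustrating (in a drastically simplified setting) how purity-based rewards alone can teach an agent a sensible separation sequence.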
