
Using Deep Reinforcement Learning to Generate Rationales for Molecules

ICLR 2018 · 2018-01-01

Benson Chen, Connor Coley, Regina Barzilay, Tommi Jaakkola


Abstract

Deep learning algorithms are increasingly used in modeling chemical processes. However, black box predictions without rationales have limited use in practical applications, such as drug design. To this end, we learn to identify molecular substructures -- rationales -- that are associated with the target chemical property (e.g., toxicity). The rationales are learned in an unsupervised fashion, requiring no additional information beyond the end-to-end task. We formulate this problem as a reinforcement learning problem over the molecular graph, parametrized by two convolutional networks: one selects the rationale, and the other makes predictions based on it, with the latter inducing the reward function. We evaluate the approach on two benchmark toxicity datasets. We demonstrate that our model sustains high performance under the additional constraint that predictions strictly follow the rationales. Additionally, we validate the extracted rationales through comparison against those described in the chemical literature and through synthetic experiments.
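To make the formulation concrete, here is a minimal, hypothetical REINFORCE-style sketch of the two-component setup the abstract describes: a selector samples a subset of atoms (the rationale) and a predictor classifies the molecule from the selected atoms only, inducing the reward. The toy data, the per-atom logits, and the any-feature predictor are all illustrative assumptions; the paper's actual model uses graph convolutional networks over molecular graphs.

```python
# Illustrative sketch only: a per-atom Bernoulli selector trained with
# REINFORCE against a fixed toy predictor. Not the authors' implementation.
import math
import random

random.seed(0)

N_ATOMS = 6  # atoms per toy "molecule"

def make_molecule():
    # Toy data: each atom carries a binary feature; the label is the feature
    # of atom 2, which plays the role of the property-relevant substructure.
    feats = [random.randint(0, 1) for _ in range(N_ATOMS)]
    return feats, feats[2]

# Selector "network": one logit per atom (stands in for the first
# convolutional network, which scores substructures for selection).
sel_logits = [0.0] * N_ATOMS

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def select():
    # Sample a binary mask over atoms -- the rationale -- from
    # independent Bernoulli distributions.
    return [random.random() < sigmoid(l) for l in sel_logits]

def predict(feats, mask):
    # Toy predictor (stands in for the second network, which induces the
    # reward): flag the molecule if any selected atom carries the feature.
    return 1 if any(f for f, m in zip(feats, mask) if m) else 0

LR, BASELINE = 0.5, 0.5
for _ in range(3000):
    feats, label = make_molecule()
    mask = select()
    # Reward: does the prediction made *only from the rationale* match?
    reward = 1.0 if predict(feats, mask) == label else 0.0
    # REINFORCE: d log pi / d logit_i = (1 - p_i) if atom i was selected,
    # else -p_i; a constant baseline reduces gradient variance.
    for i in range(N_ATOMS):
        p = sigmoid(sel_logits[i])
        grad = (1.0 - p) if mask[i] else -p
        sel_logits[i] += LR * (reward - BASELINE) * grad

# The selector should learn to pick out the informative atom (index 2),
# since extra atoms only add noise to the rationale-restricted prediction.
best = max(range(N_ATOMS), key=lambda i: sel_logits[i])
print(best)
```

The key constraint from the abstract is visible here: the predictor never sees the full molecule, only the selected substructure, so high reward is attainable only when the rationale actually contains the property-relevant atoms.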
