
Bridging the Gap between Learning and Inference for Diffusion-Based Molecule Generation

2024-11-08

Peidong Liu, Wenbo Zhang, Xue Zhe, Jiancheng Lv, Xianggen Liu


Abstract

The efficacy of diffusion models in generating a spectrum of data modalities, including images, text, and videos, has spurred inquiries into their utility in molecular generation, yielding significant advancements in the field. However, the molecular generation process with diffusion models involves multiple autoregressive steps over a finite time horizon, which inherently leads to exposure bias. To address this issue, we propose a training framework named GapDiff. The core idea of GapDiff is to probabilistically use model-predicted conformations as ground truth during training, aiming to mitigate the data distribution disparity between training and inference and thereby enhance the binding affinity of generated molecules. We conduct experiments with a 3D molecular generation model on the CrossDocked2020 dataset, where Vina energy and diversity demonstrate the potency of our framework, achieving superior affinity. GapDiff is available at https://github.com/HUGHNew/gapdiff.
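The abstract's core idea, probabilistically substituting the model's own predicted conformation for the dataset conformation during training, resembles scheduled sampling. The sketch below illustrates that substitution step in isolation; the function name, arguments, and mixing schedule are illustrative assumptions, not the paper's actual API.

```python
import random

def choose_training_target(ground_truth, predicted, p_self, rng=None):
    """With probability p_self, train against the model's own predicted
    conformation instead of the ground-truth one, so the model sees
    inference-like inputs during training (mitigating exposure bias).
    A minimal sketch; all names here are hypothetical."""
    rng = rng or random.Random()
    return predicted if rng.random() < p_self else ground_truth

# Toy usage: p_self=0.0 always keeps the ground truth,
# p_self=1.0 always self-trains on the model's prediction.
gt, pred = "ground_truth_conformation", "predicted_conformation"
print(choose_training_target(gt, pred, 0.0))
print(choose_training_target(gt, pred, 1.0))
```

In practice, `p_self` would typically follow a schedule (small early in training, larger later) so the model first learns from clean data before being exposed to its own predictions.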
