Utilizing Large Language Models in an iterative paradigm with domain feedback for zero-shot molecule optimization
Khiem Le, Nitesh V. Chawla
Abstract
Molecule optimization is a critical task in drug discovery that aims to improve the desired properties of a given molecule. Large Language Models (LLMs) hold the potential to simulate this task efficiently by using natural language to direct the optimization, yet using them straightforwardly shows limited performance. In this work, we facilitate using LLMs in an iterative paradigm by proposing a simple yet effective domain feedback provider, namely Re^2DF. In detail, Re^2DF harnesses an external toolkit, RDKit, to handle molecule hallucination when the modified molecule is chemically invalid. Otherwise, Re^2DF verifies whether the modified molecule meets the objective; if not, its desired properties are computed and compared to those of the original molecule, establishing reliable domain feedback with the correct direction and distance toward the objective that explicitly guides the LLM to refine the modified molecule. We conduct experiments on both single- and multi-property objectives under two thresholds, where Re^2DF shows significant improvements. Notably, on 20 single-property objectives, Re^2DF improves the Hit ratio by 16.96% and 20.76% under the loose (l) and strict (s) thresholds, respectively; on 32 multi-property objectives, it improves the Hit ratio by 6.04% and 5.25%.
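The feedback loop described above can be sketched in Python. This is a minimal reconstruction from the abstract, not the authors' implementation: the function name `make_feedback` and the exact wording of the feedback messages are hypothetical. In practice, `prop_fn` would wrap RDKit (e.g., `Chem.MolFromSmiles` returns `None` for an invalid SMILES, which triggers the hallucination branch); here it is an injected callable so the sketch stays self-contained.

```python
# Sketch of a Re^2DF-style domain feedback provider (reconstructed from the
# abstract; names and message formats are illustrative, not the paper's code).
from typing import Callable, Optional


def make_feedback(orig_smiles: str,
                  new_smiles: str,
                  prop_fn: Callable[[str], Optional[float]],
                  target: float,
                  maximize: bool = True) -> Optional[str]:
    """Return a natural-language feedback string for the LLM, or None if
    the modified molecule already meets the objective.

    prop_fn computes the desired property and returns None for a chemically
    invalid molecule (in practice, an RDKit-backed check and descriptor).
    """
    new_val = prop_fn(new_smiles)
    if new_val is None:
        # Molecule hallucination: the modified SMILES is chemically invalid.
        return (f"{new_smiles} is not a valid molecule; "
                f"please propose a chemically valid SMILES.")

    met = new_val >= target if maximize else new_val <= target
    if met:
        return None  # objective satisfied, no refinement needed

    # Compare against the original molecule to establish direction and
    # distance toward the objective.
    orig_val = prop_fn(orig_smiles)
    direction = "increase" if maximize else "decrease"
    distance = abs(target - new_val)
    trend = ("closer to" if abs(target - new_val) < abs(target - orig_val)
             else "no closer to")
    return (f"The property value of {new_smiles} is {new_val:.2f} "
            f"({trend} the target than {orig_smiles} at {orig_val:.2f}); "
            f"{direction} it by at least {distance:.2f} to reach {target:.2f}.")
```

A toy property table illustrates the three outcomes: a valid molecule that meets the target yields `None`, an invalid one yields a validity message, and a valid-but-insufficient one yields direction-and-distance feedback to feed back into the LLM prompt.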