Translation between Molecules and Natural Language
Carl Edwards, Tuan Lai, Kevin Ros, Garrett Honke, Kyunghyun Cho, Heng Ji
Code
- github.com/blender-nlp/MolT5 (official implementation, PyTorch)
Abstract
We present MolT5, a self-supervised learning framework for pretraining models on a vast amount of unlabeled natural language text and molecule strings. MolT5 allows for new, useful, and challenging analogs of traditional vision-language tasks, such as molecule captioning and text-based de novo molecule generation (together: translation between molecules and language), which we explore for the first time. Because MolT5 pretrains models on single-modal data, it helps overcome the data scarcity that afflicts the chemistry domain. Furthermore, we consider several metrics, including a new cross-modal embedding-based metric, to evaluate molecule captioning and text-based molecule generation. Our results show that MolT5-based models generate outputs, both molecules and captions, that are in many cases high quality.
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ChEBI-20 | MolT5-Large | BLEU-2 | 59.4 | — | Unverified |
| ChEBI-20 | MolT5-Base | BLEU-2 | 54.0 | — | Unverified |
| ChEBI-20 | MolT5-Small | BLEU-2 | 51.9 | — | Unverified |
| L+M-24 | MolT5-Large | BLEU-2 | 76.9 | — | Unverified |
| L+M-24 | MolT5-Base | BLEU-2 | 73.8 | — | Unverified |
| L+M-24 | MolT5-Small | BLEU-2 | 70.9 | — | Unverified |
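The table above reports BLEU-2, i.e. BLEU computed over unigrams and bigrams. As a rough illustration of what that score measures, here is a minimal sentence-level BLEU-2 sketch in pure Python (the paper's evaluation most likely uses a standard corpus-level toolkit implementation with smoothing; the function and tokenization here are simplified assumptions, not the authors' exact setup):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu2(reference, candidate):
    """Sentence-level BLEU-2 sketch: geometric mean of clipped
    1-gram and 2-gram precisions, times a brevity penalty.
    `reference` and `candidate` are lists of tokens."""
    precisions = []
    for n in (1, 2):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # no smoothing in this sketch
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(candidate) > len(reference) else math.exp(
        1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)
```

A perfect match scores 1.0 (i.e. 100 in the table's scale), while a caption sharing no unigrams with the reference scores 0.0; the claimed scores above fall in between.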