SOTAVerified

Molecule Captioning

Molecular description generation (molecule captioning) is the task of producing a textual description of a molecule's structure, properties, biological activity, and applications from its molecular descriptors. It gives chemists and biologists quick access to essential molecular information, efficiently guiding their research and experiments.
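The leaderboards below score generated captions with BLEU-2, i.e. BLEU computed over unigrams and bigrams (reported on a 0-100 scale). A minimal sketch of sentence-level BLEU-2 in plain Python follows; the example caption is hypothetical and not drawn from any benchmark dataset.

```python
# Minimal sketch of sentence-level BLEU-2: the geometric mean of
# modified 1-gram and 2-gram precision, scaled by a brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu2(reference, candidate):
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in (1, 2):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        # Modified precision: clip each candidate n-gram count by its
        # count in the reference, so repetition is not rewarded.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

ref = "the molecule is an aromatic primary amine"
print(round(bleu2(ref, ref), 3))                          # 1.0
print(round(bleu2(ref, "the molecule is an amine"), 3))   # 0.581
```

Published numbers typically use corpus-level BLEU with smoothing (e.g. NLTK's `corpus_bleu`), so scores from this sketch will not match leaderboard values exactly.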

Papers

Showing 1-25 of 25 papers

| Title | Status | Hype |
|---|---|---|
| BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning | Code | 2 |
| MolFM: A Multimodal Molecular Foundation Model | Code | 2 |
| Towards 3D Molecule-Text Interpretation in Language Models | Code | 2 |
| Vector-ICL: In-context Learning with Continuous Vector Representations | Code | 1 |
| 3D-MolT5: Leveraging Discrete Structural Information for Molecule-Text Modeling | Code | 1 |
| Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective | Code | 1 |
| From Artificially Real to Real: Leveraging Pseudo Data from Large Language Models for Low-Resource Molecule Discovery | Code | 1 |
| Atomas: Hierarchical Alignment on Molecule-Text for Unified Molecule Understanding and Generation | Code | 1 |
| GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text | Code | 1 |
| Graph-based Molecular Representation Learning | Code | 1 |
| InstructMol: Multi-Modal Integration for Building a Versatile and Reliable Molecular Assistant in Drug Discovery | Code | 1 |
| A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language | Code | 1 |
| MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter | Code | 1 |
| ReactXT: Understanding Molecular "Reaction-ship" via Reaction-Contextualized Molecule-Text Pretraining | Code | 1 |
| Translation between Molecules and Natural Language | Code | 1 |
| Unifying Molecular and Textual Representations via Multi-task Language Modelling | Code | 1 |
| BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations | Code | 1 |
| Property Enhanced Instruction Tuning for Multi-task Molecule Generation with Large Language Models | Code | 1 |
| MolReFlect: Towards Fine-grained In-Context Alignment between Molecules and Texts | | 0 |
| Mol-LLM: Multimodal Generalist Molecular LLM with Improved Graph Utilization | | 0 |
| XMolCap: Advancing Molecular Captioning through Multimodal Fusion and Explainable Graph Neural Networks | Code | 0 |
| Automatic Annotation Augmentation Boosts Translation between Molecules and Natural Language | Code | 0 |
| GeomCLIP: Contrastive Geometry-Text Pre-training for Molecules | Code | 0 |
| Mol2Lang-VLM: Vision- and Text-Guided Generative Pre-trained Language Models for Advancing Molecule Captioning through Multimodal Fusion | Code | 0 |
| MolXPT: Wrapping Molecules with Text for Generative Pre-training | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mol-LLM (Mistral-Instruct-v0.2) | BLEU-2 | 73.2 | | Unverified |
| 2 | Mol-LLM (LLaMA2-Chat) | BLEU-2 | 72.7 | | Unverified |
| 3 | MolReFlect | BLEU-2 | 67.6 | | Unverified |
| 4 | BioT5+ | BLEU-2 | 66.6 | | Unverified |
| 5 | BioT5 | BLEU-2 | 63.5 | | Unverified |
| 6 | Text+Chem T5-augm-Base | BLEU-2 | 62.5 | | Unverified |
| 7 | XMolCap | BLEU-2 | 62.0 | | Unverified |
| 8 | MolCA, Galac1.3B | BLEU-2 | 62.0 | | Unverified |
| 9 | MolCA, Galac125M | BLEU-2 | 61.6 | | Unverified |
| 10 | Mol2Lang-VLM | BLEU-2 | 61.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mol2Lang-VLM | BLEU-2 | 77.7 | | Unverified |
| 2 | XMolCap | BLEU-2 | 77.4 | | Unverified |
| 3 | MolT5-Large | BLEU-2 | 76.9 | | Unverified |
| 4 | Nach0 | BLEU-2 | 73.81 | | Unverified |
| 5 | MolT5-Base | BLEU-2 | 73.8 | | Unverified |
| 6 | MolT5-Small | BLEU-2 | 70.9 | | Unverified |