Enhancing the efficiency of protein language models with minimal wet-lab data through few-shot learning

2024-02-03

Ziyi Zhou, Liang Zhang, Yuanxi Yu, Mingchen Li, Liang Hong, Pan Tan


Abstract

Accurately modeling protein fitness landscapes is of great importance for protein engineering. Owing to their capacity and representational power, pre-trained protein language models have recently achieved state-of-the-art performance in predicting protein fitness without experimental data. However, their predictions are limited in both accuracy and interpretability. Furthermore, improving such deep learning models typically requires abundant labeled training examples, which poses a practical barrier. In this work, we introduce FSFP, a training strategy that can effectively optimize protein language models under extreme data scarcity. By combining meta-transfer learning, learning to rank, and parameter-efficient fine-tuning, FSFP can significantly boost the performance of various protein language models using merely tens of labeled single-site mutants of the target protein. Experiments across 87 deep mutational scanning datasets demonstrate its superiority over both unsupervised and supervised approaches, revealing its potential to facilitate AI-guided protein design.
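
To make the recipe in the abstract concrete, the sketch below shows one plausible reading of its core ingredients: parameter-efficient fine-tuning (here, LoRA adapters) of a protein language model, trained with a pairwise ranking loss on tens of labeled single-site mutants. This is a minimal illustration rather than the authors' implementation; the ESM-2 checkpoint, LoRA hyperparameters, masked-marginal scoring scheme, and toy data are all assumptions, and FSFP's meta-transfer learning stage is omitted entirely.

```python
# Minimal sketch (not the authors' code): LoRA fine-tuning of a protein
# language model with a pairwise ranking loss on a few labeled mutants.
import torch
from torch.nn.functional import log_softmax, margin_ranking_loss
from transformers import AutoTokenizer, AutoModelForMaskedLM
from peft import LoraConfig, get_peft_model

model_name = "facebook/esm2_t12_35M_UR50D"  # assumed small ESM-2 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForMaskedLM.from_pretrained(model_name)

# Inject low-rank adapters so only a tiny fraction of weights is trained.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"])
model = get_peft_model(base, lora)
model.train()

def mutant_score(seq: str, pos: int, wt: str, mt: str) -> torch.Tensor:
    """Masked-marginal log-odds of mutant vs. wild-type residue at `pos`
    (a standard zero-shot scoring scheme, used here as the trainable score)."""
    toks = tokenizer(seq, return_tensors="pt")
    idx = pos + 1  # +1 for the BOS/CLS token prepended by the ESM tokenizer
    toks["input_ids"][0, idx] = tokenizer.mask_token_id
    logp = log_softmax(model(**toks).logits[0, idx], dim=-1)
    return (logp[tokenizer.convert_tokens_to_ids(mt)]
            - logp[tokenizer.convert_tokens_to_ids(wt)])

# Toy stand-ins for "tens of labeled single-site mutants":
# (0-based position, wild-type residue, mutant residue, measured fitness).
wild_type = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSR"
train = [(4, "Y", "F", 1.7), (10, "Q", "E", 0.4), (17, "H", "N", -0.2)]

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for epoch in range(20):
    # Pairwise learning to rank: push the model to order mutants as the
    # assay does, rather than to regress absolute fitness values. The
    # O(n^2) pair loop is affordable because n is only in the tens.
    for (p1, w1, m1, y1) in train:
        for (p2, w2, m2, y2) in train:
            if y1 <= y2:
                continue  # only train on pairs where mutant 1 is fitter
            s1 = mutant_score(wild_type, p1, w1, m1)
            s2 = mutant_score(wild_type, p2, w2, m2)
            loss = margin_ranking_loss(s1.unsqueeze(0), s2.unsqueeze(0),
                                       torch.ones(1), margin=0.1)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Ranking rather than regressing reflects the learning-to-rank component the abstract names: with only tens of labels, ordering mutants correctly is arguably a more robust training signal than fitting absolute fitness values.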
