Leveraging Protein Language Model Embeddings for Catalytic Turnover Prediction of Adenylate Kinase Orthologs in a Low-Data Regime
Duncan F. Muir, Parker Grosjean, Margaux M. Pinney, Michael J. Keiser
Code: github.com/keiserlab/face-plm (official implementation, PyTorch)
Abstract
Accurate prediction of enzymatic activity from amino acid sequences could drastically accelerate enzyme engineering for applications such as bioremediation and therapeutics development. In recent years, Protein Language Model (PLM) embeddings have been increasingly used as inputs to sequence-to-function models. Here, we use consistently collected catalytic turnover observations for 175 orthologs of the enzyme Adenylate Kinase (ADK) as a test case to assess the use of PLMs and their embeddings in enzyme kinetic prediction tasks. We show that nonlinear probes trained on PLM embeddings outperform a one-hot-encoding baseline as well as the specialized k_cat (catalytic turnover number) prediction models DLKcat and CatPred. We also compared fixed and learnable aggregation of per-residue PLM embeddings for k_cat prediction and found that transformer-based learnable aggregation is generally the most performant. Additionally, we found that ESMC 600M embeddings marginally outperform other PLM embeddings for k_cat prediction. We explored Low-Rank Adaptation (LoRA) masked-language-model fine-tuning and direct fine-tuning for sequence-to-k_cat mapping, finding that the former performed no differently from, and the latter worse than, zero-shot embeddings. Finally, we investigated the distinct hidden representations across PLM encoder layers and found that earlier-layer embeddings perform comparably to or worse than the final-layer embeddings. Overall, this study assesses the state of the field for leveraging PLMs for sequence-to-k_cat prediction on a set of diverse ADK orthologs.
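To make the two aggregation strategies compared in the abstract concrete, below is a minimal PyTorch sketch (not the authors' code) contrasting fixed mean-pooling of per-residue PLM embeddings with a learnable transformer-based aggregator, each followed by a small nonlinear probe regressing k_cat. All dimensions and hyperparameters here are illustrative assumptions: the embedding width of 1152 corresponds to ESMC 600M's hidden size, and the random tensors stand in for real per-residue embeddings that would come from a PLM encoder.

```python
import torch
import torch.nn as nn

EMB_DIM = 1152  # assumed hidden size of ESMC 600M; adjust for other PLMs


class MeanPoolProbe(nn.Module):
    """Fixed aggregation: average residue embeddings, then a nonlinear MLP probe."""

    def __init__(self, emb_dim=EMB_DIM, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x, pad_mask):  # x: (B, L, D); pad_mask: (B, L), True at padding
        keep = (~pad_mask).unsqueeze(-1).float()
        pooled = (x * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1.0)
        return self.mlp(pooled).squeeze(-1)


class TransformerAggProbe(nn.Module):
    """Learnable aggregation: a shallow transformer attends over residue embeddings,
    and a [CLS]-style token summarizes the sequence for the regression head."""

    def __init__(self, emb_dim=EMB_DIM, heads=8, layers=2, hidden=256):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, emb_dim))
        enc_layer = nn.TransformerEncoderLayer(
            d_model=emb_dim, nhead=heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x, pad_mask):
        batch = x.size(0)
        # Prepend the learnable CLS token and keep it unmasked.
        x = torch.cat([self.cls.expand(batch, -1, -1), x], dim=1)
        pad_mask = torch.cat(
            [torch.zeros(batch, 1, dtype=torch.bool, device=pad_mask.device), pad_mask],
            dim=1,
        )
        h = self.encoder(x, src_key_padding_mask=pad_mask)
        return self.head(h[:, 0]).squeeze(-1)


# Toy usage: random stand-ins for per-residue embeddings of 4 ADK orthologs.
emb = torch.randn(4, 220, EMB_DIM)            # (batch, seq_len, emb_dim)
pad = torch.zeros(4, 220, dtype=torch.bool)   # no padding in this toy batch
target = torch.randn(4)                       # synthetic log-scale k_cat labels

for model in (MeanPoolProbe(), TransformerAggProbe()):
    pred = model(emb, pad)
    loss = nn.functional.mse_loss(pred, target)
    loss.backward()
```

In this framing, the mean-pooled probe treats aggregation as fixed and only learns the sequence-to-k_cat head, whereas the transformer aggregator additionally learns how to weight residues, which is the kind of learnable aggregation the abstract reports as generally most performant.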