Post-translational modification prediction via prompt-based fine-tuning of a GPT-2 model

  • Palistha Shrestha
  • Jeevan Kandel
  • Hilal Tayara*
  • Kil To Chong*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Post-translational modifications (PTMs) are pivotal in modulating protein functions and influencing cellular processes like signaling, localization, and degradation. The complexity of these biological interactions necessitates efficient predictive methodologies. In this work, we introduce PTMGPT2, an interpretable protein language model that utilizes prompt-based fine-tuning to improve its accuracy in predicting PTMs. Drawing inspiration from recent advancements in GPT-based architectures, PTMGPT2 adopts unsupervised learning to identify PTMs. It utilizes a custom prompt to guide the model through the subtle linguistic patterns encoded in amino acid sequences, generating tokens indicative of PTM sites. To provide interpretability, we visualize attention profiles from the model’s final decoder layer to elucidate sequence motifs essential for molecular recognition and analyze the effects of mutations at or near PTM sites to offer deeper insights into protein functionality. Comparative assessments reveal that PTMGPT2 outperforms existing methods across 19 PTM types, underscoring its potential in identifying disease associations and drug targets.
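The two ideas in the abstract, generating a label token from a prompted GPT-2 decoder and reading out final-layer attention for interpretability, can be sketched with the Hugging Face transformers API. Everything below is an illustrative assumption rather than the authors' released PTMGPT2 code: the prompt template, the 21-residue sequence window, the label-token readout, and the base `gpt2` checkpoint are placeholders for the paper's own fine-tuned protein model and prompt design.

```python
# Minimal sketch (assumptions noted above), not the authors' implementation:
# prompt a GPT-2 causal LM for a PTM-site label token and inspect
# final-decoder-layer attention over the prompt.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_name = "gpt2"  # placeholder; PTMGPT2 uses its own fine-tuned checkpoint
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name, output_attentions=True)
model.eval()

# Hypothetical prompt: a 21-residue window centered on the candidate site,
# followed by a cue asking the model to emit a label token.
window = "MKRSSPQLRAGSTKLQPSSVA"
prompt = f"SEQUENCE: {window} LABEL:"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# After fine-tuning, the most likely next token would serve as the
# modified / unmodified label for this window.
next_id = outputs.logits[0, -1].argmax().item()
print("predicted label token:", tokenizer.decode([next_id]))

# Attention profile from the final decoder layer, averaged over heads:
# how strongly the label position attends to each prompt token.
last_layer_attn = outputs.attentions[-1]        # (batch, heads, seq, seq)
profile = last_layer_attn[0].mean(dim=0)[-1]    # final position over the prompt
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, w in zip(tokens, profile.tolist()):
    print(f"{tok:>12s}  {w:.3f}")
```

In the paper this kind of per-residue attention profile is what is visualized to surface sequence motifs around PTM sites; the base `gpt2` checkpoint here would of course need the fine-tuning step before the label token is meaningful.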

Original language: English
Article number: 6699
Journal: Nature Communications
Volume: 15
Issue number: 1
DOIs
State: Published - 2024.12

Quacquarelli Symonds (QS) Subject Topics

  • Chemistry
  • Physics & Astronomy
  • Biological Sciences
