Supervised fine-tuning of pre-trained antibody language models improves antigen specificity prediction.

Antibodies play a crucial role in the adaptive immune response, with their specificity to antigens being a fundamental determinant of immune function. Accurate prediction of antibody-antigen specificity is vital for understanding immune responses, guiding vaccine design, and developing antibody-based therapeutics. In this study, we present a method of supervised fine-tuning for antibody language models, which improves on pre-trained antibody language model embeddings in binding specificity prediction to SARS-CoV-2 spike protein and influenza hemagglutinin. We perform supervised fine-tuning on four pre-trained antibody language models to predict specificity to these antigens and demonstrate that fine-tuned language model classifiers exhibit enhanced predictive accuracy compared to classifiers trained on pre-trained model embeddings. Additionally, we investigate the change of model attention activations after supervised fine-tuning to gain insights into the molecular basis of antigen recognition by antibodies. Furthermore, we apply the supervised fine-tuned models to BCR repertoire data related to influenza and SARS-CoV-2 vaccination, demonstrating their ability to capture changes in repertoire following vaccination. Overall, our study highlights the effect of supervised fine-tuning on pre-trained antibody language models as valuable tools to improve antigen specificity prediction.

Bibliographic Details
Main Authors: Meng Wang, Jonathan Patsenker, Henry Li, Yuval Kluger, Steven H Kleinstein
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2025-03-01
Series: PLoS Computational Biology, Vol. 21(3), e1012153
ISSN: 1553-734X, 1553-7358
Online Access: https://doi.org/10.1371/journal.pcbi.1012153
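
The abstract contrasts two modeling strategies: training a classifier on frozen embeddings from a pre-trained antibody language model versus supervised fine-tuning of the full encoder together with a classification head. The sketch below illustrates that distinction in general terms only; it is not the authors' code, and the checkpoint name, mean-pooling scheme, and hyperparameters are placeholders chosen for illustration (the record does not name the four antibody language models used in the study).

# Minimal sketch (assumptions noted in comments), not the published implementation.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "some/antibody-language-model"  # hypothetical placeholder checkpoint id

class SpecificityClassifier(nn.Module):
    """Transformer encoder with a binary head for antigen-binding specificity."""
    def __init__(self, model_name: str, freeze_encoder: bool = False):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        if freeze_encoder:
            # Frozen-embedding baseline: only the head is trained.
            for p in self.encoder.parameters():
                p.requires_grad = False
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(hidden, 1)  # binder vs. non-binder logit

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Mean-pool token embeddings over the sequence, excluding padded positions
        # (one of several reasonable pooling choices; assumed here for illustration).
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
        return self.head(pooled).squeeze(-1)

# Supervised fine-tuning: all encoder weights receive gradients.
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = SpecificityClassifier(MODEL_NAME, freeze_encoder=False)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(sequences, labels):
    """One gradient step on a batch of antibody sequences with binary binding labels."""
    batch = tokenizer(sequences, padding=True, return_tensors="pt")
    logits = model(batch["input_ids"], batch["attention_mask"])
    loss = loss_fn(logits, torch.tensor(labels, dtype=torch.float))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

Constructing the same module with freeze_encoder=True reproduces the frozen-embedding baseline the abstract compares against, so the two strategies differ only in whether gradients flow into the pre-trained encoder.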