Behind the mask: Random and selective masking in transformer models applied to specialized social science texts.

Transformer models such as BERT and RoBERTa are increasingly popular in the social sciences to generate data through supervised text classification. These models can be further trained through Masked Language Modeling (MLM) to increase performance in specialized applications. MLM uses a default mask...
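The abstract refers to further training with MLM's default masking scheme, in which a fixed fraction of tokens (15% by default in BERT/RoBERTa) is selected at random and hidden for prediction. Below is a minimal sketch, not the authors' code, of how such domain-adaptive MLM training is commonly set up with Hugging Face's transformers library; the corpus file name "domain_corpus.txt" and the training settings are placeholder assumptions.

from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Load a pretrained RoBERTa checkpoint with its MLM head.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Placeholder corpus of specialized social-science texts, one document per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Default random masking: each token has a 15% chance of being masked for prediction.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roberta-mlm-domain", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()

A selective-masking variant would replace the random collator with one that targets domain-relevant tokens; the article compares these two approaches.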

Bibliographic Details
Main Authors: Joan C Timoneda, Sebastián Vallejo Vera
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2025-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0318421