A Centrality-Weighted Bidirectional Encoder Representation from Transformers Model for Enhanced Sequence Labeling in Key Phrase Extraction from Scientific Texts

Deep learning approaches utilizing Bidirectional Encoder Representation from Transformers (BERT) and advanced fine-tuning techniques have achieved state-of-the-art accuracy in term extraction from texts. However, BERT has limitations: it primarily captures the sema...

Bibliographic Details
Main Authors: Tsitsi Zengeya, Jean Vincent Fonou Dombeu, Mandlenkosi Gwetu
Format: Article
Language: English
Published: MDPI AG, 2024-12-01
Series: Big Data and Cognitive Computing
Online Access: https://www.mdpi.com/2504-2289/8/12/182