Cue prompt adapting model for relation extraction

Bibliographic Details
Main Authors: Kai Wang, Yanping Chen, Kunjian Wen, Chao Wei, Bo Dong, Qinghua Zheng, Yongbin Qin
Format: Article
Language: English
Published: Taylor & Francis Group, 2023-12-01
Series: Connection Science
Subjects:
Online Access: http://dx.doi.org/10.1080/09540091.2022.2161478
Description
Summary: Prompt-tuning models output relation types as verbalised-type tokens instead of predicting confidence scores for each relation type. However, existing prompt-tuning models cannot perceive the named entities of a relation instance because they are normally implemented on raw input, which is too weak to encode the contextual features and semantic dependencies of a relation instance. This study proposes a cue prompt adapting (CPA) model for relation extraction (RE) that encodes contextual features and semantic dependencies by implanting task-relevant cues in a sentence. Additionally, a new transformer architecture is proposed to adapt pre-trained language models (PLMs) to perceive named entities in a relation instance. Finally, in the decoding process, a goal-oriented prompt template is designed to take advantage of the latent semantic features of a PLM. The proposed model is evaluated on three public corpora: ACE, ReTACRED, and SemEval. It achieves a clear performance improvement, outperforming existing state-of-the-art models. Experiments indicate that the proposed model is effective at learning task-specific contextual features and semantic dependencies in a relation instance.
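To make the abstract's idea concrete, the sketch below shows one plausible way to implant entity cues in a sentence and append a goal-oriented prompt template whose [MASK] slot a PLM would fill with a verbalised relation-type token. The cue markers ([E1]/[E2]) and the template wording are illustrative assumptions, not the paper's actual implementation.

```python
def insert_entity_cues(sentence: str, head: str, tail: str) -> str:
    """Wrap the head and tail entity mentions in cue tokens so the
    language model can perceive them (hypothetical markers, not CPA's own)."""
    marked = sentence.replace(head, f"[E1] {head} [/E1]")
    marked = marked.replace(tail, f"[E2] {tail} [/E2]")
    return marked

def build_cue_prompt(sentence: str, head: str, tail: str) -> str:
    """Append a goal-oriented template; a PLM would fill [MASK] with a
    verbalised relation-type token (e.g. 'founded')."""
    cued = insert_entity_cues(sentence, head, tail)
    return f"{cued} The relation between {head} and {tail} is [MASK]."

prompt = build_cue_prompt("Steve Jobs founded Apple in 1976.",
                          "Steve Jobs", "Apple")
# The resulting string carries both entity cues and the decoding template.
```

In a full system, this string would be tokenised and passed to a masked language model, and the token predicted at the [MASK] position would be mapped back to a relation label by a verbaliser.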
ISSN: 0954-0091
1360-0494