MaskDGNets: Masked-attention guided dynamic graph aggregation network for event extraction.


Bibliographic Details
Main Authors: Guangwei Zhang, Fei Xie, Lei Yu
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2024-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0306673
author Guangwei Zhang
Fei Xie
Lei Yu
author_sort Guangwei Zhang
collection DOAJ
description Traditional deep learning methods for event extraction ignore the correlation between word features and sequence information, so they cannot fully explore the hidden associations among events or between events and their primary attributes. To address these problems, we developed a new event extraction framework, the masked-attention guided dynamic graph aggregation network (MaskDGNets). On the one hand, to obtain effective word and sequence representations, an interactive and complementary relationship is established between word vectors and character vectors. At the same time, a squeeze layer is introduced into the bidirectional independent recurrent unit to model the sentence sequence in both the forward and backward directions, preserving local spatial details as much as possible while establishing effective long-term dependencies and rich global context representations. On the other hand, the designed masked attention mechanism effectively balances word vector features and sequence semantics and refines these features. The designed dynamic graph aggregation module establishes effective connections among events and between events and their essential attributes, strengthens their interactivity and association, and transfers and aggregates features over neighboring graph nodes through a dynamic strategy to improve event extraction performance. We also designed a reconstructed weighted loss function to supervise and adjust each module individually and to ensure optimal feature representations. Finally, the proposed MaskDGNets framework is evaluated on two benchmark datasets, DuEE and CCKS2020, where it demonstrates robustness and strong event extraction performance, with F1 scores of 81.443% and 87.382%, respectively.
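The record does not include the authors' code, so the following is only a minimal sketch of the two ideas named in the abstract: a masked attention step that fuses token-level features with sequence-level semantics, and a dynamic graph aggregation step that propagates features between event/attribute nodes over a similarity-weighted neighborhood. The class names, shapes, gated fusion, and similarity-based adjacency are illustrative assumptions, not the paper's implementation; PyTorch is assumed as the framework.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedAttentionFusion(nn.Module):
    """Attend over sequence states with a padding mask, then gate the result
    against the word-level features (an assumed fusion strategy)."""

    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, word_feats, seq_feats, pad_mask):
        # pad_mask: (batch, seq_len), True where a position is padding.
        attended, _ = self.attn(word_feats, seq_feats, seq_feats,
                                key_padding_mask=pad_mask)
        g = torch.sigmoid(self.gate(torch.cat([word_feats, attended], dim=-1)))
        return g * word_feats + (1.0 - g) * attended


class DynamicGraphAggregation(nn.Module):
    """One round of aggregation over an adjacency built dynamically from node
    similarity (again, an assumed instantiation of the idea)."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, node_feats):
        # node_feats: (batch, num_nodes, dim), e.g. trigger/argument spans.
        sim = torch.matmul(node_feats, node_feats.transpose(1, 2))
        adj = F.softmax(sim / node_feats.size(-1) ** 0.5, dim=-1)
        return F.relu(self.proj(torch.matmul(adj, node_feats)) + node_feats)


if __name__ == "__main__":
    B, T, N, D = 2, 16, 5, 64
    fuse, agg = MaskedAttentionFusion(D), DynamicGraphAggregation(D)
    words, seq = torch.randn(B, T, D), torch.randn(B, T, D)
    mask = torch.zeros(B, T, dtype=torch.bool)  # no padding in this toy batch
    fused = fuse(words, seq, mask)        # (2, 16, 64)
    nodes = agg(torch.randn(B, N, D))     # (2, 5, 64)
    print(fused.shape, nodes.shape)

The residual connection in the aggregation step is a common way to keep each node's identity while mixing in neighborhood information; whether MaskDGNets does this is not stated in the record.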
format Article
id doaj-art-8c339e66f7694d32ae09f07bd233dbb9
institution DOAJ
issn 1932-6203
language English
publishDate 2024-01-01
publisher Public Library of Science (PLoS)
record_format Article
series PLoS ONE
spelling PLoS ONE, Vol 19, Iss 11, e0306673 (2024-01-01), doi:10.1371/journal.pone.0306673
title MaskDGNets: Masked-attention guided dynamic graph aggregation network for event extraction.
url https://doi.org/10.1371/journal.pone.0306673