A Reinforcement Learning-Based Generative Approach for Event Temporal Relation Extraction
Event temporal relation extraction is a crucial task in natural language processing, aimed at recognizing the temporal relations between event triggers in a text. Despite extensive efforts in this area, the existing methods face two main issues. Firstly, the previous models for event temporal relation extraction mainly rely on a classification framework, which fails to output the crucial contextual words necessary for predicting the temporal relations between two event triggers. Secondly, the prior research that formulated natural language processing tasks as text generation problems usually trained the generative models by maximum likelihood estimation. However, this approach encounters potential difficulties when the optimization objective is misaligned with the task performance metrics. To resolve these limitations, we introduce a reinforcement learning-based generative framework for event temporal relation extraction. Specifically, to output the important contextual words from the input sentence for temporal relation identification, we introduce dependency path generation as an auxiliary task to complement event temporal relation extraction. This task is solved alongside temporal relation prediction to enhance model performance. To achieve this, we reformulate the event temporal relation extraction task as a text generation problem, aiming to generate both event temporal relation labels and dependency path words based on the input sentence. To bridge the gap between the optimization objective and task performance metrics, we employ the REINFORCE algorithm to optimize our generative model, designing a novel reward function to simultaneously capture the accuracy of temporal prediction and the quality of generation. Lastly, to mitigate the high variance issue encountered when using the REINFORCE algorithm in multi-task generative model training, we propose a baseline policy gradient algorithm to improve the stability and efficiency of the training process. Experimental results on two widely used datasets, MATRES and TB-DENSE, show that our approach exhibits competitive performance.
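The abstract describes training a generative model with the REINFORCE algorithm, using a reward that combines temporal-prediction accuracy with generation quality and a baseline to reduce the variance of the policy gradient. The snippet below is a minimal, hypothetical sketch of that general training scheme on a toy sequence generator, not the authors' implementation: `ToyGenerator`, the tiny vocabulary, `reward()`, the 0.5 weighting, and the running-average baseline are all illustrative assumptions.

```python
# Illustrative sketch only: REINFORCE with a running-average baseline for a toy
# generator that emits a relation label followed by "dependency path" words.
import torch
import torch.nn as nn

VOCAB = ["<bos>", "BEFORE", "AFTER", "EQUAL", "VAGUE", "said", "after", "met", "<eos>"]

class ToyGenerator(nn.Module):
    """A tiny GRU decoder standing in for the generative model (hypothetical)."""
    def __init__(self, vocab_size, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def step(self, token, state):
        emb = self.embed(token).unsqueeze(1)        # (1, 1, H)
        output, state = self.gru(emb, state)
        logits = self.out(output.squeeze(1))        # (1, V)
        return logits, state

def sample_sequence(model, max_len=6):
    """Sample tokens and accumulate their log-probabilities for the policy gradient."""
    token = torch.tensor([VOCAB.index("<bos>")])
    state, log_probs, tokens = None, [], []
    for _ in range(max_len):
        logits, state = model.step(token, state)
        dist = torch.distributions.Categorical(logits=logits)
        token = dist.sample()
        log_probs.append(dist.log_prob(token))
        tokens.append(VOCAB[token.item()])
        if tokens[-1] == "<eos>":
            break
    return tokens, torch.stack(log_probs).sum()

def reward(tokens, gold_label, gold_path):
    """Hypothetical reward: label correctness plus overlap with the gold dependency path."""
    label_hit = 1.0 if tokens and tokens[0] == gold_label else 0.0
    path_overlap = len(set(tokens) & set(gold_path)) / max(len(gold_path), 1)
    return label_hit + 0.5 * path_overlap

model = ToyGenerator(len(VOCAB))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
baseline = 0.0  # running average of rewards, used to reduce gradient variance

for step in range(200):
    tokens, seq_log_prob = sample_sequence(model)
    R = reward(tokens, gold_label="BEFORE", gold_path=["said", "after", "met"])
    advantage = R - baseline
    baseline = 0.9 * baseline + 0.1 * R
    loss = -advantage * seq_log_prob               # REINFORCE with baseline
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the paper's actual setting, the generator would be a pretrained sequence-to-sequence model that emits the temporal relation label together with dependency path words, and the reward and baseline would follow the definitions given in the article.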
| Main Authors: | Zhonghua Wu, Wenzhong Yang, Meng Zhang, Fuyuan Wei, Xinfang Liu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-03-01 |
| Series: | Entropy |
| Subjects: | temporal relation extraction; generative models; multi-task learning; dependency path; reinforcement learning; policy gradient method |
| Online Access: | https://www.mdpi.com/1099-4300/27/3/284 |
| _version_ | 1849343008580829184 |
|---|---|
| author | Zhonghua Wu; Wenzhong Yang; Meng Zhang; Fuyuan Wei; Xinfang Liu |
| author_facet | Zhonghua Wu; Wenzhong Yang; Meng Zhang; Fuyuan Wei; Xinfang Liu |
| author_sort | Zhonghua Wu |
| collection | DOAJ |
| description | Event temporal relation extraction is a crucial task in natural language processing, aimed at recognizing the temporal relations between event triggers in a text. Despite extensive efforts in this area, the existing methods face two main issues. Firstly, the previous models for event temporal relation extraction mainly rely on a classification framework, which fails to output the crucial contextual words necessary for predicting the temporal relations between two event triggers. Secondly, the prior research that formulated natural language processing tasks as text generation problems usually trained the generative models by maximum likelihood estimation. However, this approach encounters potential difficulties when the optimization objective is misaligned with the task performance metrics. To resolve these limitations, we introduce a reinforcement learning-based generative framework for event temporal relation extraction. Specifically, to output the important contextual words from the input sentence for temporal relation identification, we introduce dependency path generation as an auxiliary task to complement event temporal relation extraction. This task is solved alongside temporal relation prediction to enhance model performance. To achieve this, we reformulate the event temporal relation extraction task as a text generation problem, aiming to generate both event temporal relation labels and dependency path words based on the input sentence. To bridge the gap between the optimization objective and task performance metrics, we employ the REINFORCE algorithm to optimize our generative model, designing a novel reward function to simultaneously capture the accuracy of temporal prediction and the quality of generation. Lastly, to mitigate the high variance issue encountered when using the REINFORCE algorithm in multi-task generative model training, we propose a baseline policy gradient algorithm to improve the stability and efficiency of the training process. Experimental results on two widely used datasets, MATRES and TB-DENSE, show that our approach exhibits competitive performance. |
| format | Article |
| id | doaj-art-764459e5bd1b48bc8325d56caf6a1c7d |
| institution | Kabale University |
| issn | 1099-4300 |
| language | English |
| publishDate | 2025-03-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Entropy |
| spelling | doaj-art-764459e5bd1b48bc8325d56caf6a1c7d; 2025-08-20T03:43:11Z; eng; MDPI AG; Entropy; 1099-4300; 2025-03-01; 27(3), 284; 10.3390/e27030284; A Reinforcement Learning-Based Generative Approach for Event Temporal Relation Extraction; Zhonghua Wu, Wenzhong Yang, Meng Zhang, Fuyuan Wei, Xinfang Liu (all: School of Computer Science and Technology, Xinjiang University, Urumqi 830017, China); Event temporal relation extraction is a crucial task in natural language processing, aimed at recognizing the temporal relations between event triggers in a text. Despite extensive efforts in this area, the existing methods face two main issues. Firstly, the previous models for event temporal relation extraction mainly rely on a classification framework, which fails to output the crucial contextual words necessary for predicting the temporal relations between two event triggers. Secondly, the prior research that formulated natural language processing tasks as text generation problems usually trained the generative models by maximum likelihood estimation. However, this approach encounters potential difficulties when the optimization objective is misaligned with the task performance metrics. To resolve these limitations, we introduce a reinforcement learning-based generative framework for event temporal relation extraction. Specifically, to output the important contextual words from the input sentence for temporal relation identification, we introduce dependency path generation as an auxiliary task to complement event temporal relation extraction. This task is solved alongside temporal relation prediction to enhance model performance. To achieve this, we reformulate the event temporal relation extraction task as a text generation problem, aiming to generate both event temporal relation labels and dependency path words based on the input sentence. To bridge the gap between the optimization objective and task performance metrics, we employ the REINFORCE algorithm to optimize our generative model, designing a novel reward function to simultaneously capture the accuracy of temporal prediction and the quality of generation. Lastly, to mitigate the high variance issue encountered when using the REINFORCE algorithm in multi-task generative model training, we propose a baseline policy gradient algorithm to improve the stability and efficiency of the training process. Experimental results on two widely used datasets, MATRES and TB-DENSE, show that our approach exhibits competitive performance.; https://www.mdpi.com/1099-4300/27/3/284; temporal relation extraction; generative models; multi-task learning; dependency path; reinforcement learning; policy gradient method |
| spellingShingle | Zhonghua Wu; Wenzhong Yang; Meng Zhang; Fuyuan Wei; Xinfang Liu; A Reinforcement Learning-Based Generative Approach for Event Temporal Relation Extraction; Entropy; temporal relation extraction; generative models; multi-task learning; dependency path; reinforcement learning; policy gradient method |
| title | A Reinforcement Learning-Based Generative Approach for Event Temporal Relation Extraction |
| title_full | A Reinforcement Learning-Based Generative Approach for Event Temporal Relation Extraction |
| title_fullStr | A Reinforcement Learning-Based Generative Approach for Event Temporal Relation Extraction |
| title_full_unstemmed | A Reinforcement Learning-Based Generative Approach for Event Temporal Relation Extraction |
| title_short | A Reinforcement Learning-Based Generative Approach for Event Temporal Relation Extraction |
| title_sort | reinforcement learning based generative approach for event temporal relation extraction |
| topic | temporal relation extraction; generative models; multi-task learning; dependency path; reinforcement learning; policy gradient method |
| url | https://www.mdpi.com/1099-4300/27/3/284 |
| work_keys_str_mv | AT zhonghuawu areinforcementlearningbasedgenerativeapproachforeventtemporalrelationextraction AT wenzhongyang areinforcementlearningbasedgenerativeapproachforeventtemporalrelationextraction AT mengzhang areinforcementlearningbasedgenerativeapproachforeventtemporalrelationextraction AT fuyuanwei areinforcementlearningbasedgenerativeapproachforeventtemporalrelationextraction AT xinfangliu areinforcementlearningbasedgenerativeapproachforeventtemporalrelationextraction AT zhonghuawu reinforcementlearningbasedgenerativeapproachforeventtemporalrelationextraction AT wenzhongyang reinforcementlearningbasedgenerativeapproachforeventtemporalrelationextraction AT mengzhang reinforcementlearningbasedgenerativeapproachforeventtemporalrelationextraction AT fuyuanwei reinforcementlearningbasedgenerativeapproachforeventtemporalrelationextraction AT xinfangliu reinforcementlearningbasedgenerativeapproachforeventtemporalrelationextraction |