DEANE: Context-Aware Dual-Craft Graph Contrastive Learning for Enhanced Extractive Question Answering

Abstract: Extractive Question Answering (EQA) involves extracting accurate answer spans from a background passage in response to a given question. In recent years, there has been significant interest in leveraging Pre-trained Language Models (PLMs) and Graph Convolutional Networks (GCNs) to address EQA tasks. PLMs usually function as context encoders, while GCNs are employed to capture latent semantic relationships between answer spans and the passage/question. This combined approach has shown promise, yielding notable outcomes in EQA performance. However, current graph-based methods encounter a challenge where the graph structure is predefined without sufficient justification. This graph ambiguity can potentially lead to error propagation within the subsequent graph encoder. To alleviate this issue, this paper introduces Dual-craft basEd grAph coNtrastive lEarning (DEANE) for EQA, where the graph structure and node features are context-aware and data-driven. Initially, the passage and question are represented as a connected graph. Subsequently, an adaptive augmentation strategy is introduced to generate two distinct views of the original graph via reparameterization networks, where important graph edges and node features are prioritized. Finally, a multi-view contrastive loss is leveraged to learn latent representations from the augmented graphs. Empirically, the method outperforms existing graph-based approaches on six well-established EQA benchmarks. Ablation studies further demonstrate the effectiveness of the proposed approach in mitigating structural ambiguity, enhancing encoder flexibility, and improving model performance through multi-view data integration.
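The pipeline the abstract describes (build a graph over the input, generate two stochastic views via augmentation, then optimize a multi-view contrastive loss) can be illustrated with a minimal NumPy sketch. This is not the authors' DEANE implementation: it uses uniform random edge dropping where DEANE learns context-aware, importance-weighted augmentations via reparameterization networks, and a single mean-aggregation GCN step in place of a trained encoder. All names (`drop_edges`, `gcn_layer`, `info_nce`) and the toy ring graph are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(adj, keep_prob):
    """One augmented 'view': keep each undirected edge with probability
    keep_prob. (DEANE instead learns per-edge keep probabilities; uniform
    dropping is a simplification.)"""
    mask = np.triu(rng.random(adj.shape) < keep_prob, k=1)
    mask = mask | mask.T                      # keep the graph symmetric
    return adj * mask

def gcn_layer(adj, feats):
    """One mean-aggregation GCN-style propagation step with self-loops."""
    a = adj + np.eye(adj.shape[0])
    deg = a.sum(axis=1, keepdims=True)        # >= 1 thanks to self-loops
    return (a / deg) @ feats

def info_nce(z1, z2, tau=0.5):
    """Multi-view InfoNCE loss: node i in view 1 should match node i in
    view 2, with all other nodes acting as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    sim = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy graph: 6 nodes (stand-ins for passage/question tokens) on a ring.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
feats = rng.standard_normal((n, 4))

# Two stochastic views -> shared encoder -> contrastive loss.
z1 = gcn_layer(drop_edges(adj, 0.8), feats)
z2 = gcn_layer(drop_edges(adj, 0.8), feats)
loss = info_nce(z1, z2)
print(round(float(loss), 4))
```

In a full system, gradients of this loss would train the encoder (and, in DEANE, the augmentation networks) so that representations become invariant to the perturbations while staying discriminative across nodes.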

Bibliographic Details
Main Authors: Dongfen Ye, Jianqiang Zhou, Gang Huang (all: College of Electrical and Information Engineering, Quzhou University)
Format: Article
Language: English
Published: Springer 2025-04-01
Series: International Journal of Computational Intelligence Systems (ISSN 1875-6883)
Subjects: Extractive question answering; Adaptive augmentation; Multi-view contrastive learning; Pre-trained language model; Graph convolutional network
Online Access: https://doi.org/10.1007/s44196-025-00801-y