Enhancing zero-shot stance detection via multi-task fine-tuning with debate data and knowledge augmentation
Abstract: In the real world, stance detection tasks often involve assessing the stance or attitude of a given text toward new, unseen targets, a task known as zero-shot stance detection. However, zero-shot stance detection often suffers from issues such as sparse data annotation and inherent task complexity, which can lead to lower performance. To address these challenges, we propose combining fine-tuning of Large Language Models (LLMs) with knowledge augmentation for zero-shot stance detection. Specifically, we leverage stance detection and related tasks from debate corpora to perform multi-task fine-tuning of LLMs. This approach aims to learn and transfer the capability of zero-shot stance detection and reasoning analysis from relevant data. Additionally, we enhance the model's semantic understanding of the given text and targets by retrieving relevant knowledge from external knowledge bases as context, alleviating the lack of relevant contextual knowledge. Compared to ChatGPT, our model achieves a significant improvement in the average F1 score, with an increase of 15.74% on SemEval 2016 Task 6A and 3.55% on the P-Stance dataset. Our model outperforms current state-of-the-art models on these two datasets, demonstrating the superiority of multi-task fine-tuning with debate data and knowledge augmentation.
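The abstract names two ingredients: multi-task fine-tuning on debate-corpus tasks, and knowledge augmentation, in which background knowledge retrieved from an external knowledge base is supplied as context alongside the text and target. The sketch below is purely illustrative and is not the authors' released code: `retrieve_knowledge`, the toy in-memory knowledge base, and the prompt wording are all assumptions standing in for a real retriever over an external knowledge base.

```python
# Minimal sketch (not the paper's implementation): build a knowledge-augmented
# zero-shot stance-detection prompt. The retriever here is a toy dictionary
# lookup; in the paper's setting it would query an external knowledge base.
from typing import List

STANCE_LABELS = ["FAVOR", "AGAINST", "NONE"]


def retrieve_knowledge(target: str, top_k: int = 2) -> List[str]:
    """Hypothetical retriever: return short background passages about the target."""
    toy_kb = {
        "feminist movement": [
            "The feminist movement advocates for equal political and social rights for women."
        ],
    }
    return toy_kb.get(target.lower(), [])[:top_k]


def build_prompt(text: str, target: str) -> str:
    """Compose a stance-detection prompt with retrieved knowledge as context."""
    knowledge = retrieve_knowledge(target)
    context = "\n".join(f"- {k}" for k in knowledge) or "- (no background knowledge found)"
    return (
        "Background knowledge:\n"
        f"{context}\n\n"
        f"Text: {text}\n"
        f"Target: {target}\n"
        f"Question: What is the stance of the text toward the target? "
        f"Answer with one of {STANCE_LABELS}."
    )


if __name__ == "__main__":
    # Example target drawn from SemEval 2016 Task 6A; the text is a made-up sample.
    print(build_prompt("Equal pay should not even be a debate anymore.", "Feminist Movement"))
```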
| Main Authors: | Qinlong Fan, Jicang Lu, Yepeng Sun, Qiankun Pi, Shouxin Shang |
|---|---|
| Author Affiliation: | State Key Laboratory of Mathematical Engineering and Advanced Computing |
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-01-01 |
| Series: | Complex & Intelligent Systems |
| ISSN: | 2199-4536, 2198-6053 |
| Subjects: | Zero-shot stance detection; LLMs; Debate corpus data; Multi-task fine-tuning; Knowledge augmentation |
| Online Access: | https://doi.org/10.1007/s40747-024-01767-8 |
| Collection: | DOAJ |