Social Media Text Stance Detection Based on Large Language Models


Bibliographic Details
Authors: LI Juhao, SHI Lei, DING Meng, LEI Yongsheng, ZHAO Dongyue, CHEN Long
Format: Article
Language: Chinese (zho)
Published: Journal of Computer Engineering and Applications Beijing Co., Ltd., Science Press 2025-05-01
Series: Jisuanji kexue yu tansuo
Online Access:http://fcst.ceaj.org/fileup/1673-9418/PDF/2408074.pdf
Description
Summary: Stance detection aims to analyze the attitude expressed in a text towards a given target. Social media texts are often short and evolve rapidly, which poses challenges for traditional stance detection methods: semantic information is sparse and stance features are inadequately represented. Many existing approaches also overlook the role of sentiment information in stance detection. To address these issues, this paper proposes a stance detection method for social media texts based on large language models. A specially designed prompt template with explicit task instructions draws on the model’s pre-trained knowledge related to stance detection, mitigating the sparsity of semantic information. A sentiment analysis task is further integrated into the prompt instructions to direct the model’s attention to sentiment information, enriching the semantic cues available for stance detection and addressing the underutilization of sentiment signals. To strengthen the model’s ability to extract and represent stance features, a task-specific adapter is integrated into the model, improving the representation of stance features and the model’s overall performance on stance detection tasks. Finally, predictions from large language models with different architectures are combined through ensemble voting to improve the stability of the results. Comparative experiments on the SemEval-2016 Task 6A dataset demonstrate that the proposed method performs significantly better than existing benchmark methods.
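As a rough illustration of the prompt-plus-voting pipeline the abstract describes, the sketch below shows a stance-detection prompt that embeds an auxiliary sentiment instruction, and a majority-vote combiner over labels from several models. The template wording, function names, and label set are illustrative assumptions; the paper's actual prompt, adapter design, and model ensemble are not reproduced here.

```python
from collections import Counter

def build_prompt(text: str, target: str) -> str:
    """Compose a stance-detection prompt that first asks for sentiment,
    then uses it as a cue for the stance label.
    NOTE: illustrative wording only, not the paper's actual template."""
    return (
        "Task: stance detection.\n"
        f"Target: {target}\n"
        f"Text: {text}\n"
        "Step 1: Identify the sentiment of the text (positive/negative/neutral).\n"
        "Step 2: Using that sentiment as a cue, classify the stance toward the "
        "target as FAVOR, AGAINST, or NONE.\n"
        "Answer with the stance label only."
    )

def ensemble_vote(predictions: list[str]) -> str:
    """Majority vote over stance labels predicted by LLMs with
    different architectures (ties broken by first-seen label)."""
    return Counter(predictions).most_common(1)[0][0]
```

For example, `ensemble_vote(["FAVOR", "AGAINST", "FAVOR"])` returns `"FAVOR"`; the vote smooths out disagreement between individual models, which is the stability effect the abstract attributes to ensembling.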
ISSN:1673-9418