Research on Natural Language Misleading Content Detection Method Based on Attention Mechanism
| Main Author: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11088074/ |
| Summary: | The rapid evolution of digital communication and the corresponding surge in deceptive or misleading content have underscored the critical need for reliable and domain-adaptive detection technologies. Within the scope of the Frontiers in Computer Science, which emphasizes intelligent information systems, trustworthy AI, and content safety, this study introduces a robust and generalizable method for detecting misleading content across diverse linguistic and contextual domains. Traditional approaches, typically relying on static feature extraction or simple classifier pipelines, often fail under domain shifts and lack the semantic flexibility required for accurate real-world deployment. These models struggle particularly with ambiguous expressions, weak supervision, and multi-domain variability, resulting in reduced generalization and inconsistent performance. To address these challenges, we propose a novel dual-component system consisting of the Content-Driven Encoder (CoDE) and the Domain-Informed Detection Adapter (DiDA). CoDE employs a dynamic attention mechanism with hierarchical semantic encoding and domain-aware feature modulation, allowing adaptive representation learning across unimodal and multimodal inputs. Complementing this, DiDA injects domain priors via contrastive learning and retrieval, enhancing inference robustness even under severe domain shift. This modular architecture not only supports weakly supervised training but also integrates with large-scale content management pipelines. Comprehensive experiments across heterogeneous datasets, including social media, academic texts, and multimedia transcripts, demonstrate significant improvements over state-of-the-art baselines, particularly in transfer settings. This research contributes to the special issue's objectives by promoting semantically grounded, scalable, and explainable AI systems that enhance digital content safety and integrity. |
| ISSN: | 2169-3536 |
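The abstract describes CoDE as combining a dynamic attention mechanism with "domain-aware feature modulation". The paper's implementation is not available from this record, so the following is only a rough sketch of what such a layer could look like: scaled dot-product self-attention whose output is gated by a sigmoid of a domain embedding. All names, shapes, and the random stand-in weights are hypothetical illustrations, not the authors' method.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def domain_aware_attention(tokens, domain_vec, rng):
    """Hypothetical sketch: self-attention followed by a domain-conditioned
    gate, loosely mirroring the 'domain-aware feature modulation' named in
    the abstract. Shapes: tokens (n, d), domain_vec (d,)."""
    n, d = tokens.shape
    # Random projections stand in for learned query/key/value weights.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d), axis=-1)  # (n, n) attention weights
    ctx = attn @ v                                 # contextualised token features
    gate = 1.0 / (1.0 + np.exp(-domain_vec))       # per-dimension gate in (0, 1)
    return ctx * gate                              # domain-modulated output, (n, d)

rng = np.random.default_rng(0)
out = domain_aware_attention(rng.standard_normal((5, 8)),
                             rng.standard_normal(8), rng)
print(out.shape)  # (5, 8)
```

The gate here is one simple choice; a real system might instead use FiLM-style affine modulation or cross-attention over a domain memory.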
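The abstract also states that DiDA "injects domain priors via contrastive learning and retrieval". A common objective for that family of methods is an InfoNCE-style loss over in-batch negatives; the sketch below shows that standard loss only, as an assumption about the general technique, not the paper's specific formulation.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE-style contrastive loss (an assumed illustration of
    'contrastive learning', not the paper's exact objective). Row i of
    `positives` is the positive for row i of `anchors`; all other rows in
    the batch serve as negatives. Shapes: (n, d) each."""
    def l2norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p = l2norm(anchors), l2norm(positives)
    logits = a @ p.T / temperature  # (n, n) cosine-similarity logits
    # Cross-entropy with the diagonal (matching pairs) as the target class.
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 16))
loss = info_nce_loss(x, x + 0.01 * rng.standard_normal((4, 16)))
print(float(loss) >= 0.0)  # True: the loss is a non-negative cross-entropy
```

Lower loss means anchors sit closer to their own positives than to other samples; a domain prior could be injected by choosing positives retrieved from the same domain.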