Countering AI-generated misinformation with pre-emptive source discreditation and debunking
Despite widespread concerns over AI-generated misinformation, its impact on people’s reasoning and the effectiveness of countermeasures remain unclear. This study examined whether a pre-emptive, source-focused inoculation—designed to lower trust in AI-generated information—could reduce its influence...
| Main Authors: | Emily R. Spearing, Constantina I. Gile, Amy L. Fogwill, Toby Prike, Briony Swire-Thompson, Stephan Lewandowsky, Ullrich K. H. Ecker |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | The Royal Society, 2025-06-01 |
| Series: | Royal Society Open Science |
| Subjects: | |
| Online Access: | https://royalsocietypublishing.org/doi/10.1098/rsos.242148 |
Similar Items
- Specific media literacy tips improve AI-generated visual misinformation discernment
  by: Sean Guo, et al.
  Published: (2025-07-01)
- Debunking misinformation about abortion-related maternal mortality in Africa
  by: Lynn M. Morgan, et al.
  Published: (2025-12-01)
- Augmenting Multimodal Content Representation with Transformers for Misinformation Detection
  by: Jenq-Haur Wang, et al.
  Published: (2024-10-01)
- The media literacy dilemma: can ChatGPT facilitate the discernment of online health misinformation?
  by: Wei Peng, et al.
  Published: (2024-11-01)
- Navigating online health information: empowerment vs. misinformation
  by: Alan Silburn
  Published: (2025-07-01)