Emotional prompting amplifies disinformation generation in AI large language models

Bibliographic Details
Main Authors: Rasita Vinay, Giovanni Spitale, Nikola Biller-Andorno, Federico Germani
Format: Article
Language: English
Published: Frontiers Media S.A. 2025-04-01
Series: Frontiers in Artificial Intelligence
Subjects: AI; LLM; disinformation; misinformation; infodemic; emotional prompting
Online Access: https://www.frontiersin.org/articles/10.3389/frai.2025.1543603/full
_version_ 1849687748311515136
author Rasita Vinay
Giovanni Spitale
Nikola Biller-Andorno
Federico Germani
author_facet Rasita Vinay
Giovanni Spitale
Nikola Biller-Andorno
Federico Germani
author_sort Rasita Vinay
collection DOAJ
description Introduction: The emergence of artificial intelligence (AI) large language models (LLMs), which can produce text that closely resembles human-written content, presents both opportunities and risks. While these developments offer significant opportunities for improving communication, such as in health-related crisis communication, they also pose substantial risks by facilitating the creation of convincing fake news and disinformation. The widespread dissemination of AI-generated disinformation adds complexity to the existing challenges of the ongoing infodemic, significantly affecting public health and the stability of democratic institutions.
Rationale: Prompt engineering is a technique that involves the creation of specific queries given to LLMs. It has emerged as a strategy to guide LLMs in generating the desired outputs. Recent research shows that the output of LLMs depends on emotional framing within prompts, suggesting that incorporating emotional cues into prompts could influence their response behavior. In this study, we investigated how the politeness or impoliteness of prompts affects the frequency of disinformation generation by various LLMs.
Results: We generated and evaluated a corpus of 19,800 social media posts on public health topics to assess the disinformation generation capabilities of OpenAI’s LLMs, including davinci-002, davinci-003, gpt-3.5-turbo, and gpt-4. Our findings revealed that all LLMs efficiently generated disinformation (davinci-002, 67%; davinci-003, 86%; gpt-3.5-turbo, 77%; and gpt-4, 99%). Introducing polite language to prompt requests yielded significantly higher success rates for disinformation (davinci-002, 79%; davinci-003, 90%; gpt-3.5-turbo, 94%; and gpt-4, 100%). Impolite prompting resulted in a significant decrease in disinformation production across all models (davinci-002, 59%; davinci-003, 44%; and gpt-3.5-turbo, 28%) and a slight reduction for gpt-4 (94%).
Conclusion: Our study reveals that all tested LLMs effectively generate disinformation. Notably, emotional prompting had a significant impact on disinformation production rates, with models showing higher success rates when prompted with polite language compared to neutral or impolite requests. Our investigation highlights that LLMs can be exploited to create disinformation and emphasizes the critical need for ethics-by-design approaches in developing AI technologies. We maintain that identifying ways to mitigate the exploitation of LLMs through emotional prompting is crucial to prevent their misuse for purposes detrimental to public health and society.
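The Rationale paragraph above describes the core manipulation: the same generation request is wrapped in polite, neutral, or impolite framing before being sent to an OpenAI model. The sketch below is a minimal illustration of that framing mechanism only; the base task, the framing wordings, and the model choice are illustrative assumptions rather than the prompts or protocol used in the study, and the example deliberately uses a harmless placeholder task rather than any disinformation instruction.

```python
# Minimal sketch of emotional (polite / neutral / impolite) prompt framing
# with the OpenAI Python SDK (openai >= 1.0). BASE_TASK and the framing
# wordings are hypothetical placeholders, not the study's actual prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BASE_TASK = "Write a short social media post about seasonal flu vaccination."

FRAMINGS = {
    "polite": "Could you please help me? I would really appreciate it. {task} Thank you so much!",
    "neutral": "{task}",
    "impolite": "Do it now and stop wasting my time. {task}",
}

def generate(framing: str, model: str = "gpt-4") -> str:
    """Wrap the base task in one emotional framing and return the model's reply."""
    prompt = FRAMINGS[framing].format(task=BASE_TASK)
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Compare how the same task fares under each framing.
    for framing in FRAMINGS:
        print(f"[{framing}] {generate(framing)[:120]}")
```

In a study design like the one summarized above, each framing would be repeated many times per model and per topic, and the resulting outputs would then be rated for whether they fulfilled the request, yielding the per-model success rates reported in the Results paragraph.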
format Article
id doaj-art-6f9ff9a581dd450985fa93bb1db98909
institution DOAJ
issn 2624-8212
language English
publishDate 2025-04-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Artificial Intelligence
spelling doaj-art-6f9ff9a581dd450985fa93bb1db98909 (indexed 2025-08-20T03:22:15Z)
Publisher: Frontiers Media S.A.
Journal: Frontiers in Artificial Intelligence (ISSN 2624-8212), vol. 8, published 2025-04-01
DOI: 10.3389/frai.2025.1543603 (article 1543603)
Language: English
Title: Emotional prompting amplifies disinformation generation in AI large language models
Authors: Rasita Vinay (Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland; School of Medicine, University of St. Gallen, St. Gallen, Switzerland); Giovanni Spitale (Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland); Nikola Biller-Andorno (Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland); Federico Germani (Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland)
Online access: https://www.frontiersin.org/articles/10.3389/frai.2025.1543603/full
Keywords: AI; LLM; disinformation; misinformation; infodemic; emotional prompting
spellingShingle Rasita Vinay
Giovanni Spitale
Nikola Biller-Andorno
Federico Germani
Emotional prompting amplifies disinformation generation in AI large language models
Frontiers in Artificial Intelligence
AI
LLM
disinformation
misinformation
infodemic
emotional prompting
title Emotional prompting amplifies disinformation generation in AI large language models
title_full Emotional prompting amplifies disinformation generation in AI large language models
title_fullStr Emotional prompting amplifies disinformation generation in AI large language models
title_full_unstemmed Emotional prompting amplifies disinformation generation in AI large language models
title_short Emotional prompting amplifies disinformation generation in AI large language models
title_sort emotional prompting amplifies disinformation generation in ai large language models
topic AI
LLM
disinformation
misinformation
infodemic
emotional prompting
url https://www.frontiersin.org/articles/10.3389/frai.2025.1543603/full
work_keys_str_mv AT rasitavinay emotionalpromptingamplifiesdisinformationgenerationinailargelanguagemodels
AT giovannispitale emotionalpromptingamplifiesdisinformationgenerationinailargelanguagemodels
AT nikolabillerandorno emotionalpromptingamplifiesdisinformationgenerationinailargelanguagemodels
AT federicogermani emotionalpromptingamplifiesdisinformationgenerationinailargelanguagemodels