Exploring people's perceptions of LLM-generated advice

When searching and browsing the web, more and more of the information we encounter is generated or mediated through large language models (LLMs). This can be looking for a recipe, getting help on an essay, or looking for relationship advice. Yet, there is limited understanding of how individuals perceive advice provided by these LLMs. In this paper, we explore people's perception of LLM-generated advice, and what role diverse user characteristics (i.e., personality and technology readiness) play in shaping their perception. Further, as LLM-generated advice can be difficult to distinguish from human advice, we assess the perceived creepiness of such advice. To investigate this, we run an exploratory study (N = 91), where participants rate advice in different styles (generated by GPT-3.5 Turbo). Notably, our findings suggest that individuals who identify as more agreeable tend to like the advice more and find it more useful. Further, individuals with higher technological insecurity are more likely to follow and find the advice more useful, and deem it more likely that a friend could have given the advice. Lastly, we see that advice given in a ‘skeptical’ style was rated most unpredictable, and advice given in a ‘whimsical’ style was rated least malicious—indicating that LLM advice styles influence user perceptions. Our results also provide an overview of people's considerations on likelihood, receptiveness, and what advice they are likely to seek from these digital assistants. Based on our results, we provide design takeaways for LLM-generated advice and outline future research directions to further inform the design of LLM-generated advice for support applications targeting people with diverse expectations and needs.


Bibliographic Details
Main Authors: Joel Wester, Sander de Jong, Henning Pohl, Niels van Berkel
Format: Article
Language:English
Published: Elsevier 2024-08-01
Series:Computers in Human Behavior: Artificial Humans
Subjects: Large language models; LLM; Generative AI; Advice; User characteristics
Online Access:http://www.sciencedirect.com/science/article/pii/S294988212400032X
ISSN: 2949-8821
Record ID: doaj-art-f486e1fe70e94333bbbd8c39153043fc
Collection: DOAJ
Institution: Kabale University
Author Affiliations: Aalborg University, Aalborg, Denmark (all four authors); Joel Wester is the corresponding author.