Extracting Implicit User Preferences in Conversational Recommender Systems Using Large Language Models

Bibliographic Details
Main Authors: Woo-Seok Kim, Seongho Lim, Gun-Woo Kim, Sang-Min Choi
Format: Article
Language:English
Published: MDPI AG 2025-01-01
Series:Mathematics
Subjects: conversational recommender systems; large language models; implicit user preference; classification
Online Access:https://www.mdpi.com/2227-7390/13/2/221
collection DOAJ
description Conversational recommender systems (CRSs) have garnered increasing attention for their ability to provide personalized recommendations through natural language interactions. Although large language models (LLMs) have shown potential in recommendation systems owing to their superior language understanding and reasoning capabilities, extracting and utilizing implicit user preferences from conversations remains a formidable challenge. This paper proposes a method that leverages LLMs to extract implicit preferences and explicitly incorporate them into the recommendation process. Initially, LLMs identify implicit user preferences from conversations, which are then refined into fine-grained numerical values using a BERT-based multi-label classifier to enhance recommendation precision. The proposed approach is validated through experiments on three comprehensive datasets: the Reddit Movie Dataset (8413 dialogues), Inspired (825 dialogues), and ReDial (2311 dialogues). Results show that our approach considerably outperforms traditional CRS methods, achieving a 23.3% improvement in Recall@20 on the ReDial dataset and a 7.2% average improvement in recommendation accuracy across all datasets with GPT-3.5-turbo and GPT-4. These findings highlight the potential of using LLMs to extract and utilize implicit conversational information, effectively enhancing the quality of recommendations in CRSs.
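The abstract describes a two-stage pipeline: an LLM first extracts implicit user preferences from the conversation, and a BERT-based multi-label classifier then refines them into fine-grained numerical values that drive the recommendation. The following minimal Python sketch illustrates the shape of that design only; the function names and the keyword rules standing in for the GPT and BERT components are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage pipeline from the abstract:
# (1) extract implicit preference labels from dialogue turns (the LLM
#     step, stubbed here with keyword rules so the sketch runs offline),
# (2) refine labels into numerical per-attribute scores (the BERT-based
#     multi-label classifier step, stubbed with 0/1 scores), then
# (3) use the scores to rank candidate items.

GENRES = ["comedy", "horror", "romance", "sci-fi"]

def extract_implicit_preferences(dialogue: list[str]) -> list[str]:
    """Stand-in for the LLM extraction step: pick up genre cues the
    user implies without stating a genre outright."""
    cues = {
        "laugh": "comedy", "funny": "comedy",
        "scared": "horror", "creepy": "horror",
        "love story": "romance", "space": "sci-fi",
    }
    found = []
    for turn in dialogue:
        for keyword, genre in cues.items():
            if keyword in turn.lower() and genre not in found:
                found.append(genre)
    return found

def score_preferences(preferences: list[str]) -> dict[str, float]:
    """Stand-in for the multi-label classifier: map extracted labels
    to numerical scores in [0, 1] (a real model would output
    calibrated probabilities per label)."""
    return {g: (1.0 if g in preferences else 0.0) for g in GENRES}

def rank_items(items: dict[str, list[str]],
               scores: dict[str, float]) -> list[str]:
    """Rank candidate items by the summed scores of their genres."""
    return sorted(items, key=lambda t: -sum(scores.get(g, 0.0) for g in items[t]))

dialogue = ["I want something that makes me laugh.", "Maybe set in space?"]
scores = score_preferences(extract_implicit_preferences(dialogue))
catalog = {"Alien": ["horror", "sci-fi"],
           "Galaxy Quest": ["comedy", "sci-fi"],
           "The Notebook": ["romance"]}
print(rank_items(catalog, scores)[0])  # prints "Galaxy Quest"
```

The deterministic stubs make the control flow visible: the actual system would replace `extract_implicit_preferences` with a prompted GPT-3.5-turbo/GPT-4 call and `score_preferences` with a fine-tuned BERT multi-label head.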
id doaj-art-988f43a7dbbb45d981a37e6fb2227bde
institution Kabale University
issn 2227-7390
spelling doaj-art-988f43a7dbbb45d981a37e6fb2227bde (2025-01-24T13:39:47Z)
doi 10.3390/math13020221
citation Mathematics, vol. 13, no. 2, art. 221 (2025-01-01), MDPI AG, ISSN 2227-7390
affiliation Woo-Seok Kim: Department of Computer Science and Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea
affiliation Seongho Lim: Digital Division, National Forensic Service, Wonju 26460, Republic of Korea
affiliation Gun-Woo Kim: Department of Computer Science and Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea
affiliation Sang-Min Choi: Department of Computer Science and Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea
title Extracting Implicit User Preferences in Conversational Recommender Systems Using Large Language Models
topic conversational recommender systems
large language models
implicit user preference
classification