A Multi-Criteria Comparison of Large Language Model Powered Assistants in Pre-Research Studies for the Academia

Bibliographic Details
Main Authors: Murat Akin, Gul Didem Batur Sir, Ayyuce Aydemir Karadag, Hakan Cercioglu
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: LLM-powered assistants; academic studies; multi-criteria decision-making; simple additive weighting method
Online Access:https://ieeexplore.ieee.org/document/11072164/
Volume: 13
Pages: 127086-127099
DOI: 10.1109/ACCESS.2025.3586502
ISSN: 2169-3536
Collection: DOAJ
Institution: Kabale University
Record ID: doaj-art-e8a63ccfb797468e8fd80d8b6e765fd7
Author Affiliations:
Murat Akin (https://orcid.org/0000-0003-0001-1036): TUSAŞ-Kazan Vocational School, Gazi University, Ankara, Türkiye
Gul Didem Batur Sir (https://orcid.org/0000-0002-5226-2964): Department of Industrial Engineering, Faculty of Engineering, Gazi University, Ankara, Türkiye
Ayyuce Aydemir Karadag (https://orcid.org/0000-0001-7586-5648): Department of Industrial Engineering, Faculty of Engineering, Gazi University, Ankara, Türkiye
Hakan Cercioglu (https://orcid.org/0000-0002-6271-6448): Department of Industrial Engineering, Faculty of Engineering, Gazi University, Ankara, Türkiye

Description
Large Language Models (LLMs), including Generative Pre-trained Transformers (GPT), have emerged as powerful tools in academic research and education. They offer capabilities ranging from language understanding to content generation and serve as the foundation for LLM-powered assistants (LLM-PAs) such as ChatGPT, DeepSeek, and Gemini, which facilitate interactive learning, research support, and intelligent tutoring. This study aims to guide researchers in choosing and ranking LLM-PA alternatives during the preliminary, pre-research phase of academic studies. Because selecting an appropriate alternative requires weighing a large number of distinct criteria, we conducted a multi-criteria comparison of LLM-PAs used in academic research. The assistants are evaluated on criteria covering performance metrics, user experience, ethical issues, and technical constraints; by examining the strengths and limitations of each tool across these dimensions, the study provides insight into their performance and suitability for academic applications. In the solution procedure, we first define the criteria and sub-criteria that affect the preferences and rank them with the G1 method; we then evaluate nine commonly used LLM-PAs with the Simple Additive Weighting method. According to the results, Gemini 2.0, Claude 3.7 Sonnet, and ChatGPT-4o are the most preferred tools.
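
The abstract describes a two-step multi-criteria procedure: criteria and sub-criteria are ranked and weighted with the G1 (order-relation) method, and nine assistants are then scored with the Simple Additive Weighting (SAW) method. The sketch below is a minimal illustration of that general procedure in Python, assuming the standard textbook formulations of G1 and SAW; the criteria names, ratio judgments, and scores are invented placeholders and do not come from the paper.

    # Minimal sketch of the two-step procedure outlined in the abstract:
    # (1) weight the ranked criteria with the G1 (order-relation) method,
    # (2) score the alternatives with Simple Additive Weighting (SAW).
    # All criteria, ratio judgments, and scores below are hypothetical
    # placeholders; they do not reproduce the paper's data or results.

    def g1_weights(ratios):
        """ratios[k] is the judged importance ratio w_k / w_(k+1) for criteria
        already sorted from most to least important; returns n = len(ratios) + 1
        weights that sum to 1."""
        n = len(ratios) + 1
        denom, prod = 1.0, 1.0
        for r in reversed(ratios):          # w_n = 1 / (1 + sum of cumulative ratio products)
            prod *= r
            denom += prod
        weights = [0.0] * n
        weights[-1] = 1.0 / denom
        for k in range(n - 2, -1, -1):      # w_k = r_k * w_(k+1)
            weights[k] = ratios[k] * weights[k + 1]
        return weights

    def saw_rank(matrix, weights, benefit):
        """matrix[i][j] is alternative i's raw score on criterion j; benefit[j]
        marks benefit (max-normalized) vs. cost (min-normalized) criteria.
        Returns (alternative index, score) pairs sorted from best to worst."""
        cols = list(zip(*matrix))
        scores = []
        for row in matrix:
            norm = [x / max(cols[j]) if benefit[j] else min(cols[j]) / x
                    for j, x in enumerate(row)]
            scores.append(sum(w * v for w, v in zip(weights, norm)))
        return sorted(enumerate(scores), key=lambda t: -t[1])

    # Hypothetical example: three criteria ranked performance > user experience
    # > monthly cost, with G1 ratios 1.4 and 1.2, and three assistants scored
    # on a 1-10 scale (cost in USD, treated as a cost criterion).
    w = g1_weights([1.4, 1.2])
    ranking = saw_rank([[9, 8, 20],
                        [8, 9, 15],
                        [7, 7, 10]],
                       w,
                       benefit=[True, True, False])
    print(w)        # criterion weights, roughly [0.43, 0.31, 0.26]
    print(ranking)  # alternatives ordered by weighted score

Each G1 ratio states how much more important a criterion is judged to be than the one ranked immediately below it; SAW then normalizes each criterion column (dividing by the column maximum for benefit criteria, or dividing the column minimum by the value for cost criteria) and orders the alternatives by their weighted sums.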