Evaluating Large Language Models in Code Generation: INFINITE Methodology for Defining the Inference Index

Bibliographic Details
Main Authors: Nicholas Christakis, Dimitris Drikakis
Format: Article
Language: English
Published: MDPI AG, 2025-03-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/7/3784
Description
Summary: This study introduces a new methodology for an Inference Index (InI), called the Inference Index In Testing Model Effectiveness (INFINITE) methodology, aimed at evaluating the performance of Large Language Models (LLMs) in code-generation tasks. The InI index provides a comprehensive assessment focusing on three key components: efficiency, consistency, and accuracy. This approach encapsulates time-based efficiency, response quality, and the stability of model outputs, offering a thorough understanding of LLM performance beyond traditional accuracy metrics. We apply this methodology to compare OpenAI’s GPT-4o (GPT), OpenAI-o1 pro (OAI1), and OpenAI-o3 mini-high (OAI3) in generating Python code for two tasks: a data-cleaning and statistical-computation task, and a Long Short-Term Memory (LSTM) model-generation task for forecasting meteorological variables such as temperature, relative humidity, and wind speed. Our findings demonstrate that GPT outperforms OAI1 and performs comparably to OAI3 in accuracy and workflow efficiency. The study reveals that, with effective prompting and refinement, LLM-assisted code generation can produce results similar to those of expert-designed models. GPT’s performance advantage highlights the benefits of widespread use and user feedback. These findings contribute to advancing AI-assisted software development, providing a structured approach for evaluating LLMs in coding tasks and laying the groundwork for future studies on broader model comparisons and expanded assessment frameworks.
ISSN: 2076-3417
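
Note: The abstract describes InI as a composite of efficiency, consistency, and accuracy but does not reproduce the published formula. The Python sketch below is a hypothetical illustration of how such a composite score might be assembled; the component definitions, the normalization against a fixed time budget, and the equal weights are assumptions of this sketch, not the paper's method.

# Hypothetical sketch of an InI-style composite score. Component
# definitions, normalization, and equal weighting are illustrative
# assumptions, not the formula published in the paper.

from dataclasses import dataclass

@dataclass
class RunRecord:
    """One LLM code-generation attempt on a task."""
    response_time_s: float  # wall-clock time to obtain working code
    passed: bool            # did the generated code meet the task spec?
    output_error: float     # normalized error vs. a reference solution (0..1)

def inference_index(runs: list[RunRecord],
                    max_time_s: float = 600.0,
                    weights: tuple[float, float, float] = (1/3, 1/3, 1/3)) -> float:
    """Combine efficiency, consistency, and accuracy into one 0-1 score."""
    # Efficiency: faster runs score higher, normalized against a time budget.
    efficiency = sum(max(0.0, 1.0 - r.response_time_s / max_time_s)
                     for r in runs) / len(runs)
    # Consistency: fraction of runs that produced working code.
    consistency = sum(r.passed for r in runs) / len(runs)
    # Accuracy: mean closeness of outputs to the reference solution.
    accuracy = sum(max(0.0, 1.0 - r.output_error) for r in runs) / len(runs)
    w_e, w_c, w_a = weights
    return w_e * efficiency + w_c * consistency + w_a * accuracy

# Example: three attempts by one model on the same prompt.
runs = [RunRecord(45.0, True, 0.05),
        RunRecord(60.0, True, 0.08),
        RunRecord(120.0, False, 1.00)]
print(f"InI = {inference_index(runs):.3f}")

Averaging each component over repeated runs is what lets a score of this shape capture output stability (the abstract's "consistency") alongside speed and correctness, rather than rewarding a single lucky generation.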