Evaluating Large Language Models in Code Generation: INFINITE Methodology for Defining the Inference Index
This study introduces a new methodology for an Inference Index (InI) called the Inference Index In Testing Model Effectiveness methodology (INFINITE), aiming to evaluate the performance of Large Language Models (LLMs) in code generation tasks. The InI index provides a comprehensive assessment focusi...
| Main Authors: | Nicholas Christakis, Dimitris Drikakis |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-03-01 |
| Series: | Applied Sciences |
| Online Access: | https://www.mdpi.com/2076-3417/15/7/3784 |
Similar Items
- BALI—A Benchmark for Accelerated Language Model Inference, by: Lena Jurkschat, et al. Published: (2025-01-01)
- Entropy-Guided KV Caching for Efficient LLM Inference, by: Heekyum Kim, et al. Published: (2025-07-01)
- Matching Game Preferences Through Dialogical Large Language Models: A Perspective, by: Renaud Fabre, et al. Published: (2025-07-01)
- Performance analysis of advanced deep learning technics: Application to solar energy forecasting and management in several cities in Chad, by: Osée Mounkang, et al. Published: (2025-01-01)
- The performance of the LSTM-based code generated by Large Language Models (LLMs) in forecasting time series data, by: Saroj Gopali, et al. Published: (2024-12-01)