Evaluating Coding Proficiency of Large Language Models: An Investigation Through Machine Learning Problems
Large Language Models (LLMs) have demonstrated remarkable capabilities across various domains, but their effectiveness in coding workflows, particularly in machine learning (ML), requires deeper evaluation. This paper investigates the coding proficiency of LLMs such as GPT and Gemini by benchmarking...
Saved in:
| Main Authors: | Eunbi Ko, Pilsung Kang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10937484/ |
Similar Items
- Can large language models generate geospatial code?
  by: Shuyang Hou, et al.
  Published: (2025-08-01)
- An automated information extraction model for unstructured discharge letters using large language models and GPT-4
  by: Robert M. Siepmann, et al.
  Published: (2025-06-01)
- Using Large Language Models for Aerospace Code Generation: Methods, Benchmarks, and Potential Values
  by: Rui He, et al.
  Published: (2025-05-01)
- Bridging neuroscience and AI: a survey on large language models for neurological signal interpretation
  by: Sreejith Chandrasekharan, et al.
  Published: (2025-06-01)
- MAGECODE: Machine-Generated Code Detection Method Using Large Language Models
  by: Hung Pham, et al.
  Published: (2024-01-01)