Evaluation of Generative AI Models in Python Code Generation: A Comparative Study
This study evaluates leading generative AI models for Python code generation. Evaluation criteria include syntax accuracy, response time, completeness, reliability, and cost. The models tested comprise OpenAI’s GPT series (GPT-4 Turbo, GPT-4o, GPT-4o Mini, GPT-3.5 Turbo), Google’s ...
Saved in:
| Main Authors: | Dominik Palla, Antonin Slaby |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10963975/ |
Similar Items
- Measuring and Improving the Efficiency of Python Code Generated by LLMs Using CoT Prompting and Fine-Tuning
  by: Ramya Jonnala, et al.
  Published: (2025-01-01)
- Rich Data Versus Quantity of Data in Code Generation AI: A Paradigm Shift for Healthcare
  by: Muthu Ramachandran, et al.
  Published: (2025-06-01)
- Beyond Snippet Assistance: A Workflow-Centric Framework for End-to-End AI-Driven Code Generation
  by: Vladimir Sonkin, et al.
  Published: (2025-03-01)
- Generative AI in Cybersecurity: A Comprehensive Review of LLM Applications and Vulnerabilities
  by: Mohamed Amine Ferrag, et al.
  Published: (2025-01-01)
- Quality Assurance and Validity of AI-Generated Single Best Answer Questions
  by: Ayla Ahmed, et al.
  Published: (2025-02-01)