Reducing Artificial Intelligence Costs in Business through Prompt Optimization
This study investigates the optimization of token consumption in large language models (LLMs) through prompt engineering, specifically comparing full-sentence prompts with keyword-based alternatives. Analyzing data from multiple LLM providers across four task types (Question-Answer, Duty, Summary, and Creativity), the research examined token usage patterns and response quality metrics. The study utilized a comprehensive dataset (N=1,231) and employed various evaluation methods, including BERTScore, ROUGE-L, and perplexity analysis. Results demonstrated significant token savings with keyword-based prompts (a 16.7% reduction in cost) while maintaining comparable response quality.
| Main Author: | Emre Akadal |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IJMADA, 2025-05-01 |
| Series: | International Journal of Management and Data Analytics |
| Subjects: | Generative Artificial Intelligence; Cost Optimization; Prompt Engineering |
| Online Access: | https://ijmada.com/index.php/ijmada/article/view/81 |
| _version_ | 1850198563194142720 |
|---|---|
| author | Emre Akadal |
| author_facet | Emre Akadal |
| author_sort | Emre Akadal |
| collection | DOAJ |
| description |
This study investigates the optimization of token consumption in large language models (LLMs) through prompt engineering, specifically comparing full-sentence prompts with keyword-based alternatives. Analyzing data from multiple LLM providers across four task types (Question-Answer, Duty, Summary, and Creativity), the research examined token usage patterns and response quality metrics. The study utilized a comprehensive dataset (N=1,231) and employed various evaluation methods, including BERTScore, ROUGE-L, and perplexity analysis. Results demonstrated significant token savings with keyword-based prompts (a 16.7% reduction in cost) while maintaining comparable response quality. Analysis revealed task-specific variations in performance, with duty-related tasks showing no significant quality degradation, while question-answering and summary tasks exhibited minimal quality differences. The findings suggest that keyword-based prompting offers a viable cost optimization strategy for businesses implementing LLM solutions, particularly in duty-related applications. Statistical analysis confirmed significant differences in token consumption (p < .001) with substantial effect sizes, while quality metrics showed only marginal decreases in semantic similarity (ΔBERTScore = -0.005) and surface-level similarity (ΔROUGE-L = -0.019). This research provides practical insights for organizations seeking to optimize their LLM implementation costs while maintaining response quality.
|
| format | Article |
| id | doaj-art-343aeb3f3f4f4a92b55e48d231eead53 |
| institution | OA Journals |
| issn | 2816-9395 |
| language | English |
| publishDate | 2025-05-01 |
| publisher | IJMADA |
| record_format | Article |
| series | International Journal of Management and Data Analytics |
| spelling | doaj-art-343aeb3f3f4f4a92b55e48d231eead532025-08-20T02:12:50ZengIJMADAInternational Journal of Management and Data Analytics2816-93952025-05-0151Reducing Artificial Intelligence Costs in Business through Prompt OptimizationEmre Akadal0Istanbul University This study investigates the optimization of token consumption in large language models (LLMs) through prompt engineering, specifically comparing full-sentence prompts with keyword-based alternatives. Analyzing data from multiple LLM providers across four task types (Question-Answer, Duty, Summary, and Creativity), the research examined token usage patterns and response quality metrics. The study utilized a comprehensive dataset (N=1,231) and employed various evaluation methods, including BERTScore, ROUGE-L, and perplexity analysis. Results demonstrated significant token savings with keyword-based prompts (reduction in cost of 16,7%) while maintaining comparable response quality. Analysis revealed task-specific variations in performance, with duty-related tasks showing no significant quality degradation, while question-answering and summary tasks exhibited minimal quality differences. The findings suggest that keyword-based prompting offers a viable cost optimization strategy for businesses implementing LLM solutions, particularly in duty-related applications. Statistical analysis confirmed significant differences in token consumption (p < .001) with substantial effect sizes, while quality metrics showed only marginal decreases in semantic similarity (ΔBERTScore = -0.005) and surface-level similarity (ΔROUGE-L = -0.019). This research provides practical insights for organizations seeking to optimize their LLM implementation costs while maintaining response quality. https://ijmada.com/index.php/ijmada/article/view/81Generative Artificial IntelligenceCost OptimizationPrompt Engineering |
| spellingShingle | Emre Akadal Reducing Artificial Intelligence Costs in Business through Prompt Optimization International Journal of Management and Data Analytics Generative Artificial Intelligence Cost Optimization Prompt Engineering |
| title | Reducing Artificial Intelligence Costs in Business through Prompt Optimization |
| title_full | Reducing Artificial Intelligence Costs in Business through Prompt Optimization |
| title_fullStr | Reducing Artificial Intelligence Costs in Business through Prompt Optimization |
| title_full_unstemmed | Reducing Artificial Intelligence Costs in Business through Prompt Optimization |
| title_short | Reducing Artificial Intelligence Costs in Business through Prompt Optimization |
| title_sort | reducing artificial intelligence costs in business through prompt optimization |
| topic | Generative Artificial Intelligence Cost Optimization Prompt Engineering |
| url | https://ijmada.com/index.php/ijmada/article/view/81 |
| work_keys_str_mv | AT emreakadal reducingartificialintelligencecostsinbusinessthroughpromptoptimization |
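The cost mechanism the abstract describes (fewer prompt tokens, lower per-request cost) can be illustrated with a minimal sketch. Note the assumptions: the tokenizer below is a naive whitespace proxy rather than a provider's BPE tokenizer, and both prompts are invented examples, so the study's 16.7% figure will not reproduce here.

```python
# Sketch: estimate token savings from keyword-based prompting.
# Assumption: a crude whitespace split stands in for a provider's BPE
# tokenizer; real billing uses the provider's own token counts.

def count_tokens(prompt: str) -> int:
    """Crude token count: number of whitespace-separated pieces."""
    return len(prompt.split())

def cost_reduction(full: str, keyword: str) -> float:
    """Fractional reduction in prompt tokens when switching styles."""
    full_n = count_tokens(full)
    return (full_n - count_tokens(keyword)) / full_n

# Hypothetical prompt pair for a Summary-type task (invented examples).
full_sentence = ("Please read the following article carefully and then "
                 "write a concise summary of its main findings.")
keyword_based = "summarize article: main findings, concise"

saving = cost_reduction(full_sentence, keyword_based)
print(f"tokens: {count_tokens(full_sentence)} -> "
      f"{count_tokens(keyword_based)}, saving {saving:.1%}")
```

In practice the same comparison would be run with the provider's actual tokenizer, since BPE token counts differ from word counts, and the quality side of the trade-off (BERTScore, ROUGE-L) must be measured separately.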