Efficient Prompt Optimization for Relevance Evaluation via LLM-Based Confusion Matrix Feedback
Evaluating query-passage relevance is a crucial task in information retrieval (IR), one on which the performance of large language models (LLMs) depends heavily on prompt quality. Current prompt optimization methods typically require multiple candidate generations or iterative refinements, resulting…
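The abstract describes feeding the LLM a summary of its own confusion matrix so the prompt can be revised in a single pass rather than through many candidate generations. A minimal sketch of that idea follows; `judge_relevance()` is a hypothetical placeholder for an LLM call, and the names and feedback wording are illustrative, not the paper's actual implementation.

```python
# Sketch: turn an LLM relevance judge's confusion matrix into textual
# feedback for a single prompt-revision step (illustrative, not the
# paper's method; judge_relevance() is a hypothetical placeholder).

from typing import List, Tuple

def judge_relevance(prompt: str, query: str, passage: str) -> bool:
    """Hypothetical LLM call: True if the passage is judged relevant."""
    raise NotImplementedError("replace with a real LLM API call")

def confusion_matrix_feedback(
    prompt: str, labeled_pairs: List[Tuple[str, str, bool]]
) -> str:
    """Score the prompt on labeled (query, passage, is_relevant) pairs
    and summarize its errors as feedback for one refinement pass."""
    tp = fp = fn = tn = 0
    for query, passage, gold in labeled_pairs:
        pred = judge_relevance(prompt, query, passage)
        if pred and gold:
            tp += 1          # correctly judged relevant
        elif pred and not gold:
            fp += 1          # irrelevant passage marked relevant
        elif not pred and gold:
            fn += 1          # relevant passage missed
        else:
            tn += 1          # correctly judged irrelevant
    return (
        f"Evaluation results: TP={tp}, FP={fp}, FN={fn}, TN={tn}. "
        f"The prompt wrongly marked {fp} irrelevant passages as relevant "
        f"and missed {fn} relevant ones. Revise it to reduce these errors."
    )
```

Under this sketch, the returned feedback string would be appended to a meta-prompt asking the LLM to rewrite the evaluation prompt once, avoiding repeated candidate generation.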
| Main Author: | Jaekeol Choi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-05-01 |
| Series: | Applied Sciences |
| Online Access: | https://www.mdpi.com/2076-3417/15/9/5198 |
Similar Items
- Confusion Matrices: A Unified Theory
  by: Johan Erbani, et al.
  Published: (2024-01-01)
- Evaluating the capability of large language models in characterising relational feedback: A comparative analysis of prompting strategies
  by: Wei Dai, et al.
  Published: (2025-06-01)
- A Comparative Analysis of Machine Learning Algorithms for Classification of Diabetes Utilizing Confusion Matrix Analysis
  by: Maad M. Mijwil, et al.
  Published: (2024-05-01)
- The Promises and Pitfalls of Large Language Models as Feedback Providers: A Study of Prompt Engineering and the Quality of AI-Driven Feedback
  by: Lucas Jasper Jacobsen, et al.
  Published: (2025-02-01)
- Enhancing addition fact fluency in children with mild intellectual disabilities: simultaneous prompting with performance feedback
  by: Nesrin Sönmez, et al.
  Published: (2025-08-01)