Prompt Engineering for evaluators: optimizing LLMs to judge linguistic proficiency

Prompt Engineering, the practice of optimizing the queries posed to a Large Language Model, is closely linked to evaluation procedures. Depending on the type of task performed through LLMs, the evaluation metric may have high or low reliability, making Prompt Engineering more or...

Bibliographic Details
Main Author: Lorenzo Gregori
Format: Article
Language: German
Published: PUBLIA – SLUB Open Publishing 2025-07-01
Series: AI-Linguistica
Subjects:
Online Access: https://ai-ling.publia.org/ai_ling/article/view/22