TURKISH-TO-ENGLISH SHORT STORY TRANSLATION BY DEEPL: HUMAN EVALUATION BY TRAINEES AND TRANSLATION PROFESSIONALS VS. AUTOMATIC EVALUATION

Bibliographic Details
Main Author: Halise Gülmüş Sırkıntı
Format: Article
Language: English
Published: New Bulgarian University 2025-06-01
Series:English Studies at NBU
Subjects: literary translation; machine translation evaluation; human evaluation; automatic evaluation; BLEU
Online Access:https://esnbu.org/data/files/2025/esnbu.25.1.2.pdf
author Halise Gülmüş Sırkıntı
author_facet Halise Gülmüş Sırkıntı
author_sort Halise Gülmüş Sırkıntı
collection DOAJ
description This mixed-methods study aims to evaluate the quality of Turkish-to-English literary machine translation by DeepL, incorporating both human and automatic evaluation metrics while engaging translation trainees and professional translators. The raw MT output of two short stories, Mendil Altında and Kabak Çekirdekçi, was evaluated by both groups via the TAUS DQF tool, and the evaluators wrote reports on the detected errors. Additionally, BLEU was employed for automatic evaluation. The results indicate a consensus between trainees and professionals in assessing MT accuracy and fluency. Accuracy rates were 80.59% and 80.50% for Mendil Altında, and 73.08% and 82.35% for Kabak Çekirdekçi. Fluency rates were similarly close: 71.96% and 72.32% for Mendil Altında, and 66.81% and 62.09% for Kabak Çekirdekçi. BLEU scores, particularly the 1-gram results, align with the human evaluators' judgments. Furthermore, the reports show that trainees provided more detailed analyses, frequently using meta-language, which suggests that increased exposure to metrics enhances trainees' ability to identify fine-grained MT errors.
format Article
id doaj-art-d449bb0f0d264afbb03f5f8bb3d10c83
institution OA Journals
issn 2367-5705
2367-8704
language English
publishDate 2025-06-01
publisher New Bulgarian University
record_format Article
series English Studies at NBU
spelling doaj-art-d449bb0f0d264afbb03f5f8bb3d10c83
2025-08-20T02:21:58Z
eng
New Bulgarian University
English Studies at NBU
2367-5705
2367-8704
2025-06-01
Vol. 11, No. 1, pp. 17-42
10.33919/esnbu.25.1.2
TURKISH-TO-ENGLISH SHORT STORY TRANSLATION BY DEEPL: HUMAN EVALUATION BY TRAINEES AND TRANSLATION PROFESSIONALS VS. AUTOMATIC EVALUATION
Halise Gülmüş Sırkıntı
https://orcid.org/0000-0002-6585-5961
Marmara University, Istanbul, Türkiye
This mixed-methods study aims to evaluate the quality of Turkish-to-English literary machine translation by DeepL, incorporating both human and automatic evaluation metrics while engaging translation trainees and professional translators. The raw MT output of two short stories, Mendil Altında and Kabak Çekirdekçi, was evaluated by both groups via the TAUS DQF tool, and the evaluators wrote reports on the detected errors. Additionally, BLEU was employed for automatic evaluation. The results indicate a consensus between trainees and professionals in assessing MT accuracy and fluency. Accuracy rates were 80.59% and 80.50% for Mendil Altında, and 73.08% and 82.35% for Kabak Çekirdekçi. Fluency rates were similarly close: 71.96% and 72.32% for Mendil Altında, and 66.81% and 62.09% for Kabak Çekirdekçi. BLEU scores, particularly the 1-gram results, align with the human evaluators' judgments. Furthermore, the reports show that trainees provided more detailed analyses, frequently using meta-language, which suggests that increased exposure to metrics enhances trainees' ability to identify fine-grained MT errors.
https://esnbu.org/data/files/2025/esnbu.25.1.2.pdf
literary translation
machine translation evaluation
human evaluation
automatic evaluation
bleu
spellingShingle Halise Gülmüş Sırkıntı
TURKISH-TO-ENGLISH SHORT STORY TRANSLATION BY DEEPL: HUMAN EVALUATION BY TRAINEES AND TRANSLATION PROFESSIONALS VS. AUTOMATIC EVALUATION
English Studies at NBU
literary translation
machine translation evaluation
human evaluation
automatic evaluation
bleu
title TURKISH-TO-ENGLISH SHORT STORY TRANSLATION BY DEEPL: HUMAN EVALUATION BY TRAINEES AND TRANSLATION PROFESSIONALS VS. AUTOMATIC EVALUATION
title_full TURKISH-TO-ENGLISH SHORT STORY TRANSLATION BY DEEPL: HUMAN EVALUATION BY TRAINEES AND TRANSLATION PROFESSIONALS VS. AUTOMATIC EVALUATION
title_fullStr TURKISH-TO-ENGLISH SHORT STORY TRANSLATION BY DEEPL: HUMAN EVALUATION BY TRAINEES AND TRANSLATION PROFESSIONALS VS. AUTOMATIC EVALUATION
title_full_unstemmed TURKISH-TO-ENGLISH SHORT STORY TRANSLATION BY DEEPL: HUMAN EVALUATION BY TRAINEES AND TRANSLATION PROFESSIONALS VS. AUTOMATIC EVALUATION
title_short TURKISH-TO-ENGLISH SHORT STORY TRANSLATION BY DEEPL: HUMAN EVALUATION BY TRAINEES AND TRANSLATION PROFESSIONALS VS. AUTOMATIC EVALUATION
title_sort turkish to english short story translation by deepl human evaluation by trainees and translation professionals vs automatic evaluation
topic literary translation
machine translation evaluation
human evaluation
automatic evaluation
bleu
url https://esnbu.org/data/files/2025/esnbu.25.1.2.pdf
work_keys_str_mv AT halisegulmussırkıntı turkishtoenglishshortstorytranslationbydeeplhumanevaluationbytraineesandtranslationprofessionalsvsautomaticevaluation
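
The abstract reports that BLEU scores, especially the 1-gram results, align with the human evaluators' judgments. The article does not publish its scoring pipeline, so the following is only a minimal Python sketch of how standard BLEU and 1-gram BLEU can be computed with NLTK; the sentence pair below is invented purely for illustration and is not taken from the study's data.

```python
# Minimal sketch: corpus-level BLEU and 1-gram BLEU with NLTK.
# The hypothesis/reference pair is hypothetical, not from the study.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# MT output (e.g., raw DeepL) and a human reference translation, tokenized.
hypotheses = [
    "Under the handkerchief there was a small loaf of bread .".split(),
]
# Each hypothesis may have several references; here, just one.
references = [
    ["Beneath the handkerchief lay a small loaf of bread .".split()],
]

# Smoothing avoids zero scores when higher-order n-grams have no matches,
# which is common for short literary segments.
smooth = SmoothingFunction().method1

# Standard BLEU: uniform weights over 1- to 4-gram precisions.
bleu = corpus_bleu(references, hypotheses, smoothing_function=smooth)

# 1-gram BLEU: all weight on unigram precision, the variant the
# abstract singles out as tracking human judgments most closely.
bleu_1 = corpus_bleu(references, hypotheses, weights=(1, 0, 0, 0),
                     smoothing_function=smooth)

print(f"BLEU:   {bleu:.4f}")
print(f"BLEU-1: {bleu_1:.4f}")
```

Because 1-gram BLEU ignores word order, it rewards lexical overlap even when phrasing diverges, which may help explain why it aligned more closely with human accuracy and fluency ratings on literary text than the standard 4-gram formulation.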