Evaluating different translation methods: a case study of Chinese graduate students in an MTI program
This paper compares three approaches to translation, using several evaluation methods: (i) from-scratch human translation (HT) and (ii) machine translation combined with post-editing (MTPE), both produced by two independent groups of translation students, and (iii) machine translation (MT), produced by Baidu Chinese-to-English Translation. A number of unsupervised automated metrics (BLEU, METEOR and ROUGE) were used to compare the quality of the three types of translation output. Complementary to this, a subjective evaluation method was applied to calculate the frequency of various translation errors in a semi-automatic way. The two methods yielded similar pictures of quality: in this test, both MT and MTPE significantly outperformed from-scratch human translation, while the differences between MT and MTPE were not significant. Lexical and grammatical mistakes accounted for the highest proportion of errors across the translated texts, and data and textual analysis identified different post-editing (PE) styles. The results have implications for translation training and the further development of MT research.
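The automatic part of the comparison relies on standard reference-based metrics. As a minimal, illustrative sketch only (the paper does not publish its scoring code, so the library choices nltk and rouge_score and the toy sentences below are assumptions), scores of this kind can be computed as follows:

```python
# Minimal illustration of reference-based scoring with BLEU, METEOR and ROUGE.
# NOT the authors' pipeline: library choices (nltk, rouge_score) and the toy
# sentences are assumptions made for demonstration only.
# Requirements: pip install nltk rouge-score ; METEOR also needs nltk.download('wordnet')
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

reference = "The committee approved the new policy after a long debate."  # human reference
candidates = {
    "HT":   "The committee passed new policy after long debates.",
    "MTPE": "The committee approved the new policy after a lengthy debate.",
    "MT":   "The committee approved the new policy after long debate.",
}

rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
smooth = SmoothingFunction().method1  # smoothing avoids zero BLEU on short sentences

for label, hyp in candidates.items():
    ref_tok, hyp_tok = reference.split(), hyp.split()
    bleu = sentence_bleu([ref_tok], hyp_tok, smoothing_function=smooth)
    meteor = meteor_score([ref_tok], hyp_tok)             # tokenized inputs (nltk >= 3.6)
    rouge_l = rouge.score(reference, hyp)["rougeL"].fmeasure
    print(f"{label}: BLEU={bleu:.3f}  METEOR={meteor:.3f}  ROUGE-L={rouge_l:.3f}")
```

Sentence-level BLEU is noisy, so corpus-level tools such as sacrebleu are usually preferred for reporting; the sketch above is only meant to show what the three metrics measure.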
| Main Authors: | Tingting Lei, Jeroen van de Weijer |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Taylor & Francis Group, 2025-12-01 |
| Series: | Cogent Arts & Humanities |
| Subjects: | Error frequencies; human translation; machine translation; post-editing; translation training; unsupervised evaluation metrics |
| Online Access: | https://www.tandfonline.com/doi/10.1080/23311983.2025.2511386 |
| author | Tingting Lei; Jeroen van de Weijer |
|---|---|
| affiliation | College of International Studies, Shenzhen University, Shenzhen, China (both authors) |
| collection | DOAJ |
| id | doaj-art-0b625feba8ef4c1a833f0639a9ef7e28 |
| institution | Kabale University |
| issn | 2331-1983 |
| volume/issue | 12 (1), 2025-12-01 |
| doi | 10.1080/23311983.2025.2511386 |