Understanding the effects of human-written paraphrases in LLM-generated text detection

Natural Language Generation has been developing rapidly with the advent of large language models (LLMs). While their usage has sparked significant attention from the general public, it is important for readers to be aware when a piece of text is LLM-generated. This has brought about the need for building models that enable automated LLM-generated text detection, with the aim of mitigating potential negative outcomes of such content. Existing LLM-generated text detectors show competitive performance in telling apart LLM-generated and human-written text, but this performance is likely to deteriorate when paraphrased texts are considered. In this study, we devise a new data collection strategy to collect the Human & LLM Paraphrase Collection (HLPC), a first-of-its-kind dataset that incorporates human-written texts and paraphrases, as well as LLM-generated texts and paraphrases. With the aim of understanding the effects of human-written paraphrases on the performance of the SOTA LLM-generated text detector OpenAI RoBERTa and of watermark detectors, we perform classification experiments that incorporate human-written paraphrases, watermarked and non-watermarked LLM-generated documents from GPT and OPT, and LLM-generated paraphrases from DIPPER and BART. The results show that the inclusion of human-written paraphrases has a significant impact on LLM-generated text detector performance, improving TPR@1%FPR with a possible trade-off in AUROC and accuracy.

Bibliographic Details
Main Authors: Hiu Ting Lau, Arkaitz Zubiaga
Format: Article
Language: English
Published: Elsevier 2025-06-01
Series: Natural Language Processing Journal
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2949719125000275
_version_ 1849688018515918848
author Hiu Ting Lau
Arkaitz Zubiaga
author_facet Hiu Ting Lau
Arkaitz Zubiaga
author_sort Hiu Ting Lau
collection DOAJ
description Natural Language Generation has been developing rapidly with the advent of large language models (LLMs). While their usage has sparked significant attention from the general public, it is important for readers to be aware when a piece of text is LLM-generated. This has brought about the need for building models that enable automated LLM-generated text detection, with the aim of mitigating potential negative outcomes of such content. Existing LLM-generated text detectors show competitive performance in telling apart LLM-generated and human-written text, but this performance is likely to deteriorate when paraphrased texts are considered. In this study, we devise a new data collection strategy to collect the Human & LLM Paraphrase Collection (HLPC), a first-of-its-kind dataset that incorporates human-written texts and paraphrases, as well as LLM-generated texts and paraphrases. With the aim of understanding the effects of human-written paraphrases on the performance of the SOTA LLM-generated text detector OpenAI RoBERTa and of watermark detectors, we perform classification experiments that incorporate human-written paraphrases, watermarked and non-watermarked LLM-generated documents from GPT and OPT, and LLM-generated paraphrases from DIPPER and BART. The results show that the inclusion of human-written paraphrases has a significant impact on LLM-generated text detector performance, improving TPR@1%FPR with a possible trade-off in AUROC and accuracy.
format Article
id doaj-art-6b41242b46ed4ca5ba7ebfc9538f718b
institution DOAJ
issn 2949-7191
language English
publishDate 2025-06-01
publisher Elsevier
record_format Article
series Natural Language Processing Journal
spelling doaj-art-6b41242b46ed4ca5ba7ebfc9538f718b2025-08-20T03:22:09ZengElsevierNatural Language Processing Journal2949-71912025-06-011110015110.1016/j.nlp.2025.100151Understanding the effects of human-written paraphrases in LLM-generated text detectionHiu Ting Lau0Arkaitz Zubiaga1School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, United KingdomCorresponding author.; School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, United KingdomNatural Language Generation has been developing rapidly with the advent of large language models (LLMs). While their usage has sparked significant attention from the general public, it is important for readers to be aware when a piece of text is LLM-generated. This has brought about the need for building models that enable automated LLM-generated text detection, with the aim of mitigating potential negative outcomes of such content. Existing LLM-generated text detectors show competitive performance in telling apart LLM-generated and human-written text, but this performance is likely to deteriorate when paraphrased texts are considered. In this study, we devise a new data collection strategy to collect the Human & LLM Paraphrase Collection (HLPC), a first-of-its-kind dataset that incorporates human-written texts and paraphrases, as well as LLM-generated texts and paraphrases. With the aim of understanding the effects of human-written paraphrases on the performance of the SOTA LLM-generated text detector OpenAI RoBERTa and of watermark detectors, we perform classification experiments that incorporate human-written paraphrases, watermarked and non-watermarked LLM-generated documents from GPT and OPT, and LLM-generated paraphrases from DIPPER and BART. The results show that the inclusion of human-written paraphrases has a significant impact on LLM-generated text detector performance, improving TPR@1%FPR with a possible trade-off in AUROC and accuracy.http://www.sciencedirect.com/science/article/pii/S2949719125000275LLM-generated text detectionHuman-written paraphrasesLarge language models
spellingShingle Hiu Ting Lau
Arkaitz Zubiaga
Understanding the effects of human-written paraphrases in LLM-generated text detection
Natural Language Processing Journal
LLM-generated text detection
Human-written paraphrases
Large language models
title Understanding the effects of human-written paraphrases in LLM-generated text detection
title_full Understanding the effects of human-written paraphrases in LLM-generated text detection
title_fullStr Understanding the effects of human-written paraphrases in LLM-generated text detection
title_full_unstemmed Understanding the effects of human-written paraphrases in LLM-generated text detection
title_short Understanding the effects of human-written paraphrases in LLM-generated text detection
title_sort understanding the effects of human written paraphrases in llm generated text detection
topic LLM-generated text detection
Human-written paraphrases
Large language models
url http://www.sciencedirect.com/science/article/pii/S2949719125000275
work_keys_str_mv AT hiutinglau understandingtheeffectsofhumanwrittenparaphrasesinllmgeneratedtextdetection
AT arkaitzzubiaga understandingtheeffectsofhumanwrittenparaphrasesinllmgeneratedtextdetection