How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey

Bibliographic Details
Main Authors: Antonija Mijatović, Marija Franka Žuljević, Luka Ursić, Nensi Bralić, Miro Vuković, Marija Roguljić, Ana Marušić
Format: Article
Language: English
Published: BMC 2025-08-01
Series: Research Integrity and Peer Review
Subjects:
Online Access: https://doi.org/10.1186/s41073-025-00172-0
_version_ 1849389510099468288
author Antonija Mijatović
Marija Franka Žuljević
Luka Ursić
Nensi Bralić
Miro Vuković
Marija Roguljić
Ana Marušić
author_facet Antonija Mijatović
Marija Franka Žuljević
Luka Ursić
Nensi Bralić
Miro Vuković
Marija Roguljić
Ana Marušić
author_sort Antonija Mijatović
collection DOAJ
description Abstract Background Inappropriate manipulations of digital images pose significant risks to research integrity. Here we assessed the capability of students and researchers to detect image duplications in biomedical images. Methods We conducted a pen-and-paper survey involving medical students who had been exposed to research paper images during their studies, as well as active researchers. We asked them to identify duplications in images of Western blots, cell cultures, and histological sections and evaluated their performance based on the number of correctly and incorrectly detected duplications. Results A total of 831 students and 26 researchers completed the survey during the 2023/2024 academic year. Out of 34 duplications of 21 unique image parts, the students correctly identified a median of 10 duplications (interquartile range [IQR] = 8–13) and made 2 mistakes (IQR = 1–4), whereas the researchers identified a median of 11 duplications (IQR = 8–14) and made 1 mistake (IQR = 1–3). There were no significant differences between the two groups in either the number of correctly detected duplications (p = .271, Cliff’s δ = 0.126) or the number of mistakes (p = .731, Cliff’s δ = 0.039). Both students and researchers identified a higher percentage of duplications in the Western blot images than in cell or tissue images (p < .005 and Cohen’s d = 0.72; p < .005 and Cohen’s d = 1.01, respectively). For students, gender was a weak predictor of performance, with female participants finding slightly more duplications (p < .005, Cliff's δ = 0.158) but making more mistakes (p < .005, Cliff's δ = 0.239). The study year had no significant impact on student performance (p = .209; Cliff's δ = 0.085). Conclusions Despite differences in expertise, both students and researchers demonstrated limited proficiency in detecting duplications in digital images.
Digital image manipulation may be better detected by automated screening tools, and researchers should have clear guidance on how to prepare digital images in scientific publications.
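The abstract above reports group differences with Cliff's δ, a nonparametric effect size. As an editorial aside (not part of the original record), a minimal sketch of how Cliff's δ is computed from two samples; the scores below are made-up illustration values, not the study's data:

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: the number of cross-group pairs where x > y,
    minus the number where x < y, divided by all len(xs)*len(ys) pairs.
    Ranges from -1 (all ys larger) to +1 (all xs larger)."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Hypothetical duplicate-detection counts for two small groups:
students = [10, 8, 13, 9, 11]
researchers = [11, 8, 14, 12, 10]
print(cliffs_delta(researchers, students))  # → 0.24
```

A value near 0, as in the study's comparisons (δ = 0.126 and 0.039), indicates substantial overlap between the two groups' score distributions.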
format Article
id doaj-art-32546cb251ba4e14a2988ad64323930a
institution Kabale University
issn 2058-8615
language English
publishDate 2025-08-01
publisher BMC
record_format Article
series Research Integrity and Peer Review
spelling doaj-art-32546cb251ba4e14a2988ad64323930a2025-08-20T03:41:57ZengBMCResearch Integrity and Peer Review2058-86152025-08-011011710.1186/s41073-025-00172-0How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional surveyAntonija Mijatović0Marija Franka Žuljević1Luka Ursić2Nensi Bralić3Miro Vuković4Marija Roguljić5Ana Marušić6Department of Research in Biomedicine and Health, Center for Evidence-Based Medicine, School of Medicine, University of SplitDepartment of Medical Humanities, Center for Evidence-Based Medicine, School of Medicine, University of SplitDepartment of Research in Biomedicine and Health, Center for Evidence-Based Medicine, School of Medicine, University of SplitDepartment of Research in Biomedicine and Health, Center for Evidence-Based Medicine, School of Medicine, University of SplitDepartment of Research in Biomedicine and Health, Center for Evidence-Based Medicine, School of Medicine, University of SplitDepartment of Periodontology, Study of Dental Medicine, School of Medicine, University of SplitDepartment of Research in Biomedicine and Health, Center for Evidence-Based Medicine, School of Medicine, University of SplitAbstract Background Inappropriate manipulations of digital images pose significant risks to research integrity. Here we assessed the capability of students and researchers to detect image duplications in biomedical images. Methods We conducted a pen-and-paper survey involving medical students who had been exposed to research paper images during their studies, as well as active researchers. We asked them to identify duplications in images of Western blots, cell cultures, and histological sections and evaluated their performance based on the number of correctly and incorrectly detected duplications. Results A total of 831 students and 26 researchers completed the survey during the 2023/2024 academic year.
Out of 34 duplications of 21 unique image parts, the students correctly identified a median of 10 duplications (interquartile range [IQR] = 8–13) and made 2 mistakes (IQR = 1–4), whereas the researchers identified a median of 11 duplications (IQR = 8–14) and made 1 mistake (IQR = 1–3). There were no significant differences between the two groups in either the number of correctly detected duplications (p = .271, Cliff’s δ = 0.126) or the number of mistakes (p = .731, Cliff’s δ = 0.039). Both students and researchers identified a higher percentage of duplications in the Western blot images than in cell or tissue images (p < .005 and Cohen’s d = 0.72; p < .005 and Cohen’s d = 1.01, respectively). For students, gender was a weak predictor of performance, with female participants finding slightly more duplications (p < .005, Cliff's δ = 0.158) but making more mistakes (p < .005, Cliff's δ = 0.239). The study year had no significant impact on student performance (p = .209; Cliff's δ = 0.085). Conclusions Despite differences in expertise, both students and researchers demonstrated limited proficiency in detecting duplications in digital images. Digital image manipulation may be better detected by automated screening tools, and researchers should have clear guidance on how to prepare digital images in scientific publications.https://doi.org/10.1186/s41073-025-00172-0Image manipulationImage duplicationsMedical educationCross-sectional survey
spellingShingle Antonija Mijatović
Marija Franka Žuljević
Luka Ursić
Nensi Bralić
Miro Vuković
Marija Roguljić
Ana Marušić
How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey
Research Integrity and Peer Review
Image manipulation
Image duplications
Medical education
Cross-sectional survey
title How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey
title_full How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey
title_fullStr How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey
title_full_unstemmed How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey
title_short How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey
title_sort how good are medical students and researchers in detecting duplications in digital images from research articles a cross sectional survey
topic Image manipulation
Image duplications
Medical education
Cross-sectional survey
url https://doi.org/10.1186/s41073-025-00172-0
work_keys_str_mv AT antonijamijatovic howgoodaremedicalstudentsandresearchersindetectingduplicationsindigitalimagesfromresearcharticlesacrosssectionalsurvey
AT marijafrankazuljevic howgoodaremedicalstudentsandresearchersindetectingduplicationsindigitalimagesfromresearcharticlesacrosssectionalsurvey
AT lukaursic howgoodaremedicalstudentsandresearchersindetectingduplicationsindigitalimagesfromresearcharticlesacrosssectionalsurvey
AT nensibralic howgoodaremedicalstudentsandresearchersindetectingduplicationsindigitalimagesfromresearcharticlesacrosssectionalsurvey
AT mirovukovic howgoodaremedicalstudentsandresearchersindetectingduplicationsindigitalimagesfromresearcharticlesacrosssectionalsurvey
AT marijaroguljic howgoodaremedicalstudentsandresearchersindetectingduplicationsindigitalimagesfromresearcharticlesacrosssectionalsurvey
AT anamarusic howgoodaremedicalstudentsandresearchersindetectingduplicationsindigitalimagesfromresearcharticlesacrosssectionalsurvey