Crowdsourcing as a novel technique for retinal fundus photography classification: analysis of images in the EPIC Norfolk cohort on behalf of the UK Biobank Eye and Vision Consortium.

Bibliographic Details
Main Authors: Danny Mitry, Tunde Peto, Shabina Hayat, James E Morgan, Kay-Tee Khaw, Paul J Foster
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2013-01-01
Series: PLoS ONE
Online Access: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0071154&type=printable
Description: <h4>Aim</h4>Crowdsourcing is the process of outsourcing numerous tasks to many untrained individuals. Our aim was to assess the performance and repeatability of crowdsourcing for the classification of retinal fundus photographs.<h4>Methods</h4>One hundred retinal fundus photographs with pre-determined disease criteria were selected by experts from a large cohort study. After reading brief instructions and an example classification, knowledge workers (KWs) from a crowdsourcing platform were asked to classify each image as normal or abnormal, with grades of severity. Each image was classified 20 times by different KWs. Four study designs were examined to assess the effect of varying incentive and KW experience on classification accuracy. All study designs were conducted twice to examine repeatability. Performance was assessed by comparing sensitivity, specificity and the area under the receiver operating characteristic curve (AUC).<h4>Results</h4>Without restriction on eligible participants, two thousand classifications of 100 images were received in under 24 hours at minimal cost. In trial 1, all study designs had an AUC (95% CI) of 0.701 (0.680-0.721) or greater for normal/abnormal classification; the highest AUC (95% CI) was 0.757 (0.738-0.776), for KWs with moderate experience. Comparable results were observed in trial 2. In trial 1, between 64% and 86% of abnormal images were correctly classified by over half of all KWs; in trial 2, this ranged from 74% to 97%. Sensitivity was ≥96% for normal versus severely abnormal detections across all trials. Sensitivity for normal versus mildly abnormal varied between 61% and 79% across trials.<h4>Conclusions</h4>With minimal training, crowdsourcing is an accurate, rapid and cost-effective method of retinal image analysis that demonstrates good repeatability. Larger studies with more comprehensive participant training are needed to explore the utility of this technique in large-scale medical image analysis.
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0071154
Collection: DOAJ
Institution: Kabale University
Record ID: doaj-art-98ab8e11f5e24b52baf406ebdea98529
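The evaluation described in the abstract — 20 independent worker labels per image, aggregated and scored against an expert reference with an AUC — can be illustrated with a minimal sketch. This is not the study's analysis code; the vote count, image counts and per-worker error rates below are illustrative assumptions, and the AUC is computed directly as the Mann-Whitney statistic.

```python
import random

def abnormal_fraction(votes):
    # Aggregate a crowd's binary votes (1 = abnormal) into a score:
    # the fraction of workers who flagged the image as abnormal.
    return sum(votes) / len(votes)

def auc(scores_abnormal, scores_normal):
    # AUC as the Mann-Whitney statistic: the probability that a randomly
    # chosen abnormal image scores higher than a randomly chosen normal
    # one, counting ties as 0.5.
    wins = 0.0
    for a in scores_abnormal:
        for n in scores_normal:
            if a > n:
                wins += 1.0
            elif a == n:
                wins += 0.5
    return wins / (len(scores_abnormal) * len(scores_normal))

random.seed(42)
N_VOTES = 20  # each image classified 20 times, as in the study design

# Hypothetical crowd: workers flag a truly abnormal image with
# probability 0.75 and a truly normal image with probability 0.25.
abnormal = [abnormal_fraction([random.random() < 0.75 for _ in range(N_VOTES)])
            for _ in range(50)]
normal = [abnormal_fraction([random.random() < 0.25 for _ in range(N_VOTES)])
          for _ in range(50)]

print(round(auc(abnormal, normal), 3))
```

With well-separated error rates the aggregated scores discriminate almost perfectly; the study's reported AUCs (0.70-0.76) correspond to much noisier, overlapping vote distributions.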