Toward the validation of crowdsourced experiments for lightness perception.

Crowdsourcing platforms have been used to study a range of perceptual stimuli, such as the graphical perception of scatterplots and various aspects of human color perception. Given the lack of control over a crowdsourced participant's experimental setup, there are valid concerns about using crowdsourcing for color studies, since perception of the stimuli depends strongly on how they are presented. Here, we propose that the error introduced by a crowdsourced experimental design can be effectively averaged out, because the crowdsourced experiment can be accommodated by the Thurstonian model as the convolution of two normal distributions: one that is perceptual in nature and one that captures the error due to variability in stimulus presentation. Based on this, we provide a mathematical estimate of the sample size needed for a crowdsourced experiment to have the same power as the corresponding in-person study. We tested this claim by replicating a large-scale, crowdsourced study of human lightness perception, which used a diverse sample, with a highly controlled, in-person study whose sample was drawn from psychology undergraduates. Our claim was supported by the agreement between the results of the two studies. These findings suggest that, with a sufficient sample size, color vision studies may be completed online, giving access to a larger and more representative sample. With this framework at hand, experimentalists have the validation that choosing either many online participants or few in-person participants will not sacrifice the impact of their results.
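
The sample-size argument in the abstract can be illustrated with a short sketch. Assuming the presentation error adds an independent normal component to the Thurstonian perceptual scale, the convolved distribution has the summed variance, and matching the precision of the in-person study then amounts to inflating the online sample by the variance ratio. The function name, the parameter values, and the criterion of matching the standard error of the mean are illustrative assumptions here, not formulas taken from the paper.

import numpy as np

# Sketch only: in a Thurstonian framing, each judged difference is a draw
# from a normal distribution. In the lab, only perceptual noise contributes
# (variance sigma_p**2); online, presentation variability adds an independent
# normal component (variance sigma_e**2), so the convolved distribution has
# variance sigma_p**2 + sigma_e**2.

def crowd_sample_size(n_lab, sigma_p, sigma_e):
    """Hypothetical helper: crowdsourced N giving the same standard error of
    the mean judged difference as an in-person study with n_lab participants,
    assuming independent, additive normal noise sources."""
    inflation = (sigma_p**2 + sigma_e**2) / sigma_p**2
    return int(np.ceil(n_lab * inflation))

# Simulation check with made-up values: with the inflated N, the online
# estimate of a true mean difference mu is about as precise as the lab one.
rng = np.random.default_rng(0)
mu, sigma_p, sigma_e, n_lab = 0.5, 1.0, 0.8, 50
n_crowd = crowd_sample_size(n_lab, sigma_p, sigma_e)  # 82 with these values

lab = rng.normal(mu, sigma_p, size=(10_000, n_lab)).mean(axis=1)
crowd = rng.normal(mu, np.sqrt(sigma_p**2 + sigma_e**2),
                   size=(10_000, n_crowd)).mean(axis=1)
print(n_crowd, lab.std(), crowd.std())  # the two standard errors roughly match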

Bibliographic Details
Main Authors: Emily N Stark, Terece L Turton, Jonah Miller, Elan Barenholtz, Sang Hong, Roxana Bujack
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2024-01-01
Series: PLoS ONE
ISSN: 1932-6203
Online Access: https://doi.org/10.1371/journal.pone.0315853