Exploring, walking, and interacting in virtual reality with simulated low vision: a living contextual dataset



Bibliographic Details
Main Authors: Hui-Yin Wu, Florent Robert, Franz Franco Gallo, Kateryna Pirkovets, Clément Quéré, Johanna Delachambre, Stephen Ramanoël, Auriane Gros, Marco Winckler, Lucile Sassatelli, Meggy Hayotte, Aline Menin, Pierre Kornprobst
Format: Article
Language: English
Published: Nature Portfolio 2025-02-01
Series: Scientific Data
Online Access: https://doi.org/10.1038/s41597-025-04560-5
Description
Summary: We present the CREATTIVE3D dataset of human interaction and navigation at road crossings in virtual reality. The dataset offers three main breakthroughs: (1) it is the largest dataset of human motion in fully-annotated scenarios (40 hours, 2.6 million poses); (2) it is captured in dynamic 3D scenes with multivariate data: gaze, physiology, and motion; and (3) it investigates the impact of simulated low-vision conditions using dynamic eye tracking under real-walking and simulated-walking conditions. Extensive effort has been made to ensure the transparency, usability, and reproducibility of the study and collected data, even under extremely complex study conditions involving 6-degrees-of-freedom interactions and multiple sensors. We believe this will allow studies using the same or similar protocols to be compared with existing results, and will enable much finer-grained analysis of individual nuances of user behavior across datasets or study designs. This is what we call a living contextual dataset.
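As a quick sanity check on the figures reported in the summary (40 hours of recording, 2.6 million poses), the implied average pose-capture rate can be computed directly. The constant names below are illustrative only; the dataset's actual per-sensor sampling rates are not stated in this record.

```python
# Sanity-check the implied average pose-capture rate from the
# figures reported in the summary (illustrative names only).
HOURS_RECORDED = 40          # total recording time reported
TOTAL_POSES = 2_600_000      # total pose count reported

seconds = HOURS_RECORDED * 3600
rate_hz = TOTAL_POSES / seconds
print(f"Implied average capture rate: {rate_hz:.1f} Hz")  # ~18.1 Hz
```

This back-of-the-envelope rate is an average across the whole corpus, not a claim about any individual sensor's frequency.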
ISSN:2052-4463