Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching
In this short article we present our manual annotation of the eye movement events in a subset of the large-scale eye tracking data set Hollywood2. Our labels include fixations, saccades, and smooth pursuits, as well as a noise event type (the latter representing blinks, loss of tracking, or physically implausible signals). To achieve more consistent annotations, the gaze samples were first labelled by a novice rater based on rudimentary algorithmic suggestions and subsequently corrected by an expert rater. Overall, we annotated eye movement events in the recordings corresponding to 50 randomly selected test set clips and 6 training set clips from Hollywood2, which were viewed by 16 observers and amount to a total of approximately 130 minutes of gaze data. In these labels, 62.4% of the samples were attributed to fixations, 9.1% to saccades, and, notably, 24.2% to pursuit (the remainder marked as noise). After evaluating 15 published eye movement classification algorithms on our newly collected annotated data set, we found that the most recent algorithms perform very well on average, and even reach human-level labelling quality for fixations and saccades, but all leave considerably more room for improvement when it comes to smooth pursuit classification. The data set is made available at https://gin.g-node.org/ioannis.agtzidis/hollywood2_em.
Saved in:
| Main Authors: | Ioannis Agtzidis, Mikhail Startsev, Michael Dorr |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020-12-01 |
| Series: | Journal of Eye Movement Research |
| Subjects: | Eye tracking; eye movement; gaze; smooth pursuit; eye movement classification; hand-labelling |
| Online Access: | https://bop.unibe.ch/JEMR/article/view/6008 |
| author | Ioannis Agtzidis; Mikhail Startsev; Michael Dorr |
|---|---|
| collection | DOAJ |
| description | In this short article we present our manual annotation of the eye movement events in a subset of the large-scale eye tracking data set Hollywood2. Our labels include fixations, saccades, and smooth pursuits, as well as a noise event type (the latter representing blinks, loss of tracking, or physically implausible signals). To achieve more consistent annotations, the gaze samples were first labelled by a novice rater based on rudimentary algorithmic suggestions and subsequently corrected by an expert rater. Overall, we annotated eye movement events in the recordings corresponding to 50 randomly selected test set clips and 6 training set clips from Hollywood2, which were viewed by 16 observers and amount to a total of approximately 130 minutes of gaze data. In these labels, 62.4% of the samples were attributed to fixations, 9.1% to saccades, and, notably, 24.2% to pursuit (the remainder marked as noise). After evaluating 15 published eye movement classification algorithms on our newly collected annotated data set, we found that the most recent algorithms perform very well on average, and even reach human-level labelling quality for fixations and saccades, but all leave considerably more room for improvement when it comes to smooth pursuit classification. The data set is made available at https://gin.g-node.org/ioannis.agtzidis/hollywood2_em. |
| format | Article |
| id | doaj-art-2e0a95f16e4b4b9eb79641427e5d8710 |
| institution | OA Journals |
| issn | 1995-8692 |
| language | English |
| publishDate | 2020-12-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Journal of Eye Movement Research |
| doi | 10.16910/jemr.13.4.5 |
| volume/issue | 13(4), 2020-12-01 |
| affiliation | Technical University of Munich, Germany (all three authors) |
| title | Two hours in Hollywood: A manually annotated ground truth data set of eye movements during movie clip watching |
| topic | Eye tracking; eye movement; gaze; smooth pursuit; eye movement classification; hand-labelling |
| url | https://bop.unibe.ch/JEMR/article/view/6008 |
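For readers who want to work with the data set: the repository is hosted on GIN (G-Node Infrastructure), which is git-based, so it can typically be fetched with `git clone https://gin.g-node.org/ioannis.agtzidis/hollywood2_em` (large files may additionally require git-annex or the GIN client). Below is a minimal sketch of how one might tally the per-event sample proportions reported in the abstract (62.4% fixations, 9.1% saccades, 24.2% pursuit). The CSV layout, file pattern, and `label` column assumed here are hypothetical placeholders for illustration; the actual file format is documented in the repository itself.

```python
# Hedged sketch: counting gaze samples per hand-labelled event type.
# Assumes (hypothetically) one CSV file per recording with a "label"
# column; check the repository README for the real layout and label
# encoding before using this.
import csv
import glob
from collections import Counter

# Hypothetical label names; the data set may encode events differently.
LABELS = ("fixation", "saccade", "pursuit", "noise")

def label_proportions(pattern: str) -> dict:
    """Return the percentage of gaze samples per event label
    across all files matching the given glob pattern."""
    counts = Counter()
    for path in glob.glob(pattern, recursive=True):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row["label"]] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {lab: 100.0 * counts[lab] / total for lab in LABELS}

if __name__ == "__main__":
    # With the published labels, this should roughly reproduce the
    # proportions quoted in the abstract, the remainder being noise.
    for label, pct in label_proportions("hollywood2_em/**/*.csv").items():
        print(f"{label}: {pct:.1f}%")
```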