Laurel–Yanny percept affects the speech-to-song illusion, but musical anhedonia does not

Bibliographic Details
Main Authors: Nicholas Kathios, Benjamin M. Kubit, Nicole Grout, Jake Everard, Emma Zachary, Arushi Sankhe, Adam Tierney, Aniruddh D. Patel, Psyche Loui
Format: Article
Language: English
Published: Nature Portfolio, 2025-08-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-15592-7
Description

Summary: Some spoken phrases, when heard repeatedly, seem to transform into music, in a classic finding known as the speech-to-song illusion. Repeated listening to musical phrases can also lead to changes in liking, attributed to learning-related reduction of prediction errors generated by the dopaminergic reward system. Does repeating spoken phrases also result in changes in liking? Here we tested whether repeated presentation of spoken phrases can lead to changes in liking as well as in musicality, and whether these changes might vary with musical reward sensitivity. We also asked whether perceptual biases towards low versus high frequencies, as assessed using the Laurel/Yanny illusion, are linked to changes in musicality and liking with repetition. Results show a general reduction in liking for spoken phrases with repetition, but less so for phrases that transition more readily into song. People who upweight low frequencies in speech perception (and so perceive Laurel rather than Yanny) are more susceptible to changes in musicality with phrase repetition and marginally less susceptible to changes in liking. Individuals with musical anhedonia still perceived the speech-to-song illusion, but liked all spoken phrases less; this did not interact with repetition. These results show a dissociation between perception and emotional sensitivity to music, and support a model of frequency-weighted internal predictions for acoustic signals that might drive the speech-to-song illusion. Rather than treating illusions as isolated curiosities, we can use them as a window into theoretical debates surrounding models of perception and emotion.
ISSN: 2045-2322