Facial cues to anger affect meaning interpretation of subsequent spoken prosody

In everyday life, visual information often precedes auditory information and thus influences its evaluation (e.g., seeing somebody’s angry face makes us expect them to speak to us angrily). Using the cross-modal affective priming paradigm, we investigated the influence of facial gestures when the subsequent acoustic signal is emotionally unclear (neutral or produced with a limited repertoire of cues to anger). Auditory stimuli spoken with angry or neutral prosody were presented in isolation or preceded by pictures showing emotionally related or unrelated facial gestures (angry or neutral faces). In two experiments, participants rated the valence and emotional intensity of the auditory stimuli only. These stimuli were created from acted speech from movies, delexicalized via speech synthesis, and then manipulated by partially preserving or degrading their global spectral characteristics. All participants relied on facial cues when the auditory stimuli were acoustically impoverished; however, only a subgroup of participants used angry faces to interpret subsequent neutral prosody. Thus, listeners are sensitive to facial cues when evaluating what they are about to hear, especially when the auditory input is less reliable. These results extend findings on face perception to the auditory domain and confirm inter-individual variability in the weighting of different sources of emotional information.

Bibliographic Details
Main Authors: Caterina Petrone, Francesca Carbone, Nicolas Audibert, Maud Champagne-Lavau
Format: Article
Language:English
Published: Cambridge University Press 2024-12-01
Series:Language and Cognition
Subjects: cross-modal affective priming; emotional meaning; facial gestures; French; spoken prosody
Online Access:https://www.cambridge.org/core/product/identifier/S1866980824000036/type/journal_article
ISSN: 1866-9808, 1866-9859
DOI: 10.1017/langcog.2024.3
Volume/Pages: 16 (2024), pp. 1214–1237
Collection: DOAJ
Author affiliations:
Caterina Petrone (https://orcid.org/0000-0002-2613-7609): CNRS, LPL, UMR 7309, Aix-Marseille Université, Aix-en-Provence, France
Francesca Carbone: CNRS, LPL, UMR 7309, Aix-Marseille Université, Aix-en-Provence, France; School of Psychology, University of Kent, Canterbury, UK
Nicolas Audibert: Laboratoire de Phonétique et Phonologie, CNRS & Sorbonne Nouvelle, Paris, France
Maud Champagne-Lavau: CNRS, LPL, UMR 7309, Aix-Marseille Université, Aix-en-Provence, France