Cross-modal matching of monosyllabic and bisyllabic items varying in phonotactic probability and lexicality

In two experiments, English words and non-words varying in phonotactic probability were cross-modally compared in an AB matching task. Participants were presented with either visual-only (V) speech (a talker's speaking face) or auditory-only (A) speech (a talker's voice) in the A position. Stimuli in the B position were of the opposing modality (counterbalanced). Experiment 1 employed monosyllabic items, while Experiment 2 employed bisyllabic items. Accuracy measures for Experiment 1 revealed main effects of phonotactic probability and presentation order (A-V vs. V-A), while Experiment 2 revealed main effects of lexicality and presentation order. Reaction time measures for Experiment 1 revealed an interaction between probability and lexicality, along with a main effect of presentation order. Reaction time measures for Experiment 2 revealed two two-way interactions (probability × lexicality and probability × presentation order), along with significant main effects. Overall, the data suggest that (1) cross-modal research can be conducted with various presentation orders, (2) perception is guided by the most predictive components of a stimulus, and (3) more complex stimuli can support the results from experiments using simpler stimuli while also uncovering new information.

Bibliographic Details
Main Author: Kauyumari Sanchez
Format: Article
Language: English
Published: Frontiers Media S.A., 2025-02-01
Series: Frontiers in Language Sciences
ISSN: 2813-4605
Subjects: audio-visual speech perception; multisensory perception; cross-modal speech; psycholinguistics; language processing
Online Access: https://www.frontiersin.org/articles/10.3389/flang.2025.1488399/full