A Meta-Analysis of Infants’ Ability to Perceive Audio-Visual Congruence for Speech Sounds
This paper investigates the extent to which infants can integrate synchronous speech information across different modalities. A meta-analysis of 24 studies reporting 92 separate effect size measures suggests that infants possess a robust ability to perceive audio-visual congruence for speech sounds. Applying a hierarchical Bayesian robust regression model to the data indicates a moderate positive effect size (0.35, CI [0.21, 0.50]). Moderator analyses suggest that infants’ audio-visual matching ability for speech sounds emerges at an early point in the process of language acquisition and remains stable for both native and non-native speech throughout early development. A sensitivity analysis of the meta-analytic data indicates that a moderate publication bias for significant results could shift the lower bound of the credible interval to include null effects. Based on these findings, we outline recommendations for new lines of enquiry and suggest ways to improve the replicability of results in future investigations.
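To give a concrete sense of the modelling approach described above, the sketch below shows one way a hierarchical Bayesian robust random-effects meta-analysis could be set up in PyMC. The effect sizes, standard errors, priors, and variable names are illustrative assumptions for demonstration only; this is not the paper's data or analysis code, and the actual model additionally included moderator terms.

```python
# Minimal sketch of a hierarchical Bayesian robust (Student-t) random-effects
# meta-analysis. The data below are synthetic placeholders, not the effect
# sizes analysed in the paper.
import numpy as np
import pymc as pm
import arviz as az

# Hypothetical per-study effect sizes and their standard errors.
effect_sizes = np.array([0.42, 0.10, 0.55, 0.31, -0.05, 0.48])
std_errors = np.array([0.18, 0.22, 0.25, 0.15, 0.20, 0.30])

with pm.Model() as meta_model:
    # Overall (population-level) effect and between-study heterogeneity.
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)
    tau = pm.HalfNormal("tau", sigma=0.5)

    # Heavy-tailed distribution of the true study effects makes the pooled
    # estimate robust to outlying studies.
    nu = pm.Exponential("nu", 1 / 30) + 2
    theta = pm.StudentT("theta", nu=nu, mu=mu, sigma=tau,
                        shape=len(effect_sizes))

    # Observed effect sizes, with known sampling error per study.
    pm.Normal("obs", mu=theta, sigma=std_errors, observed=effect_sizes)

    trace = pm.sample(2000, tune=2000, target_accept=0.9, random_seed=1)

# Posterior summary for the pooled effect; a credible interval excluding zero
# would correspond to the positive overall effect reported in the abstract.
print(az.summary(trace, var_names=["mu", "tau"]))
```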