Autistic adults anticipate and integrate meaning based on the speaker’s voice: Evidence from eye-tracking and event-related potentials
Typically developing (TD) individuals rapidly integrate information about a speaker and their intended meaning while processing sentences online. We examined whether the same processes are activated in autistic adults, and tested their time course in two pre-registered experiments. Experiment 1 employed the visual world paradigm. Participants listened to sentences in which the speaker’s voice and message were either consistent or inconsistent (e.g. “When we go shopping, I usually look for my favourite wine”, spoken by an adult or a child), while concurrently viewing visual scenes containing consistent and inconsistent objects (e.g. wine and sweets). All participants were slower to select the mentioned object in the inconsistent condition. Importantly, eye movements showed a visual bias towards the voice-consistent object well before the disambiguating word, indicating that autistic adults rapidly use the speaker’s voice to anticipate the intended meaning. However, this target bias emerged earlier in the TD group than in the autism group (2240 ms vs. 1800 ms before disambiguation). Experiment 2 recorded event-related potentials (ERPs) to explore speaker-meaning integration processes. Participants listened to sentences as described above, and ERPs were time-locked to the onset of the target word; a control condition included a semantic anomaly. Results revealed an enhanced N400 for inconsistent speaker-meaning sentences that was comparable, in both groups, to that elicited by anomalous sentences. Overall, contrary to accounts that have characterised autism in terms of a local processing bias and pragmatic dysfunction, autistic adults were unimpaired at integrating multiple sources of linguistic information, and were comparably sensitive to speaker-meaning inconsistency effects.