Hearing faces and seeing accents?

2020 ◽  
Vol 46 (1) ◽  
pp. 1-20
Author(s):  
Chen-Wei Felix Yu

Abstract In this paper, the McGurk effect displayed by native Mandarin speakers is examined in light of reaction time (RT) and response types. The experiment incorporated two within-subject factors, FACE and ACCENT, and one between-subject factor, English Proficiency. The results showed that FACE and ACCENT, but not English Proficiency, affected the participants’ RTs and response types. When a foreign ACCENT was dubbed onto a familiar FACE, RTs were longest and the McGurk effect was most likely to occur. Other compositions of McGurk stimuli did not yield different RTs but did induce different response types. When the FACE was foreign, regardless of ACCENT, participants tended to respond with the perceptual illusion. The author concluded that the perceiver’s expectations influence multisensory integration, accounting for the longer RTs and the appearance of the McGurk effect.
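
A minimal sketch of how McGurk-trial responses might be scored into the response types discussed above; the classic audio /ba/ + visual /ga/ → fused /da/ pairing is illustrative, and all stimulus labels here are hypothetical:

```python
# Minimal sketch: classifying a participant's syllable report on one
# McGurk trial. Stimulus labels are hypothetical examples.

def classify_response(response: str, audio: str, visual: str, fusion: str) -> str:
    """Label a reported syllable as auditory, visual, or McGurk (fused)."""
    if response == audio:
        return "auditory"   # heard the dubbed soundtrack
    if response == visual:
        return "visual"     # captured by the seen articulation
    if response == fusion:
        return "mcgurk"     # perceptual illusion (fused percept)
    return "other"

# Classic pairing: audio /ba/ dubbed onto visual /ga/ is often fused as /da/.
print(classify_response("da", audio="ba", visual="ga", fusion="da"))  # mcgurk
```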

2021 ◽  
pp. 1-31
Author(s):  
Haoruo Zhang ◽  
Norbert Vanek

Abstract In response to negative yes–no questions (e.g., Doesn’t she like cats?), typical English answers (Yes, she does/No, she doesn’t) peculiarly vary from those in Mandarin (No, she does/Yes, she doesn’t). What are the processing consequences of these markedly different conventionalized linguistic responses to achieve the same communicative goals? And if English and Mandarin speakers process negative questions differently, to what extent does processing change in Mandarin–English sequential bilinguals? Two experiments addressed these questions. Mandarin–English bilinguals and English and Mandarin monolinguals (N = 40/group) were tested in a production experiment (Expt. 1), in which they formulated answers to positive/negative yes–no questions. The same participants were also tested in a comprehension experiment (Expt. 2), in which they answered positive/negative questions with time-measured yes/no button presses. In both experiments, English and Mandarin speakers showed language-specific yes/no answers to negative questions, and English speakers showed a reaction-time advantage over Mandarin speakers in negation conditions. Bilinguals’ performance fell between the L1 and L2 baselines. These findings are suggestive of language-specific processing of negative questions. They also signal that the ways in which bilinguals process negative questions are susceptible to restructuring driven by the second language.
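
A minimal sketch of the two conventionalized answering systems contrasted above, assuming the simplified rule that English particles track the polarity of the answer clause while Mandarin particles mark (dis)agreement with the negative question’s proposition:

```python
# Minimal sketch of the two answering systems described in the abstract.
# English: "Yes, she does" / "No, she doesn't" (particle follows the answer
# clause). Mandarin: "No, she does" / "Yes, she doesn't" (particle marks
# agreement or disagreement with the negative question).

def answer(system: str, question_negative: bool, fact_is_positive: bool) -> str:
    if system == "english" or not question_negative:
        particle = "yes" if fact_is_positive else "no"
    else:  # Mandarin-style reply to a negative question
        particle = "no" if fact_is_positive else "yes"
    clause = "she does" if fact_is_positive else "she doesn't"
    return f"{particle.capitalize()}, {clause}"

# "Doesn't she like cats?" when she does in fact like cats:
print(answer("english", question_negative=True, fact_is_positive=True))   # Yes, she does
print(answer("mandarin", question_negative=True, fact_is_positive=True))  # No, she does
```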


2019 ◽  
Vol 9 (12) ◽  
pp. 362
Author(s):  
Antonia M. Karellas ◽  
Paul Yielder ◽  
James J. Burkitt ◽  
Heather S. McCracken ◽  
Bernadette A. Murphy

Multisensory integration (MSI) is necessary for the efficient execution of many everyday tasks. Alterations in sensorimotor integration (SMI) have been observed in individuals with subclinical neck pain (SCNP). Altered audiovisual MSI has previously been demonstrated in this population using performance measures such as reaction time. However, neurophysiological techniques have not been combined with performance measures in the SCNP population to determine differences in neural processing that may contribute to these behavioral characteristics. Electroencephalography (EEG) event-related potentials (ERPs) have been used successfully in recent MSI studies to show differences in neural processing between clinical populations. This study combined behavioral and ERP measures to characterize MSI differences between healthy and SCNP groups. EEG was recorded as 24 participants performed 8 blocks of a simple reaction time (RT) MSI task, with each block consisting of 34 auditory (A), visual (V), and audiovisual (AV) trials. Participants responded to the stimuli by pressing a response key. Both groups responded fastest in the AV condition, and the healthy group demonstrated significantly faster RTs in the AV and V conditions. There were significant group differences in neural activity from 100–140 ms post-stimulus onset, with the control group demonstrating greater MSI. These differences in brain activity and RT indicate neurophysiological alterations in how individuals with SCNP process audiovisual stimuli, suggesting that SCNP alters MSI. This study presents novel EEG findings demonstrating MSI differences in a group of individuals with SCNP.
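
A minimal sketch of the behavioral MSI signature used in such tasks: mean RT per modality and the audiovisual facilitation relative to the faster unimodal condition (all RT values hypothetical):

```python
import numpy as np

# Minimal sketch (hypothetical RTs, in ms). The behavioral marker of
# multisensory integration is faster responding to audiovisual (AV)
# stimuli than to the faster of the two unimodal conditions.
rt = {
    "A":  np.array([312, 298, 305, 290, 321]),
    "V":  np.array([285, 279, 301, 288, 276]),
    "AV": np.array([255, 262, 249, 258, 266]),
}

means = {cond: x.mean() for cond, x in rt.items()}
redundancy_gain = min(means["A"], means["V"]) - means["AV"]
print(means)
print(f"AV facilitation: {redundancy_gain:.1f} ms")
```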


1970 ◽  
Vol 13 (1) ◽  
pp. 203-217 ◽  
Author(s):  
Isabelle Rapin ◽  
Peter Steinherz

A substantial part of reaction time (RT), the time elapsed between presentation of a stimulus and the subject’s response, reflects a central delay during which the brain processes the input and elaborates a response. Low stimulus intensity, inefficient central processing, and lack of motivation are among the factors that prolong RT. RT was readily measured in 34 children, aged 5½ and older, attending a school for the deaf. Rapid responses to light and to light plus sound, and all responses to sound alone, were rewarded. Four of twelve children initially unresponsive to sound learned to respond. When sound was attenuated, plots of RT gave information on the efficiency of responses to suprathreshold stimuli and warned that threshold was approaching 5–10 dB before it was reached. Such curves would increase the face validity of clinical audiometric threshold estimates. In severely deaf children, somatosensory stimulation by 500-Hz tones yielded RT curves and thresholds very similar to those obtained with aural presentation of the sound, casting doubt on the auditory origin of residual hearing in the low-frequency range. Somatosensory responses to 1000- and 2000-Hz tones were rare.
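
A minimal sketch of the RT-based warning described above: as sound is attenuated toward threshold, a steep rise in median RT flags the approach before responses fail (all values hypothetical):

```python
import numpy as np

# Minimal sketch (hypothetical data): median RT at each attenuation step.
# Per the abstract, RT lengthens sharply 5-10 dB before threshold is
# reached, so a steep rise in the RT curve serves as an early warning.
attenuation_db = np.array([0, 10, 20, 30, 40, 50])   # re: starting level
median_rt_ms   = np.array([240, 245, 250, 262, 340, 520])

rise = np.diff(median_rt_ms)
warning_levels = attenuation_db[1:][rise > 50]   # steps with a >50 ms jump
print(f"RT warns of approaching threshold at: {warning_levels} dB attenuation")
```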


Perception ◽  
2016 ◽  
Vol 46 (5) ◽  
pp. 624-631 ◽  
Author(s):  
Andreas M. Baranowski ◽  
H. Hecht

Almost a hundred years ago, the Russian filmmaker Lev Kuleshov conducted his now-famous editing experiment in which different objects were added to a given film scene featuring a neutral face. The audience is said to have interpreted the unchanged facial expression as a function of the added object (e.g., a bowl of soup made the face express hunger). This interaction effect has been dubbed the “Kuleshov effect.” In the current study, we explored the role of sound in the evaluation of facial expressions in films. Thirty participants watched different clips of faces that were intercut with neutral scenes featuring either happy music, sad music, or no music at all. This was crossed with happy, sad, or neutral facial expressions. We found that the music significantly influenced participants’ emotional judgments of facial expression. Thus, the intersensory effects of music are more specific than previously thought. They alter the evaluation of film scenes and can give meaning to ambiguous situations.
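
A minimal sketch of the crossed design described above, enumerating the 3 (music) × 3 (facial expression) stimulus cells (condition labels taken from the abstract):

```python
from itertools import product

# Minimal sketch: the fully crossed design described above -- three
# soundtrack conditions combined with three facial expressions.
music = ["happy", "sad", "none"]
face  = ["happy", "sad", "neutral"]

conditions = list(product(music, face))   # 9 music x face cells
for m, f in conditions:
    print(f"music={m:5s}  face={f}")
```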


2018 ◽  
Vol 71 (6) ◽  
pp. 1396-1404 ◽  
Author(s):  
Catherine Bortolon ◽  
Siméon Lorieux ◽  
Stéphane Raffard

Self-face recognition has been widely explored in the past few years. Nevertheless, the current literature relies on standardized photographs, which do not represent daily-life face recognition. We therefore aim, for the first time, to evaluate self-face processing in healthy individuals using natural/ambient images, which contain variations in the environment and in the face itself. In total, 40 undergraduate and graduate students performed a forced delayed-matching task including images of their own face and of friends’, famous, and unknown individuals’ faces. For both reaction time and accuracy, participants were faster and more accurate when matching different images of their own face than images of famous and unfamiliar faces. Nevertheless, no significant differences were found between self-face and friend-face or between friend-face and famous-face. Participants were also faster and more accurate when matching friend and famous faces than unfamiliar faces. Our results suggest that the faster and more accurate responses to the self-face might be better explained by a familiarity effect – that is, (1) the result of frequent exposure to one’s own image in mirrors and photos, (2) a more robust mental representation of one’s own face, and (3) strong face recognition units, as for other familiar faces.
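
A minimal sketch of how accuracy and median RT per familiarity condition might be summarized for such a matching task (trial data hypothetical):

```python
import pandas as pd

# Minimal sketch (hypothetical trials): per-condition accuracy and median
# RT for a delayed matching task with the four face categories used above.
trials = pd.DataFrame({
    "condition": ["self", "self", "friend", "famous", "unknown", "unknown"],
    "correct":   [1, 1, 1, 0, 1, 0],
    "rt_ms":     [540, 565, 602, 647, 688, 701],
})

summary = trials.groupby("condition").agg(
    accuracy=("correct", "mean"),
    median_rt=("rt_ms", "median"),
)
print(summary)
```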


2005 ◽  
Vol 58 (7) ◽  
pp. 1325-1338 ◽  
Author(s):  
Andrea M. Philipp ◽  
Iring Koch

When participants perform a sequence of different tasks, it is assumed that engagement in one task leads to inhibition of the previous task. This inhibition persists and impairs performance when participants switch back to the (still inhibited) task after only one intermediate trial. Previous task-switching studies on this issue have defined tasks at the level of stimulus categorization. In our experiments, we used different response modalities to define tasks. Participants always used the same stimulus categorization (e.g., categorize a digit as odd vs. even) but had to give a vocal, finger, or foot response (task A, B, or C). Our results showed higher reaction times and error rates in ABA sequences than in CBA sequences, indicating an n − 2 repetition cost as a marker of persisting task inhibition. We assume that different response modalities can define a task and are inhibited in a “task switch” in the same way as stimulus categories are inhibited.
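
A minimal sketch of the n − 2 repetition cost, computed as the RT difference between ABA and CBA sequences (task labels and RTs hypothetical):

```python
import numpy as np

# Minimal sketch: the n-2 repetition cost is the RT difference between
# ABA triplets (returning to a just-abandoned, still-inhibited task) and
# CBA triplets. Task labels and RTs below are hypothetical.
tasks = ["vocal", "finger", "vocal", "foot", "finger", "vocal"]
rt_ms = [650,     700,      720,     690,    705,      660]

aba, cba = [], []
for n in range(2, len(tasks)):
    if tasks[n] == tasks[n - 1]:
        continue                      # skip immediate task repetitions
    (aba if tasks[n] == tasks[n - 2] else cba).append(rt_ms[n])

cost = np.mean(aba) - np.mean(cba)    # positive = persisting inhibition
print(f"n-2 repetition cost: {cost:.0f} ms")
```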


Author(s):  
Elke B. Lange ◽  
Jens Fünderich ◽  
Hartmut Grimm

Abstract We investigated how visual and auditory information contribute to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented in three different modes: auditory, visual, and audio–visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio–visual stimuli. When expressive intensity or emotional content was evaluated, a clear effect of visual dominance emerged. Experts made more use of the visual cues than laypersons. Consistency measures between uni-modal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness served as a control: the uni-modal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers’ orofacial movements are dominated by sound production, their facial expressions can communicate the emotions composed into the music, and observers do not fall back on the audio information instead. Studies such as ours are important for understanding multisensory integration in applied settings.
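
A minimal sketch of one way to quantify the visual dominance reported above: checking whether audio–visual ratings track the visual-only ratings more closely than the audio-only ones (all ratings hypothetical):

```python
import numpy as np

# Minimal sketch (hypothetical expressiveness ratings): visual dominance
# shows up as audio-visual (AV) ratings correlating more strongly with
# the visual-only ratings than with the audio-only ratings.
audio = np.array([5.1, 5.0, 4.8, 5.2, 4.9, 5.0])   # sound kept expressive
video = np.array([5.5, 2.1, 5.8, 2.4, 5.6, 2.2])   # expressive vs suppressed face
av    = np.array([5.4, 2.6, 5.6, 2.9, 5.3, 2.7])   # bimodal presentations

r_visual = np.corrcoef(av, video)[0, 1]
r_audio  = np.corrcoef(av, audio)[0, 1]
print(f"AV~V r = {r_visual:.2f}, AV~A r = {r_audio:.2f}")  # visual dominance if r_visual >> r_audio
```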

