Stimulus Onset Modulates Auditory and Visual Dominance

Vision, 2020, Vol. 4 (1), p. 14
Author(s): Margeaux Ciraolo, Samantha O’Hanlon, Christopher Robinson, Scott Sinnett

Investigations of multisensory integration have demonstrated that, under certain conditions, one modality is more likely to dominate the other. While the direction of this relationship typically favors the visual modality, the effect can be reversed to show auditory dominance under some conditions. The experiments presented here use an oddball detection paradigm with variable stimulus timings to test the hypothesis that a stimulus that is presented earlier will be processed first and therefore contribute to sensory dominance. Additionally, we compared two measures of sensory dominance (slowdown scores and error rate) to determine whether the type of measure used can affect which modality appears to dominate. When stimuli were presented asynchronously, analysis of slowdown scores and error rates yielded the same result; for both the 1- and 3-button versions of the task, participants were more likely to show auditory dominance when the auditory stimulus preceded the visual stimulus, whereas evidence for visual dominance was observed as the auditory stimulus was delayed. In contrast, for the simultaneous condition, slowdown scores indicated auditory dominance, whereas error rates indicated visual dominance. Overall, these results provide empirical support for the hypothesis that the modality that engages processing first is more likely to show dominance, and suggest that more explicit measures of sensory dominance may favor the visual modality.
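The "slowdown scores" referenced above are typically computed as the response-time cost that cross-modal presentation imposes relative to a unimodal baseline; a larger slowdown in one modality is read as dominance by the other. A minimal sketch of that arithmetic (the function name and all RT values are invented for illustration, not taken from the study):

```python
def slowdown_score(crossmodal_rts, unimodal_rts):
    """Mean cross-modal RT minus mean unimodal RT, in milliseconds.

    A larger slowdown for one modality suggests that the *other*
    modality is dominating processing on cross-modal trials.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(crossmodal_rts) - mean(unimodal_rts)

# Illustrative RTs (ms): visual responses slow more under cross-modal
# presentation than auditory responses do, the pattern conventionally
# interpreted as auditory dominance.
visual_slowdown = slowdown_score([620, 640, 610], [540, 560, 550])
auditory_slowdown = slowdown_score([580, 590, 570], [555, 565, 560])
print(visual_slowdown > auditory_slowdown)  # True: auditory-dominance pattern
```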

2020
Author(s): Christopher W. Robinson

The current study used cross-modal oddball tasks to examine cardiac and behavioral responses to changing auditory and visual information. When instructed to press the same button for auditory and visual oddballs, auditory dominance was found with cross-modal presentation slowing down visual response times more than auditory response times (Experiment 1). When instructed to make separate responses to auditory and visual oddballs, visual dominance was found with cross-modal presentation decreasing auditory discrimination. Participants also made more visual-based than auditory-based errors on cross-modal trials (Experiment 2). Experiment 3 increased task demands while requiring a single button press and found evidence of auditory dominance, suggesting that it is unlikely that increased task demands can account for the reversal in Experiment 2. Examination of cardiac responses that were time-locked with stimulus onset showed cross-modal facilitation effects, with auditory and visual discrimination occurring earlier in the course of processing in the cross-modal condition than in the unimodal conditions. The current findings showing that response demand manipulations reversed modality dominance and that time-locked cardiac responses show cross-modal facilitation, not interference, suggest that auditory and visual dominance effects may both be occurring later in the course of processing, not from disrupted encoding.


2017, Vol. 35 (1), pp. 77-93
Author(s): Marilyn G. Boltz

Although the visual modality often dominates the auditory one, one exception occurs in the presence of tempo discrepancies between the two perceptual systems: variations in auditory rate typically have a greater influence on perceived visual rate than vice versa. This phenomenon, termed “auditory driving,” is investigated here through certain techniques used in cinematic art. Experiments 1 and 2 relied on montages (slideshows) of still photos accompanied by musical selections in which the perceived rate of one modality was assessed through a recognition task while the rate of the other modality was systematically varied. A similar methodological strategy was used in Experiments 3 and 4 in which film excerpts of various moving objects were accompanied by the sounds they typically produce. In both cases, auditory dominance was observed, which has implications at both a theoretical and applied level.


2016, Vol. 60 (1), pp. 123-153
Author(s): Rory Turnbull

Predictability is known to affect many properties of speech production. In particular, it has been observed that highly predictable elements (words, syllables) are produced with less phonetic prominence (shorter duration, less peripheral vowels) than less predictable elements. This tendency has been proposed to be a general property of language. This paper examines whether predictability is correlated with fundamental frequency (F0) production, through analysis of experimental corpora of American English. Predictability was variously defined as discourse mention, utterance probability, and semantic focus. The results revealed consistent effects of utterance probability and semantic focus on F0, in the expected direction: less predictable words were produced with a higher F0 than more predictable words. However, no effect of discourse mention was observed. These results provide further empirical support for the generalization that phonetic prominence is inversely related to linguistic predictability. In addition, the divergent results for different predictability measures suggest that the parameterization of predictability within a particular experimental design can have a significant impact on the interpretation of results, and that it cannot be assumed that two measures necessarily reflect the same cognitive reality.


2020
Author(s): Paddy Ross, Beth Atkins, Laura Allison, Holly Simpson, Catherine Duffell, ...

Effective emotion recognition is imperative to successfully navigating social situations. Research suggests differing developmental trajectories for the recognition of bodily and vocal emotion, but emotions are usually studied in isolation and rarely considered as multimodal stimuli in the literature. When presented with basic multimodal sensory stimuli, the Colavita effect suggests that adults have a visual dominance, whereas more recent research finds that an auditory sensory dominance may be present in children under 8 years of age. However, it is not currently known whether this phenomenon holds for more complex multimodal social stimuli. Here we presented children and adults with multimodal social stimuli consisting of emotional bodies and voices, asking them to recognise the emotion in one modality while ignoring the other. We found that adults can perform this task with no detrimental effects to performance, regardless of whether the ignored emotion was congruent or not. However, children find it extremely challenging to recognise bodily emotion while trying to ignore incongruent vocal emotional information. In several instances they perform below chance level, indicating that the auditory modality actively informs their choice of bodily emotion. This is therefore the first evidence, to our knowledge, of an auditory dominance in children when presented with emotionally meaningful stimuli.


Author(s): Justine Mertz, Chiara Annucci, Valentina Aristodemo, Beatrice Giustolisi, Doriane Gras, ...

The study of articulatory complexity has proven to yield useful insights into the phonological mechanisms of spoken languages. In sign languages, this type of knowledge is scarcely documented. The current study compares a data-driven measure and a theory-driven measure of complexity for signs in French Sign Language (LSF). The former measure is based on error rates of handshape, location, orientation, movement and sign fluidity in a repetition task administered to non-signers; the latter measure is derived by applying a feature-geometry model of sign description on the same set of signs. A significant correlation is found between the two measures for the overall complexity. When looking at the impact of individual phonemic classes on complexity, a significant correlation is found for handshape and location but not for movement. We discuss how these results indicate that a fine-grained theoretical model of sign phonology/phonetics reflects the degree of complexity as resulting from the perceptual and articulatory properties of signs.


2014, Vol. 36 (1), pp. 43-57
Author(s): Sidney Shaw, Kirsten Murray

The therapeutic alliance is foundational to counseling practice and has amassed strong empirical support as being essential for successful counseling. Counselors generally rely on their own perspective when assessing the quality of the alliance, though the client's perspective has been found to be a better predictor of outcome. Formal methods for eliciting client feedback about the alliance and counseling outcomes have been strongly supported in the literature, yet such limitations as time constraints hinder counselor efforts to gather formal client feedback. Two ultra-brief measures of alliance and outcome, the Session Rating Scale and the Outcome Rating Scale, are feasible methods for counselors to secure client feedback. This article reviews the two measures and makes a case for using empirical means to understand adult clients' views of the therapeutic alliance.


1998, Vol. 28 (5), pp. 1091-1100
Author(s): P. Maruff, J. Danckert, C. Pantelis, J. Currie

Background. Abnormal performance on the antisaccade task suggests that patients with schizophrenia have difficulty with the inhibition of reflexive attentional shifts. The aim of the study was to investigate whether deficits in the inhibition of reflexive attentional shifts were specific to the oculomotor modality or whether they could also occur when attentional shifts were made without eye movements (e.g. covert attentional shifts).

Methods. Fifteen medicated patients with chronic schizophrenia and 15 matched controls performed the antisaccade task and the covert orientating task (COVAT), in which the probability of targets appearing at the location of a peripheral cue was varied so that the voluntary and reflexive orientating systems had the same goal (80% probability of target at cued location) or opposite goals (20% probability of target at cued location). A condition where only reflexive orientating was initiated was also included (50% probability of target at cued location). For each of these conditions the stimulus onset asynchrony (SOA) varied between 150 and 350 ms.

Results. Patients with schizophrenia showed normal latency and accuracy for visually guided saccades but increased error rates and latency on the antisaccade task. For the COVAT, patients with schizophrenia were unable to use voluntary orientating strategies to inhibit reflexive shifts of covert attention. On conditions where only reflexive orientating was required, or when the goals of the reflexive and voluntary orientating systems were the same, patients with schizophrenia showed normal performance.

Conclusions. These results suggest that the reflexive orientating mode is normal in patients with chronic schizophrenia. However, these patients have a reduced ability to utilize the voluntary orientating mode to control or inhibit reflexive orientating. This impairment of voluntary control is evident for both overt and covert attentional shifts.


2013, Vol. 25 (9), pp. 1553-1562
Author(s): Merav Sabri, Colin Humphries, Matthew Verber, Jain Mangalathu, Anjali Desai, ...

In the visual modality, perceptual demand on a goal-directed task has been shown to modulate the extent to which irrelevant information can be disregarded at a sensory-perceptual stage of processing. In the auditory modality, the effect of perceptual demand on neural representations of task-irrelevant sounds is unclear. We compared simultaneous ERPs and fMRI responses associated with task-irrelevant sounds across parametrically modulated perceptual task demands in a dichotic-listening paradigm. Participants performed a signal detection task in one ear (Attend ear) while ignoring task-irrelevant syllable sounds in the other ear (Ignore ear). Results revealed modulation of syllable processing by auditory perceptual demand in an ROI in middle left superior temporal gyrus and in negative ERP activity 130–230 msec post stimulus onset. Increasing the perceptual demand in the Attend ear was associated with a reduced neural response in both fMRI and ERP to task-irrelevant sounds. These findings are in support of a selection model whereby ongoing perceptual demands modulate task-irrelevant sound processing in auditory cortex.


2005, Vol. 17 (7), pp. 1098-1114
Author(s): Durk Talsma, Marty G. Woldorff

We used event-related potentials (ERPs) to evaluate the role of attention in the integration of visual and auditory features of multisensory objects. This was done by contrasting the ERPs to multisensory stimuli (AV) to the sum of the ERPs to the corresponding auditory-only (A) and visual-only (V) stimuli [i.e., AV vs. (A + V)]. A, V, and AV stimuli were presented in random order to the left and right hemispaces. Subjects attended to a designated side to detect infrequent target stimuli in either modality there. The focus of this report is on the ERPs to the standard (i.e., nontarget) stimuli. We used rapid variable stimulus onset asynchronies (350-650 msec) to mitigate anticipatory activity and included “no-stim” trials to estimate and remove ERP overlap from residual anticipatory processes and from adjacent stimuli in the sequence. Spatial attention effects on the processing of the unisensory stimuli consisted of a modulation of visual P1 and N1 components (at 90-130 msec and 160-200 msec, respectively) and of the auditory N1 and processing negativity (100-200 msec). Attended versus unattended multisensory ERPs elicited a combination of these effects. Multisensory integration effects consisted of an initial frontal positivity around 100 msec that was larger for attended stimuli. This was followed by three phases of centro-medially distributed effects of integration and/or attention beginning at around 160 msec, and peaking at 190 (scalp positivity), 250 (negativity), and 300-500 msec (positivity) after stimulus onset. These integration effects were larger in amplitude for attended than for unattended stimuli, providing neural evidence that attention can modulate multisensory-integration processes at multiple stages.
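The additive-model contrast described above, AV vs. (A + V), amounts to a pointwise difference wave: wherever the multisensory ERP departs from the sum of the unisensory ERPs, a multisensory interaction is inferred. A minimal sketch of that arithmetic, with invented amplitude values (the function name and numbers are illustrative, not from the study):

```python
def integration_residual(av, a, v):
    """Pointwise AV - (A + V) difference wave.

    Nonzero values indicate super- or sub-additive multisensory
    interactions at the corresponding timepoints.
    """
    return [av_t - (a_t + v_t) for av_t, a_t, v_t in zip(av, a, v)]

# Illustrative amplitudes (microvolts) at three successive samples.
residual = integration_residual(av=[2.0, 3.5, 1.0],
                                a=[1.0, 2.0, 0.5],
                                v=[0.5, 1.0, 0.5])
print(residual)  # [0.5, 0.5, 0.0]
```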

