Tones disrupt visual fixations and responding on a visual-spatial task

2020 ◽  
Author(s):  
Christopher W Robinson

The current study used an eye tracker to examine how auditory input affects the latency of visual saccades, fixations, and response times in variations of a Serial Response Time (SRT) task. In Experiment 1, participants viewed a repeating sequence of visual stimuli that appeared in different locations on a computer monitor, and they had to quickly determine whether each visual stimulus was red or blue. The visual sequence was either presented in silence or paired with tones. Compared to the silent condition, the tones slowed down red/blue discriminations and delayed the latency of first fixations to the visual stimuli. To ensure the interference was not occurring during the decision/response phase and to better understand the nature of auditory interference, we removed the red/blue discrimination task in Experiment 2, manipulated cognitive load, and developed a gaze-contingent procedure in which the timing of each visual stimulus depended on a saccade crossing a gaze-contingent boundary surrounding the target. Participants were slower at initiating their saccades/fixations and made more fixations under high load. Auditory interference was again found: participants were more likely to fixate on the visual images, and fixated on them faster, when the visual sequences were presented in silence. These findings suggest that auditory interference effects occur early in the course of processing and provide insights into potential mechanisms underlying modality dominance effects.
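As a rough illustration of the gaze-contingent logic in Experiment 2, a trial could advance the visual sequence as soon as a gaze sample crosses a boundary around the target. This is a minimal sketch in Python; the boundary radius, sampling rate, and simulated gaze stream are illustrative assumptions, not parameters reported in the paper.

```python
import math
import random

# Assumed display/eye-tracker parameters (not taken from the paper).
BOUNDARY_RADIUS_PX = 100
SAMPLE_RATE_HZ = 500

def crossed_boundary(gaze_xy, target_xy, radius=BOUNDARY_RADIUS_PX):
    """True once a gaze sample falls inside the circular gaze-contingent
    boundary surrounding the visual target."""
    return math.dist(gaze_xy, target_xy) <= radius

def run_trial(gaze_samples, target_xy):
    """Advance the sequence when a saccade crosses the boundary.

    gaze_samples: iterable of (x, y) gaze coordinates in screen pixels.
    Returns latency (ms) from stimulus onset to the boundary crossing,
    or None if the boundary was never crossed.
    """
    for i, gaze_xy in enumerate(gaze_samples):
        if crossed_boundary(gaze_xy, target_xy):
            return 1000.0 * i / SAMPLE_RATE_HZ  # sample index -> ms
    return None

# Simulated gaze drifting toward a target at (800, 400).
target = (800, 400)
gaze = [(400 + 4 * t + random.gauss(0, 2), 300 + t + random.gauss(0, 2))
        for t in range(400)]
print("Boundary-crossing latency (ms):", run_trial(gaze, target))
```

The returned latency corresponds to the saccade/fixation-initiation measures that the study compares across the silent and tone conditions.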

2020 ◽  
Vol 46 (11) ◽  
pp. 1301-1312
Author(s):  
Dylan Laughery ◽  
Noah Pesina ◽  
Christopher W. Robinson

2020 ◽  
Author(s):  
Christopher W Robinson

The current study examined how simple tones affect speeded visual responses in a visual-spatial sequence learning task. Across the three reported experiments, participants were presented with a visual target that appeared in different locations on a touchscreen monitor, and they were instructed to touch the visual targets as quickly as possible. Response times typically sped up across training, and participants were slower to respond to the visual stimuli when the sequences were paired with tones. Moreover, these interference effects were more pronounced early in training, and explicit instructions directing attention to the visual modality did little to eliminate auditory interference, suggesting that the interference may stem from bottom-up factors and does not appear to be under attentional control. These findings have implications for tasks that require processing simultaneously presented auditory and visual information and provide support for a proposed mechanism underlying auditory dominance on a task that is typically better suited to the visual modality.
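A minimal sketch of how the two headline effects could be summarised from block-wise response times, assuming hypothetical per-block means (none of the numbers come from the paper):

```python
# Hypothetical per-block mean response times (ms), invented for illustration.
rt_silent = {1: 612, 2: 575, 3: 548, 4: 531, 5: 520}   # block -> mean RT
rt_tone   = {1: 664, 2: 610, 3: 566, 4: 541, 5: 527}

def training_speedup(block_rts):
    """RT improvement (ms) from the first to the last training block."""
    blocks = sorted(block_rts)
    return block_rts[blocks[0]] - block_rts[blocks[-1]]

def tone_cost_by_block(silent, tone):
    """Auditory interference per block: tone RT minus silent RT (ms)."""
    return {block: tone[block] - silent[block] for block in sorted(silent)}

print("Speedup, silent condition:", training_speedup(rt_silent), "ms")
print("Speedup, tone condition:  ", training_speedup(rt_tone), "ms")
# The per-block cost shrinks across training, mirroring the finding that
# interference was most pronounced early on.
print("Tone cost by block:", tone_cost_by_block(rt_silent, rt_tone))
```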


2014 ◽  
Vol 27 (2) ◽  
pp. 139-160 ◽  
Author(s):  
Pia Ley ◽  
Brigitte Röder

The present study investigated whether effects of movement preparation and visual spatial attention on visual processing can be dissociated. Movement preparation and visual spatial attention were manipulated orthogonally in a dual-task design. Ten participants covertly prepared unimanual lateral arm movements to one hemifield, while attending to visual stimuli presented either in the same hemifield or in the hemifield opposite to the movement goal. Event-related potentials to task-irrelevant visual stimuli were analysed. Both joint and distinct modulations of visual ERPs by visual spatial attention and movement preparation were observed: the latencies of all analysed peaks (P1, N1, P2) were shorter for matching (in terms of direction of attention and movement) versus non-matching sensory–motor conditions. The P1 amplitude also depended on sensory–motor matching: the P1 was larger for non-matching than for matching conditions. By contrast, the N1 amplitude showed additive effects of sensory attention and movement preparation: the N1 was largest when both attention and movement preparation were directed towards the visual stimulus and smallest when both were directed away from it. P2 amplitudes, in turn, were modulated only by sensory attention. The present data show that movement preparation and sensory spatial attention are tightly linked and interrelated, showing joint modulations throughout stimulus processing. At the same time, however, our data argue against the idea that the two systems are identical. Instead, sensory spatial attention and movement preparation seem to be processed at least partially independently, though still exerting a combined influence on visual stimulus processing.
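As an illustration of the peak measures analysed here, the sketch below extracts P1, N1, and P2 latencies and amplitudes from a simulated averaged ERP; the search windows and waveform are assumptions made for the example, not the authors' parameters.

```python
import numpy as np

def peak_in_window(erp, times_ms, window_ms, polarity):
    """Return (latency_ms, amplitude) of the most positive (polarity=+1)
    or most negative (polarity=-1) deflection within a time window."""
    mask = (times_ms >= window_ms[0]) & (times_ms <= window_ms[1])
    segment = erp[mask]
    idx = np.argmax(polarity * segment)
    return times_ms[mask][idx], segment[idx]

# Simulated averaged ERP (microvolts) sampled at 500 Hz, -100 to 500 ms.
times = np.arange(-100, 500, 2.0)
erp = (3.0 * np.exp(-((times - 110) ** 2) / 300)      # P1-like positivity
       - 4.0 * np.exp(-((times - 180) ** 2) / 400)    # N1-like negativity
       + 2.5 * np.exp(-((times - 250) ** 2) / 800))   # P2-like positivity

# Assumed search windows; the abstract does not report the exact windows.
components = {"P1": ((80, 140), +1), "N1": ((140, 220), -1), "P2": ((200, 300), +1)}
for name, (window, polarity) in components.items():
    lat, amp = peak_in_window(erp, times, window, polarity)
    print(f"{name}: latency {lat:.0f} ms, amplitude {amp:.2f} uV")
```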


2011 ◽  
Author(s):  
Logan Kaleta ◽  
David E. Ritchie ◽  
Scott Leydig ◽  
Susana Quintana Marikle ◽  
Stephen A. Russo

1992 ◽  
Vol 67 (6) ◽  
pp. 1447-1463 ◽  
Author(s):  
K. Nakamura ◽  
A. Mikami ◽  
K. Kubota

1. The activity of single neurons was recorded extracellularly from the monkey amygdala while monkeys performed a visual discrimination task. The monkeys were trained to remember a visual stimulus during a delay period (0.5-3.0 s), to discriminate a new visual stimulus from the stimulus, and to release a lever when the new stimulus was presented. Colored photographs (human faces, monkeys, foods, and nonfood objects) or computer-generated two-dimensional shapes (a yellow triangle, a red circle, etc.) were used as visual stimuli. 2. The activity of 160 task-related neurons was studied. Of these, 144 (90%) responded to visual stimuli, 13 (8%) showed firing during the delay period, and 9 (6%) responded to the reward. 3. Task-related neurons were categorized according to the way in which various stimuli activated the neurons. First, to evaluate the proportion of all tested stimuli that elicited changes in activity of a neuron, selectivity index 1 (SI1) was employed. Second, to evaluate the ability of a neuron to discriminate a stimulus from another stimulus, SI2 was employed. On the basis of the calculated values of SI1 and SI2, neurons were classified as selective and nonselective. Most visual neurons were categorized as selective (131/144), and a few were characterized as nonselective (13/144). Neurons active during the delay period were also categorized as selective visual and delay neurons (6/13) and as nonselective delay neurons (7/13). 4. Responses of selective visual neurons had various temporal and stimulus-selective properties. Latencies ranged widely from 60 to 300 ms. Response durations also ranged widely from 20 to 870 ms. When the natures of the various effective stimuli were studied for each neuron, one-fourth of the responses of these neurons were considered to reflect some categorical aspect of the stimuli, such as human, monkey, food, or nonfood object. Furthermore, the responses of some neurons apparently reflected a certain behavioral significance of the stimuli that was separate from the task, such as the face of a particular person, smiling human faces, etc. 5. Nonselective visual neurons responded to a visual stimulus, regardless of its nature. They also responded in the absence of a visual stimulus when the monkey anticipated the appearance of the next stimulus. 6. Selective visual and delay neurons fired in response to particular stimuli and throughout the subsequent delay periods. Nonselective delay neurons increased their discharge rates gradually during the delay period, and the discharge rate decreased after the next stimulus was presented. 7. Task-related neurons were identified in six histologically distinct nuclei of the amygdala.(ABSTRACT TRUNCATED AT 400 WORDS)
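The abstract describes SI1 as the proportion of tested stimuli that changed a neuron's activity and SI2 as the neuron's ability to discriminate between stimuli, but it does not give the formulas. The sketch below is one plausible operationalisation (a z-score criterion for SI1 and a depth-of-selectivity measure for SI2), offered purely as an assumption-labelled illustration rather than the authors' definitions.

```python
import numpy as np

def si1(stimulus_rates, baseline_rates, z_threshold=2.0):
    """Proportion of tested stimuli that changed the neuron's firing rate,
    operationalised here as responses deviating from the baseline mean by
    more than z_threshold baseline standard deviations (an illustrative
    criterion, not the paper's exact definition)."""
    rates = np.asarray(stimulus_rates, dtype=float)
    base_mean, base_sd = np.mean(baseline_rates), np.std(baseline_rates)
    return float(np.mean(np.abs(rates - base_mean) > z_threshold * base_sd))

def si2(stimulus_rates):
    """Discrimination between stimuli, operationalised as depth of
    selectivity: 1 when a single stimulus drives the cell, 0 when all
    stimuli drive it equally (again an assumption, not the original SI2)."""
    rates = np.asarray(stimulus_rates, dtype=float)
    n = len(rates)
    return float((n - rates.sum() / rates.max()) / (n - 1))

baseline = [3.8, 6.2, 5.0, 4.1, 5.9]                      # spikes/s during fixation
responses = [22.0, 6.0, 5.5, 5.0, 4.7, 18.0, 5.2, 5.1]    # one mean rate per test stimulus
print("SI1 =", si1(responses, baseline))                  # fraction of effective stimuli
print("SI2 =", round(si2(responses), 2))                  # closer to 1 = more selective
```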


1995 ◽  
Vol 12 (4) ◽  
pp. 723-741 ◽  
Author(s):  
W. Guido ◽  
S.-M. Lu ◽  
J.W. Vaughan ◽  
Dwayne W. Godwin ◽  
S. Murray Sherman

Relay cells of the lateral geniculate nucleus respond to visual stimuli in one of two modes: burst and tonic. The burst mode depends on the activation of a voltage-dependent, Ca2+ conductance underlying the low threshold spike. This conductance is inactivated at depolarized membrane potentials, but when activated from hyperpolarized levels, it leads to a large, triangular, nearly all-or-none depolarization. Typically, riding its crest is a high-frequency barrage of action potentials. Low threshold spikes thus provide a nonlinear amplification allowing hyperpolarized relay neurons to respond to depolarizing inputs, including retinal EPSPs. In contrast, the tonic mode is characterized by a steady stream of unitary action potentials that more linearly reflects the visual stimulus. In this study, we tested possible differences in detection between response modes of 103 geniculate neurons by constructing receiver operating characteristic (ROC) curves for responses to visual stimuli (drifting sine-wave gratings and flashing spots). Detectability was determined from the ROC curves by computing the area under each curve, known as the ROC area. Most cells switched between modes during recording, evidently due to small shifts in membrane potential that affected the activation state of the low threshold spike. We found that the more often a cell responded in burst mode, the larger its ROC area. This was true for responses to optimal and nonoptimal visual stimuli, the latter including nonoptimal spatial frequencies and low stimulus contrasts. The larger ROC areas associated with burst mode were due to reduced spontaneous activity and a roughly equivalent level of visually evoked response when compared to tonic mode. We performed a within-cell analysis on a subset of 22 cells that switched modes during recording. Every cell, whether tested with a low-contrast or high-contrast visual stimulus, exhibited a larger ROC area during its burst response mode than during its tonic mode. We conclude that burst responses better support signal detection than do tonic responses. Thus, burst responses, while less linear and perhaps less useful in providing a detailed analysis of visual stimuli, improve target detection. The tonic mode, with its more linear response, seems better suited to signal analysis than to signal detection.
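The ROC-area measure used here is the standard one: the probability that a randomly chosen stimulus-driven response exceeds a randomly chosen sample of spontaneous activity. A self-contained sketch with invented spike counts (not data from the paper) shows why the low spontaneous activity of burst mode pushes the ROC area toward 1.

```python
import numpy as np

def roc_area(driven_counts, spontaneous_counts):
    """Area under the ROC curve for discriminating visually driven responses
    from spontaneous activity: the probability that a randomly chosen driven
    trial has a higher spike count than a spontaneous one (ties count half)."""
    driven = np.asarray(driven_counts)[:, None]
    spont = np.asarray(spontaneous_counts)[None, :]
    greater = (driven > spont).mean()
    ties = (driven == spont).mean()
    return greater + 0.5 * ties

# Hypothetical spike counts per trial (invented, not data from the paper).
# Burst mode: very low spontaneous activity, similar evoked response.
burst_driven, burst_spont = [6, 8, 5, 9, 7, 6], [0, 1, 0, 0, 1, 0]
# Tonic mode: comparable evoked response but higher spontaneous activity.
tonic_driven, tonic_spont = [7, 9, 6, 8, 7, 8], [5, 7, 4, 8, 6, 7]

print("Burst-mode ROC area:", roc_area(burst_driven, burst_spont))            # 1.0
print("Tonic-mode ROC area:", round(roc_area(tonic_driven, tonic_spont), 2))  # ~0.76
```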


1996 ◽  
Vol 76 (3) ◽  
pp. 1439-1456 ◽  
Author(s):  
P. Mazzoni ◽  
R. M. Bracewell ◽  
S. Barash ◽  
R. A. Andersen

1. The lateral intraparietal area (area LIP) of the monkey's posterior parietal cortex (PPC) contains neurons that are active during saccadic eye movements. These neurons' activity includes visual and saccade-related components. These responses are spatially tuned and the location of a neuron's visual receptive field (RF) relative to the fovea generally overlaps its preferred saccade amplitude and direction (i.e., its motor field, MF). When a delay is imposed between the presentation of a visual stimulus and a saccade made to its location (memory saccade task), many LIP neurons maintain elevated activity during the delay (memory activity, M), which appears to encode the metrics of the next intended saccadic eye movements. Recent studies have alternatively suggested that LIP neurons encode the locations of visual stimuli regardless of where the animal intends to look. We examined whether the M activity of LIP neurons specifically encodes movement intention or the locations of recent visual stimuli, or a combination of both. In the accompanying study, we investigated whether the intended-movement activity reflects changes in motor plan. 2. We trained monkeys (Macaca mulatta) to memorize the locations of two visual stimuli and plan a sequence of two saccades, one to each remembered target, as we recorded the activity of single LIP neurons. Two targets were flashed briefly while the monkey maintained fixation; after a delay the fixation point was extinguished, and the monkey made two saccades in sequence to each target's remembered location, in the order in which the targets were presented. This "delayed double saccade" (DDS) paradigm allowed us to dissociate the location of visual stimulation from the direction of the planned saccade and thus distinguish neuronal activity related to the target's location from activity related to the saccade plan. By imposing a delay, we eliminated the confounding effect of any phasic responses coincident with the appearance of the stimulus and with the saccade. 3. We arranged the two visual stimuli so that in one set of conditions at least the first one was in the neuron's visual RF, and thus the first saccade was in the neuron's motor field (MF). M activity should be high in these conditions according to both the sensory memory and motor plan hypotheses. In another set of conditions, the second stimulus appeared in the RF but the first one was presented outside the RF, instructing the monkey to plan the first saccade away from the neuron's MF. If the M activity encodes the motor plan, it should be low in these conditions, reflecting the plan for the first saccade (away from the MF). If it is a sensory trace of the stimulus' location, it should be high, reflecting stimulation of the RF by the second target. 4. We tested 49 LIP neurons (in 3 hemispheres of 2 monkeys) with M activity on the DDS task. Of these, 38 (77%) had M activity related to the next intended saccade. They were active in the delay period, as expected, if the first saccade was in their preferred direction. They were less active or silent if the next saccade was not in their preferred direction, even when the second stimulus appeared in their RF. 5. The M activity of 8 (16%) of the remaining neurons specifically encoded the location of the most recent visual stimulus. Their firing rate during the delay reflected stimulation of the RF independently of the saccade being planned. 
The remaining 3 neurons had M activity that did not consistently encode either the next saccade or the stimulus' location. 6. We also recorded the activity of a subset of neurons (n = 38) in a condition in which no stimulus appeared in a neuron's RF, but the second saccade was in the neuron's MF. In this case the majority of neurons tested (23/38, 60%) became active in the period between the first and second saccade, even if neither stimulus had appeared in their RF. Moreover, this activity appeared only after the first saccade had started in all but two of
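A toy rendering of the classification contrast in points 3-5, comparing delay activity when the first saccade is planned into the motor field against delay activity when the RF is stimulated but the first saccade is planned away; the criterion and firing rates below are illustrative assumptions, not the authors' statistical procedure.

```python
def classify_delay_activity(rate_saccade_into_mf, rate_stim_in_rf_saccade_away,
                            baseline_rate, criterion=2.0):
    """Toy version of the contrast in points 3-5 (illustrative criterion only).

    rate_saccade_into_mf: delay rate when the first planned saccade is into
        the neuron's motor field (a stimulus also fell in its RF).
    rate_stim_in_rf_saccade_away: delay rate when the second stimulus fell in
        the RF but the first saccade is planned away from the MF.
    """
    elevated_plan = rate_saccade_into_mf > criterion * baseline_rate
    elevated_stim_only = rate_stim_in_rf_saccade_away > criterion * baseline_rate
    if elevated_plan and not elevated_stim_only:
        return "motor plan (next intended saccade)"
    if elevated_plan and elevated_stim_only:
        return "sensory memory (location of recent stimulus)"
    return "unclassified"

# Hypothetical delay-period firing rates (spikes/s):
print(classify_delay_activity(35.0, 6.0, baseline_rate=5.0))   # -> motor plan
print(classify_delay_activity(30.0, 28.0, baseline_rate=5.0))  # -> sensory memory
```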


Politics ◽  
2017 ◽  
Vol 38 (2) ◽  
pp. 232-249 ◽  
Author(s):  
David Roberts

Globalization and digitization have combined to create a ‘pictorial turn’ that has transformed communication landscapes. Routine exposure to visual stimuli like images has acculturated our students’ learning processes long before their arrival at university. But when they reach us, we expose them to text-centric teaching out of kilter with the worlds from which they come. More importantly, emerging scholarship argues that such textual hegemony is out of kilter with how they learn. This article describes a 3-year experiment to assess the veracity of such claims. It found that student academic engagement was greater when apposite images were applied. In addition, the experiment revealed that introducing imagery triggered active learning behaviours. The article concludes with a discussion of the implications of these findings for politics and international relations teaching.


1989 ◽  
Vol 13 (2) ◽  
pp. 191-203 ◽  
Author(s):  
Shelagh A. Gallagher

A regression analysis was conducted to determine the relative importance of a series of variables in the prediction of SAT-Mathematics (SAT-M) scores of gifted males and females. Among the variables considered were visual-spatial ability, cognitive reasoning ability, learning style, and SAT-Verbal (SAT-V) scores. Scores on the visual-spatial task were analyzed for speed of response as well as ability. For both sexes, reasoning skills were the predominant factor in the prediction formulas. Differences in the two formulas seemed to reflect males' greater facility with process skills necessary for the SAT-M. Implications are discussed regarding how to interpret the differential performance of gifted males and females on the SAT-M.
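As an illustration of the kind of multiple regression described, the sketch below fits SAT-M scores on the four predictors with ordinary least squares; the simulated data simply echo the reported pattern (reasoning as the dominant predictor) and are not the study's data.

```python
import numpy as np

# Hypothetical data for illustration; no values come from the study.
# Predictor columns: visual-spatial ability, cognitive reasoning,
# learning style, SAT-V (all standardized here for simplicity).
rng = np.random.default_rng(0)
n = 120
predictors = rng.normal(size=(n, 4))
# Simulate SAT-M driven mostly by reasoning (column 1), echoing the
# reported pattern that reasoning dominates the prediction.
sat_m = (500 + 20 * predictors[:, 0] + 60 * predictors[:, 1]
         + 5 * predictors[:, 2] + 15 * predictors[:, 3]
         + rng.normal(scale=40, size=n))

# Ordinary least squares via numpy (intercept added manually).
X = np.column_stack([np.ones(n), predictors])
coef, *_ = np.linalg.lstsq(X, sat_m, rcond=None)
for name, b in zip(["intercept", "visual-spatial", "reasoning",
                    "learning style", "SAT-V"], coef):
    print(f"{name:>14}: {b:7.1f}")
```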


2018 ◽  
Vol 7 ◽  
pp. 172-177
Author(s):  
Łukasz Tyburcy ◽  
Małgorzata Plechawska-Wójcik

The paper describes the results of a comparison of reaction times to visual and auditory stimuli using EEG evoked potentials. Two experiments were conducted: the first explored reaction times to visual stimuli and the second to auditory stimuli. Analysis of the data showed that visual stimuli evoke faster reactions than auditory stimuli.
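A minimal sketch of the reaction-time comparison, using invented data and a Welch t statistic rather than the authors' EEG-based analysis:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical reaction times in ms (invented for illustration, not the study's data).
rt_visual = [342, 355, 338, 361, 349, 352, 340, 347]
rt_auditory = [371, 388, 365, 392, 379, 383, 376, 370]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    var_a, var_b = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(var_a + var_b)

print(f"Mean visual RT:   {mean(rt_visual):.1f} ms")
print(f"Mean auditory RT: {mean(rt_auditory):.1f} ms")
print(f"Welch t = {welch_t(rt_visual, rt_auditory):.2f}  (negative: visual faster)")
```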

