Development of Infant Reaching in the Dark to Luminous Objects and ‘Invisible Sounds’

Perception, 1989, Vol 18 (1), pp. 69-82
Author(s): Dale M Stack, Darwin W Muir, Frances Sherriff, Jeanne Roman

Two studies were conducted to investigate the existence of an unusual U-shaped developmental function described by Wishart et al (1978) for human infants reaching towards invisible sounds. In study 1, 2- to 7-month-olds were presented with four conditions: (i) an invisible auditory stimulus alone, (ii) a glowing visual stimulus alone, (iii) auditory and visual stimuli on the same side (ie combined), and (iv) auditory and visual stimuli on opposite sides (ie in conflict). Study 2 was designed to examine the effects of practice and of possible associations formed when using the combined/conflict paradigm. Infants of 5 and 7 months of age were given five trials with the auditory stimulus, with or without prior visual experience, and five trials with the visual stimulus, with the position of the stimulus varied on each trial. Stimuli were presented individually at the midline and at ±30° and ±60° from the midline. In both studies testing was conducted in complete darkness. Results indicated that the auditory-alone condition was slower to elicit a reach from the infants than the visual-alone condition, and reaches were least frequent to the auditory target. No U-shaped function was obtained; reaching for auditory targets emerged later in development than reaching for visual targets and, even at 7 months of age, occurred less often and was achieved by fewer infants. In both studies the quality of the reach was significantly poorer to auditory than to visual targets, although some accurate reaches did occur. This research adds to our understanding of the development of auditory-manual coordination in sighted infants and is relevant to theories of auditory localization, visually guided reaching, and programming for the blind.

1994, Vol 71 (3), pp. 1250-1253
Author(s): G. S. Russo, C. J. Bruce

1. We studied neuronal activity in the monkey's frontal eye field (FEF) in conjunction with saccades directed to auditory targets. 2. All FEF neurons with movement activity preceding saccades to visual targets were also active preceding saccades to auditory targets, even when such saccades were made in the dark. Movement cells generally had comparable bursts for aurally and visually guided saccades; visuomovement cells often had weaker bursts in conjunction with aurally guided saccades. 3. When these cells were tested from different initial fixation directions, movement fields associated with aurally guided saccades, like fields mapped with visual targets, were a function of saccade dimensions, not of the speaker's spatial location. Thus, even though sound location cues are chiefly craniotopic, the crucial factor for FEF discharge before aurally guided saccades was the location of the auditory target relative to the current direction of gaze. 4. Intracortical microstimulation at the sites of these cells evoked constant-vector saccades, not goal-directed saccades. The direction and size of electrically elicited saccades generally matched the cell's movement field for aurally guided saccades. 5. Thus FEF activity appears to have a role in aurally guided as well as visually guided saccades. Moreover, visual and auditory target representations, although initially obtained in different coordinate systems, appear to converge to a common movement-vector representation at the FEF stage of saccadic processing that is appropriate for transmission to saccade-related burst neurons in the superior colliculus and pons.
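The coordinate logic in point 3 reduces to a one-line computation. Below is a minimal sketch (illustrative only; the function name and numbers are not from the study) of how a head-centered (craniotopic) auditory target maps onto a gaze-relative saccade vector:

```python
# Sketch: the same head-centered sound location requires a different saccade
# vector depending on where the eyes currently point, which is what FEF
# movement fields encode.

def saccade_vector(target_head_deg: float, gaze_head_deg: float) -> float:
    """Horizontal saccade vector (deg) needed to acquire a head-centered
    auditory target from the current gaze direction."""
    return target_head_deg - gaze_head_deg

# A speaker 20 deg right of the head midline, approached from three
# different initial fixation directions:
for gaze_deg in (-10.0, 0.0, 10.0):
    print(f"gaze {gaze_deg:+.0f} deg -> saccade {saccade_vector(20.0, gaze_deg):+.0f} deg")
```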


2018, Vol 7, pp. 172-177
Author(s): Łukasz Tyburcy, Małgorzata Plechawska-Wójcik

The paper describes the results of a comparison of reaction times to visual and auditory stimuli using EEG evoked potentials. Two experiments were conducted: the first explored reaction times to a visual stimulus and the second to an auditory stimulus. Analysis of the data showed that visual stimuli evoke faster reactions than auditory stimuli.
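As a minimal sketch of the kind of comparison reported (the numbers below are hypothetical, not the study's data), mean reaction times for the two modalities can be compared with a two-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt_visual = rng.normal(loc=190, scale=25, size=40)    # hypothetical visual RTs (ms)
rt_auditory = rng.normal(loc=230, scale=30, size=40)  # hypothetical auditory RTs (ms)

t, p = stats.ttest_ind(rt_visual, rt_auditory)
print(f"visual mean {rt_visual.mean():.0f} ms, auditory mean {rt_auditory.mean():.0f} ms")
print(f"t = {t:.2f}, p = {p:.2g}")
```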


Perception, doi:10.1068/p5035, 2003, Vol 32 (11), pp. 1393-1402
Author(s): Robert P Carlyon, Christopher J Plack, Deborah A Fantini, Rhodri Cusack

Carlyon et al (2001 Journal of Experimental Psychology: Human Perception and Performance 27 115-127) have reported that the buildup of auditory streaming is reduced when attention is diverted to a competing auditory stimulus. Here, we demonstrate that a reduction in streaming can also be obtained by attention to a visual task or by the requirement to count backwards in threes. In all conditions participants heard a 13 s sequence of tones, and, during the first 10 s, saw a sequence of visual stimuli containing three, four, or five targets. The tone sequence consisted of twenty repeating triplets in an ABA–ABA … order, where A and B represent tones of two different frequencies. In each sequence, three, four, or five tones were amplitude modulated. During the first 10 s of the sequence, participants either counted the number of visual targets, counted the number of (modulated) auditory targets, or counted backwards in threes from a specified number. They then made an auditory-streaming judgment about the last 3 s of the tone sequence: whether one or two streams were heard. The results showed more streaming when participants counted the auditory targets (and hence were attending to the tones throughout) than in either the ‘visual’ or ‘counting-backwards’ conditions.
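The stimulus structure is easy to reproduce. Here is a minimal sketch of synthesizing such an ABA– sequence (sample rate, tone duration, and the A/B frequencies are assumptions; the abstract specifies only twenty triplets over 13 s):

```python
import numpy as np

fs = 44100                  # sample rate (Hz), assumed
f_a, f_b = 500.0, 630.0     # A and B tone frequencies (Hz), assumed
slot = 0.1625               # four slots per ABA- triplet -> 0.65 s per triplet
tone_dur = 0.100            # tone length within each slot (s), assumed

def tone(freq):
    t = np.arange(int(tone_dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t)

def filled_slot(freq):
    s = np.zeros(int(slot * fs))
    s[:int(tone_dur * fs)] = tone(freq)
    return s

silence = np.zeros(int(slot * fs))
triplet = np.concatenate([filled_slot(f_a), filled_slot(f_b), filled_slot(f_a), silence])
sequence = np.tile(triplet, 20)   # twenty ABA- triplets, 13 s in total
```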


2007, Vol 98 (4), pp. 2399-2413
Author(s): Vivian M. Ciaramitaro, Giedrius T. Buračas, Geoffrey M. Boynton

Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended to visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when subjects attended to another visual stimulus than when they attended to an auditory stimulus. The opposite was true in the later visual area MT+, where responses to ignored visual stimuli were weaker when subjects attended to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when subjects attended to a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), as well as the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or to the same region of space.
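The abstract does not spell out the parameterization, but a minimal sketch of one plausible form (all gain values and the multiplicative structure are assumptions, not the authors' fitted model) is a baseline response scaled by separate gains for within-modality attention and for sharing the attended side of space:

```python
def ignored_response(baseline, attended_same_modality, attended_same_side,
                     g_modality=0.8, g_space=1.2):
    """Hypothetical fMRI response to an ignored stimulus.

    g_modality < 1 models suppression when attention rests on another
    stimulus in the same modality (as in V1 here); g_space > 1 models the
    boost when the attended stimulus shares the ignored stimulus's side of
    space. Fitted gains would differ across areas (e.g. the sign of the
    modality effect reverses in MT+).
    """
    r = baseline
    if attended_same_modality:
        r *= g_modality
    if attended_same_side:
        r *= g_space
    return r
```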


2012, Vol 25 (0), pp. 24
Author(s): Roberto Cecere, Benjamin De Haas, Harriett Cullen, Jon Driver, Vincenzo Romei

There is converging evidence that the duration of an auditory event can affect the perceived duration of a co-occurring visual event. When a brief visual stimulus is accompanied by a longer auditory stimulus, the perceived visual duration stretches. If this reflects genuinely sustained perception of the visual stimulus, it should result in enhanced perception of non-temporal visual stimulus qualities. To test this hypothesis, in a temporal two-alternative forced choice task, 28 participants were asked to indicate whether a short (∼24 ms), peri-threshold visual stimulus was presented in the first or in the second of two consecutive displays. Each display was accompanied by a sound of equal or longer duration (36, 48, 60, 72, 84, 96, or 190 ms) than the visual stimulus. As a control condition, visual stimuli of different durations (matching the auditory stimulus durations) were presented alone. We predicted that visual detection would improve as a function of sound duration. Moreover, if the expected cross-modal effect reflects sustained visual perception, it should correlate positively with the improvement observed for genuinely longer visual stimuli. Results showed that detection sensitivity (d′) for the 24 ms visual stimulus was significantly enhanced when it was paired with longer auditory stimuli ranging from 60 to 96 ms in duration. Visual detection performance dropped to baseline levels with 190 ms sounds. Crucially, the enhancement for auditory durations of 60-96 ms correlated significantly with the d′ enhancement for visual stimuli lasting 60-96 ms in the control condition. We conclude that the duration of co-occurring auditory stimuli not only influences the perceived duration of visual stimuli but genuinely sustains visual perception.
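For reference, a minimal sketch of the sensitivity measure: in a two-interval forced-choice task, d′ is commonly estimated from the proportion of correct responses as √2 times its z-transform (a standard conversion, not taken from the paper):

```python
import numpy as np
from scipy.stats import norm

def dprime_2ifc(p_correct: float) -> float:
    """d' for a two-interval forced-choice task: sqrt(2) * z(proportion correct)."""
    p = np.clip(p_correct, 1e-3, 1 - 1e-3)  # avoid infinite z-scores at 0 or 1
    return np.sqrt(2) * norm.ppf(p)

print(dprime_2ifc(0.75))  # ~0.95
```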


1958, Vol 104 (437), pp. 1160-1164
Author(s): P. H. Venables, J. Tizard

Two earlier studies (Venables and Tizard, 1956a, b) on the reaction time (RT) of schizophrenics have shown that as the intensity of a visual stimulus is increased beyond an optimum point, RT to the stimulus increases. This “paradoxical” increase in RT is not shown by normal subjects, whose RT decreases as the intensity of the visual stimulus increases. It was also found that the paradoxical phenomenon with visual stimuli was shown only on an initial occasion of testing. When the experiment was repeated twenty-four hours later, although there was no alteration in the mean level of RT, the previously found pattern of increase in RT with increasing intensity was absent.


2019
Author(s): Apdullah Yayık, Yakup Kutlu, Gökhan Altan

Abstract. Background and objectives: Brain-computer interfaces (BCIs) aim to provide a neural communication platform for humans, in particular locked-in patients. In most cases event-related potentials (ERPs), voltage responses to a specific target stimulus averaged over time, play a key role in designing BCIs. For this reason, over the last several decades BCI researchers have focused heavily on signal processing methods to improve the quality of ERPs. However, designing visual stimuli with attention to their physical properties, together with fast and reliable machine learning algorithms for BCIs, remains relatively unexplored. Addressing these issues, the main contributions of this study are: (1) optimizing the visual stimulus in terms of size, color, and background; and (2) enhancing the learning capacity of the conventional extreme learning machine (ELM) using advanced linear algebra techniques. Methods: Images of two different sizes (small and big) and three different colors (blue, red, and colorful), each with four different backgrounds (white, black, and concentric), were designed and used as a single-object paradigm. A Hessenberg decomposition method was proposed for the learning process and compared with the conventional ELM and a multi-layer perceptron in terms of training duration and performance measures. Results: Performance measures for small colorful images with an orange concentric background were statistically higher than those of the others. Visual stimuli with a white background led to higher performance measures than those with a black background. Blue images improved P300 waves considerably more than red ones did. The Hessenberg decomposition method shortened training duration by a factor of 1.5 relative to the conventional ELM, with comparable performance measures. Conclusions: A visual stimulus model aimed at improving the quality of ERP responses and a machine learning algorithm based on the Hessenberg decomposition method are introduced, and their advantages are demonstrated in the context of BCIs. The methods and findings described in this study may pave the way for widespread applications, particularly in clinical health informatics.
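The abstract gives no implementation details, but the idea of replacing the usual pseudoinverse step of ELM training with a Hessenberg decomposition can be sketched as follows (hidden-layer size, ridge term, and the exact solve strategy are assumptions, not the authors' method):

```python
import numpy as np
from scipy.linalg import hessenberg

def elm_train_hessenberg(X, T, n_hidden=100, ridge=1e-6, seed=0):
    """Train an ELM, solving the normal equations via a Hessenberg
    decomposition of the square matrix H'H instead of a pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, fixed input weights
    b = rng.standard_normal(n_hidden)                # random, fixed biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden activations
    K = H.T @ H + ridge * np.eye(n_hidden)           # square, well-conditioned
    Hb, Q = hessenberg(K, calc_q=True)               # K = Q @ Hb @ Q.T
    beta = Q @ np.linalg.solve(Hb, Q.T @ (H.T @ T))  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```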


1992, Vol 67 (6), pp. 1447-1463
Author(s): K. Nakamura, A. Mikami, K. Kubota

1. The activity of single neurons was recorded extracellularly from the monkey amygdala while monkeys performed a visual discrimination task. The monkeys were trained to remember a visual stimulus during a delay period (0.5-3.0 s), to discriminate a new visual stimulus from the remembered stimulus, and to release a lever when the new stimulus was presented. Colored photographs (human faces, monkeys, foods, and nonfood objects) or computer-generated two-dimensional shapes (a yellow triangle, a red circle, etc.) were used as visual stimuli. 2. The activity of 160 task-related neurons was studied. Of these, 144 (90%) responded to visual stimuli, 13 (8%) fired during the delay period, and 9 (6%) responded to the reward. 3. Task-related neurons were categorized according to the way in which various stimuli activated them. First, to evaluate the proportion of all tested stimuli that elicited changes in the activity of a neuron, selectivity index 1 (SI1) was employed. Second, to evaluate the ability of a neuron to discriminate one stimulus from another, SI2 was employed. On the basis of the calculated values of SI1 and SI2, neurons were classified as selective or nonselective. Most visual neurons were categorized as selective (131/144), and a few as nonselective (13/144). Neurons active during the delay period were also categorized, as selective visual and delay neurons (6/13) and as nonselective delay neurons (7/13). 4. Responses of selective visual neurons had various temporal and stimulus-selective properties. Latencies ranged widely, from 60 to 300 ms. Response durations also ranged widely, from 20 to 870 ms. When the natures of the various effective stimuli were studied for each neuron, one-fourth of the responses of these neurons were considered to reflect some categorical aspect of the stimuli, such as human, monkey, food, or nonfood object. Furthermore, the responses of some neurons apparently reflected a certain behavioral significance of the stimuli that was separate from the task, such as the face of a particular person, smiling human faces, etc. 5. Nonselective visual neurons responded to a visual stimulus regardless of its nature. They also responded in the absence of a visual stimulus when the monkey anticipated the appearance of the next stimulus. 6. Selective visual and delay neurons fired in response to particular stimuli and throughout the subsequent delay periods. Nonselective delay neurons increased their discharge rates gradually during the delay period, and the discharge rate decreased after the next stimulus was presented. 7. Task-related neurons were identified in six histologically distinct nuclei of the amygdala.
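The abstract names the two indices without giving formulas; a minimal sketch of indices in that spirit (the criterion and normalization below are hypothetical) might look like this:

```python
import numpy as np

def si1(mean_rates, baseline_trials, z_threshold=2.0):
    """SI1-like index: fraction of tested stimuli whose mean response
    deviates from baseline by more than z_threshold baseline SDs."""
    z = np.abs(mean_rates - baseline_trials.mean()) / baseline_trials.std()
    return np.mean(z > z_threshold)

def si2(rate_a, rate_b):
    """SI2-like index: normalized contrast between the responses to two
    stimuli (0 = no discrimination, 1 = responds to one stimulus only)."""
    return abs(rate_a - rate_b) / (rate_a + rate_b)
```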


1995, Vol 12 (4), pp. 723-741
Author(s): W. Guido, S.-M. Lu, J.W. Vaughan, Dwayne W. Godwin, S. Murray Sherman

Relay cells of the lateral geniculate nucleus respond to visual stimuli in one of two modes: burst and tonic. The burst mode depends on the activation of a voltage-dependent Ca2+ conductance underlying the low threshold spike. This conductance is inactivated at depolarized membrane potentials, but when activated from hyperpolarized levels, it leads to a large, triangular, nearly all-or-none depolarization. Typically, riding its crest is a high-frequency barrage of action potentials. Low threshold spikes thus provide a nonlinear amplification allowing hyperpolarized relay neurons to respond to depolarizing inputs, including retinal EPSPs. In contrast, the tonic mode is characterized by a steady stream of unitary action potentials that more linearly reflects the visual stimulus. In this study, we tested possible differences in detection between response modes of 103 geniculate neurons by constructing receiver operating characteristic (ROC) curves for responses to visual stimuli (drifting sine-wave gratings and flashing spots). Detectability was determined from the ROC curves by computing the area under each curve, known as the ROC area. Most cells switched between modes during recording, evidently due to small shifts in membrane potential that affected the activation state of the low threshold spike. We found that the more often a cell responded in burst mode, the larger its ROC area. This was true for responses to optimal and nonoptimal visual stimuli, the latter including nonoptimal spatial frequencies and low stimulus contrasts. The larger ROC areas associated with burst mode were due to a reduced spontaneous activity and a roughly equivalent level of visually evoked response when compared to tonic mode. We performed a within-cell analysis on a subset of 22 cells that switched modes during recording. Every cell, whether tested with a low-contrast or high-contrast visual stimulus, exhibited a larger ROC area during its burst response mode than during its tonic mode. We conclude that burst responses better support signal detection than do tonic responses. Thus, burst responses, while less linear and perhaps less useful in providing a detailed analysis of visual stimuli, improve target detection. The tonic mode, with its more linear response, seems better suited for signal analysis rather than signal detection.
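A minimal sketch of the ROC-area computation described: sweep a spike-count criterion over driven versus spontaneous counts and integrate the resulting curve (the criterion sweep is the standard construction; the inputs are hypothetical trial-by-trial spike counts):

```python
import numpy as np

def roc_area(driven_counts, spontaneous_counts):
    """Area under the ROC curve comparing driven and spontaneous spike-count
    distributions (0.5 = undetectable signal, 1.0 = perfect detection)."""
    top = max(driven_counts.max(), spontaneous_counts.max()) + 2
    criteria = np.arange(top)                                  # sweep the criterion
    hits = np.array([(driven_counts >= c).mean() for c in criteria])
    fas = np.array([(spontaneous_counts >= c).mean() for c in criteria])
    return np.trapz(hits[::-1], fas[::-1])                     # trapezoidal AUC
```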

