Visual signals used in time-interval discrimination

2000 ◽  
Vol 17 (4) ◽  
pp. 551-556 ◽  
Author(s):  
Gerald Westheimer

Thresholds for the detection of differences in the duration of visual stimuli were determined for a variety of programs of stimulus onset and offset. Performance suffers when a time interval begins with an ON step and ends with another ON stimulus, compared with the standard ON–OFF stimulation, but the decrement is reversed when the light is ramped down to background during the interval. Neither the magnocellular nor the parvocellular stream can be excluded, because there is relatively little impairment of duration discrimination when the stimulus has low contrast or is heterochromatic at isoluminance. Performance at a variety of intensity levels suggests that sustained neural firing at an early stage of visual processing provides a background activity that prevents good temporal precision of signals.

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Atsushi Chiba ◽  
Kazunori Morita ◽  
Ken-ichi Oshio ◽  
Masahiko Inase

Abstract
To investigate the neuronal processing involved in the integration of auditory and visual signals for time perception, we examined neuronal activity in the prefrontal cortex (PFC) of macaque monkeys during a duration discrimination task with auditory and visual cues. In the task, two cues were consecutively presented for different durations between 0.2 and 1.8 s. Each cue was either auditory or visual and was followed by a delay period. After the second delay, subjects indicated whether the first or the second cue was longer. Cue- and delay-responsive neurons were found in PFC. Cue-responsive neurons mostly responded to either the auditory or the visual cue, and to either the first or the second cue. Neurons responsive to the first delay showed activity that changed depending on the first cue duration and were mostly sensitive to cue modality. Neurons responsive to the second delay exhibited activity that represented which cue, the first or the second, was presented longer. Nearly half of this activity representing order-based duration was sensitive to cue modality. These results suggest that temporal information from visual and auditory signals is processed separately in PFC in the early stage of duration discrimination and integrated for the final decision.


2020 ◽  
Author(s):  
Joshua J. Foster ◽  
William Thyer ◽  
Janna W. Wennberg ◽  
Edward Awh

Abstract
Covert spatial attention has a variety of effects on the responses of individual neurons. However, relatively little is known about the net effect of these changes on sensory population codes, even though perception ultimately depends on population activity. Here, we measured the electroencephalogram (EEG) in human observers (male and female), and isolated stimulus-evoked activity that was phase-locked to the onset of attended and ignored visual stimuli. Using an encoding model, we reconstructed spatially selective population tuning functions from the pattern of stimulus-evoked activity across the scalp. Our EEG-based approach allowed us to measure very early visually evoked responses occurring ~100 ms after stimulus onset. In Experiment 1, we found that covert attention increased the amplitude of spatially tuned population responses at this early stage of sensory processing. In Experiment 2, we parametrically varied stimulus contrast to test how this effect scaled with stimulus contrast. We found that the effect of attention on the amplitude of spatially tuned responses increased with stimulus contrast, and was well described by an increase in response gain (i.e., a multiplicative scaling of the population response). Together, our results show that attention increases the gain of spatial population codes during the first wave of visual processing.
Significance Statement
We know relatively little about how attention improves population codes, even though perception is thought to critically depend on population activity. In this study, we used an encoding-model approach to test how attention modulates the spatial tuning of stimulus-evoked population responses measured with EEG. We found that attention multiplicatively scales the amplitude of spatially tuned population responses. Furthermore, this effect was present within 100 ms of stimulus onset. Thus, our results show that attention improves spatial population codes by increasing their gain at this early stage of processing.
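The encoding-model reconstruction this abstract describes belongs to the inverted encoding model (IEM) family. The following is a minimal sketch of the general two-step IEM round trip on idealized noise-free data; the matrix shapes, variable names, and least-squares formulation are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def train_iem(B_train, C_train):
    """Estimate channel weights W (electrodes x channels) by least squares,
    under the linear model B = W @ C:  W = B C^T (C C^T)^{-1}."""
    return B_train @ C_train.T @ np.linalg.inv(C_train @ C_train.T)

def invert_iem(W, B_test):
    """Invert the trained weights to reconstruct channel responses from
    held-out data:  C_hat = (W^T W)^{-1} W^T B."""
    return np.linalg.inv(W.T @ W) @ W.T @ B_test
```

With more electrodes than hypothetical spatial channels and full-rank training data, the inversion recovers the channel responses exactly in the noise-free case; real EEG analyses would add cross-validation over trials and a basis of spatially tuned channel functions.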


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Daphné Rimsky-Robert ◽  
Viola Störmer ◽  
Jérôme Sackur ◽  
Claire Sergent

Abstract
Recent studies have demonstrated that visually cueing attention towards a stimulus location after its disappearance can facilitate visual processing of the target and increase task performance. Here, we tested whether such retro-cueing effects can also occur across different sensory modalities, as cross-modal facilitation has been shown in pre-cueing studies using auditory stimuli prior to the onset of a visual target. In the present study, participants detected low-contrast Gabor patches in a speeded response task. These patches were presented in the left or right visual periphery, preceded or followed by a lateralized, task-irrelevant sound at four stimulus-onset asynchronies (SOAs; −600 ms, −150 ms, +150 ms, +450 ms). We found that pre-cueing at the −150 ms SOA led to a general increase in detection performance irrespective of the sound's location relative to the target. On top of this temporal effect, sound cues also had a spatially specific effect, with further improvement when cue and target originated from the same location. Critically, in the short-SOA retro-cueing condition (+150 ms) the temporal effect was absent but the spatial effect was present. Drift-diffusion analysis of the response-time distributions allowed us to better characterize these effects. Overall, our results show that sounds can facilitate visual processing both pre- and retro-actively, indicative of a flexible, multisensory attentional system underlying our conscious visual experience.
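The drift-diffusion analysis mentioned above operates on response-time distributions like those this study collected. As a minimal sketch of the underlying generative model, the following simulates single-trial first-passage times of a simple two-boundary drift-diffusion process; the parameter values, function name, and Euler discretization are illustrative assumptions, not fitted to the study's data.

```python
import numpy as np

def simulate_ddm_rt(drift, boundary, noise=1.0, dt=0.001, t0=0.3,
                    max_t=3.0, rng=None):
    """One trial: accumulate noisy evidence until |x| crosses +/- boundary.
    Returns (RT in seconds including non-decision time t0, choice +1/-1),
    or (None, None) if no boundary is reached before max_t."""
    rng = np.random.default_rng(rng)
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
        if abs(x) >= boundary:
            return t0 + t, 1 if x > 0 else -1
    return None, None
```

Fitting such a model to observed RT distributions (e.g., comparing drift rate or non-decision time across cueing conditions) is how cueing effects are typically decomposed.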


2001 ◽  
Vol 15 (4) ◽  
pp. 256-274 ◽  
Author(s):  
Caterina Pesce ◽  
Rainer Bösel

Abstract
In the present study we explored the focusing of visuospatial attention in subjects practicing and not practicing activities with high attentional demands. Similar to the studies of Castiello and Umiltà (e.g., 1990), our experimental procedure was a variation of Posner's (1980) basic paradigm for exploring covert orienting of visuospatial attention. In a simple RT task, a peripheral cue of varying size was presented unilaterally or bilaterally from a central fixation point and followed by a target at different stimulus-onset asynchronies (SOAs). The target could occur validly inside the cue or invalidly outside the cue, with varying spatial relation to its boundary. Event-related brain potentials (ERPs) and reaction times (RTs) were recorded to target stimuli under the different task conditions. RT and ERP findings showed converging aspects as well as dissociations. Electrophysiological results revealed an amplitude modulation of the ERPs in the early and late Nd time intervals at both anterior and posterior scalp sites, which seems to be related to the effects of peripheral informative cues as well as to attentional expertise. The results were: (1) shorter-latency effects confirm the positive-going amplitude enhancement elicited by unilateral peripheral cues and strengthen the criticism against the neutrality of spatially nonpredictive peripheral cueing of all possible target locations, which is often presumed in behavioral studies; (2) longer-latency effects show that subjects with attentional expertise modulate the distribution of attentional resources in visual space differently than nonexperienced subjects. Skilled practice may lead to minimizing attentional costs by automatizing the use of a span of attention that is adapted to the most frequent task demands and by endogenously increasing the allocation of resources to cope with less usual attending conditions.


2021 ◽  
Vol 29 ◽  
pp. 297-309 ◽  
Author(s):  
Xiaohui Chen ◽  
Wenbo Sun ◽  
Dan Xu ◽  
Jiaojiao Ma ◽  
Feng Xiao ◽  
...  

BACKGROUND: Computed tomography (CT) imaging combined with artificial intelligence is important in the diagnosis and prognosis of lung diseases. OBJECTIVE: This study aimed to investigate temporal changes of quantitative CT findings in patients with COVID-19 of three clinical types, including moderate, severe, and non-survivors, and to predict severe cases at an early stage from the results. METHODS: One hundred and two patients with confirmed COVID-19 were included in this study. Based on the time interval between onset of symptoms and the CT scan, four stages were defined: Stage-1 (0–7 days); Stage-2 (8–14 days); Stage-3 (15–21 days); Stage-4 (>21 days). Eight parameters, the infection volume and percentage of the whole lung in four different Hounsfield unit (HU) ranges ((−∞, −750), [−750, −300), [−300, 50), and [50, +∞)), were calculated and compared between the groups. RESULTS: The infection volume and percentage in the four HU ranges peaked in Stage-2. Among the three groups, the highest proportion of HU [−750, 50) in the infected regions was found in non-survivors. CONCLUSIONS: The findings indicate rapid deterioration in the first week after onset of symptoms in non-survivors. A higher proportion of HU [−750, 50) in the lesion area might be a potential biomarker for poor prognosis in patients with COVID-19.
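The per-range quantification described in METHODS reduces to binning lesion voxels by HU value. Below is a minimal sketch of that computation; the range edges come from the abstract, while the function name, array layout, lesion mask, and voxel volume are illustrative assumptions, not the study's actual software.

```python
import numpy as np

# Interior edges of the four left-closed HU ranges:
# (-inf, -750), [-750, -300), [-300, 50), [50, +inf)
HU_EDGES = [-750, -300, 50]

def hu_range_metrics(lung_hu, lesion_mask, voxel_volume_ml=1.0):
    """Return (volumes, percentages) of infected tissue per HU range.
    `lung_hu` holds HU values of all lung voxels, `lesion_mask` flags the
    infected ones; percentages are relative to the whole lung."""
    lesion_hu = lung_hu[lesion_mask]
    idx = np.digitize(lesion_hu, HU_EDGES)    # 0..3 = index of the HU range
    counts = np.bincount(idx, minlength=4)
    volumes = counts * voxel_volume_ml
    percentages = 100.0 * counts / lung_hu.size
    return volumes, percentages
```

In practice the lesion mask would come from a segmentation model and the voxel volume from the CT header, yielding the eight parameters (four volumes, four percentages) per scan.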


1999 ◽  
Vol 11 (1) ◽  
pp. 21-66 ◽  
Author(s):  
Douglas A. Miller ◽  
Steven W. Zucker

We present a model of visual computation based on tightly interconnected cliques of pyramidal cells. It leads to a formal theory of cell assemblies, a specific relationship between correlated firing patterns and abstract functionality, and a direct calculation relating estimates of cortical cell counts to orientation hyperacuity. Our network architecture is unique in that (1) it supports a mode of computation that is both reliable and efficient; (2) the current-spike relations are modeled as an analog dynamical system in which the requisite computations can take place on the time scale required for an early stage of visual processing; and (3) the dynamics are triggered by the spatiotemporal response of cortical cells. This final point could explain why moving stimuli improve vernier sensitivity.


1999 ◽  
Vol 11 (3) ◽  
pp. 300-311 ◽  
Author(s):  
Edmund T. Rolls ◽  
Martin J. Tovée ◽  
Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been extensively used psychophysically, there is little direct evidence for the effects of visual masking on neuronal responses. To investigate the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have shown that the response of the neurons is interrupted by the mask. Under conditions when humans can just identify the stimulus, with stimulus onset asynchronies (SOA) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available is greatly decreased as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that allow the neurons to have their main response in 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
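The bit values this abstract reports are mutual-information estimates between stimulus identity and neuronal response. As a minimal sketch of the underlying quantity, the following computes a plug-in estimate of I(S;R) from paired discrete labels; the binning of spike counts into response labels and the variable names are illustrative assumptions, not the authors' estimator (which also corrected for limited sampling).

```python
import numpy as np

def mutual_information_bits(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired arrays of discrete
    stimulus and response labels."""
    stimuli = np.asarray(stimuli)
    responses = np.asarray(responses)
    n = len(stimuli)
    _, s_idx = np.unique(stimuli, return_inverse=True)
    _, r_idx = np.unique(responses, return_inverse=True)
    joint = np.zeros((s_idx.max() + 1, r_idx.max() + 1))
    np.add.at(joint, (s_idx, r_idx), 1.0)   # joint counts
    joint /= n                              # joint probabilities
    ps = joint.sum(axis=1, keepdims=True)   # marginal P(S)
    pr = joint.sum(axis=0, keepdims=True)   # marginal P(R)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))
```

A response perfectly predictive of a two-way stimulus choice yields 1 bit; an uninformative one yields 0 bits, bracketing the 0.1 to 0.5 bit range reported above.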


2012 ◽  
Vol 24 (2) ◽  
pp. 521-529 ◽  
Author(s):  
Frank Oppermann ◽  
Uwe Hassler ◽  
Jörg D. Jescheniak ◽  
Thomas Gruber

The human cognitive system is highly efficient in extracting information from our visual environment. This efficiency is based on acquired knowledge that guides our attention toward relevant events and promotes the recognition of individual objects as they appear in visual scenes. The experience-based representation of such knowledge contains not only information about the individual objects but also about relations between them, such as the typical context in which individual objects co-occur. The present EEG study aimed at exploring the availability of such relational knowledge in the time course of visual scene processing, using oscillatory evoked gamma-band responses as a neural correlate for a currently activated cortical stimulus representation. Participants decided whether two simultaneously presented objects were conceptually coherent (e.g., mouse–cheese) or not (e.g., crown–mushroom). We obtained increased evoked gamma-band responses for coherent scenes compared with incoherent scenes beginning as early as 70 msec after stimulus onset within a distributed cortical network, including the right temporal, the right frontal, and the bilateral occipital cortex. This finding provides empirical evidence for the functional importance of evoked oscillatory activity in high-level vision beyond the visual cortex and, thus, gives new insights into the functional relevance of neuronal interactions. It also indicates the very early availability of experience-based knowledge that might be regarded as a fundamental mechanism for the rapid extraction of the gist of a scene.


2006 ◽  
Vol 18 (8) ◽  
pp. 1394-1405 ◽  
Author(s):  
Gijs Plomp ◽  
Lichan Liu ◽  
Cees van Leeuwen ◽  
Andreas A. Ioannides

We investigated the process of amodal completion in a same-different experiment in which test pairs were preceded by sequences of two figures. The first of these could be congruent to a global or local completion of an occluded part in the second figure, or a mosaic interpretation of it. We recorded and analyzed the magnetoencephalogram for the second figures. Compared to control conditions, in which unrelated primes were shown, occlusion and mosaic primes reduced the peak latency and amplitude of neural activity evoked by the occlusion patterns. Compared to occlusion primes, mosaic ones reduced the latency but increased the amplitude of evoked neural activity. Processes relating to a mosaic interpretation of the occlusion pattern, therefore, can dominate in an early stage of visual processing. The results did not provide evidence for the presence of a functional “mosaic stage” in completion per se, but characterize the mosaic interpretation as a qualitatively special one that can rapidly emerge in visual processing when context favors it.


Author(s):  
Zhiao Zhao ◽  
Yong Zhang ◽  
Guanjun Liu ◽  
Jing Qiu

Sample allocation and selection technology is of great significance in the test-plan design of prognostics validation. In existing research, the differing importance of prognostics samples taken at different moments of a single failure's degradation process is not considered: normally, prognostics samples are generated under the same time-interval mechanism. However, a prognostics system may have low prognostics accuracy because of the small amount of failure degradation and the measurement randomness in the early stage of a failure degradation process. Historical degradation data on equipment failure modes are collected, and a degradation process model based on the multi-stage Wiener process is established. Based on this model, we choose four parameters to describe the different degradation stages of a degradation process. From these four parameters, the sample selection weight of each degradation stage is calculated and used to select prognostics samples. Taking a bearing wear fault of a helicopter transmission device as an example, its degradation process is modeled and the sample selection weights are calculated. According to the sample selection weight of each degradation stage, we accomplish prognostics sample selection for the bearing wear fault. The results show that the prognostics sample selection method proposed in this article has good applicability.
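A multi-stage Wiener process, as named in the abstract, is a drift-diffusion model whose drift and diffusion coefficients change at stage boundaries. The following is a minimal simulation sketch; the stage end-times, drifts, and diffusion coefficients are illustrative assumptions, not the article's fitted parameters or its four stage-description parameters.

```python
import numpy as np

def simulate_multistage_wiener(stage_ends, drifts, sigmas, dt=1.0, rng=None):
    """Degradation path X(t) with stage-specific drift mu_k and diffusion
    sigma_k:  dX = mu_k dt + sigma_k dW within stage k.
    `stage_ends` lists the end time of each stage; the last entry is the
    total simulation horizon."""
    rng = np.random.default_rng(rng)
    n = int(stage_ends[-1] / dt)
    x = np.zeros(n + 1)
    for i in range(n):
        t = i * dt
        k = np.searchsorted(stage_ends, t, side="right")  # current stage
        x[i + 1] = x[i] + drifts[k] * dt + sigmas[k] * np.sqrt(dt) * rng.normal()
    return x
```

Early stages with small drift produce little accumulated degradation relative to measurement noise, which is exactly the regime the abstract argues should receive lower sample selection weight.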

