visual processing
Recently Published Documents

Total documents: 3406 (last five years: 903)
H-index: 113 (last five years: 9)

2022 · Vol 12 (1)
Author(s): Jean-Philippe Thivierge, Artem Pilzak

Abstract
Communication across anatomical areas of the brain is key to both sensory and motor processes. Dimensionality reduction approaches have shown that the covariation of activity across cortical areas follows well-delimited patterns. Some of these patterns fall within the "potent space" of neural interactions and generate downstream responses; other patterns fall within the "null space" and prevent the feedforward propagation of synaptic inputs. Despite growing evidence for the role of null space activity in visual processing as well as preparatory motor control, a mechanistic understanding of its neural origins is lacking. Here, we developed a mean-rate model that allowed for the systematic control of feedforward propagation by potent and null modes of interaction. In this model, altering the number of null modes led to no systematic changes in firing rates, pairwise correlations, or mean synaptic strengths across areas, making it difficult to characterize feedforward communication with common measures of functional connectivity. A novel measure termed the null ratio captured the proportion of null modes relayed from one area to another. Applied to simultaneous recordings of primate cortical areas V1 and V2 during image viewing, the null ratio revealed that feedforward interactions have a broad null space that may reflect properties of visual stimuli.
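For intuition, here is a minimal sketch of how a null ratio of this kind could be computed, assuming the feedforward mapping between areas is summarized by a weight matrix W; this is an illustration of the idea, not the authors' implementation:

```python
import numpy as np

def null_ratio(W, X, tol=1e-10):
    """Illustrative 'null ratio': fraction of source-area activity variance
    lying in the null space of a feedforward weight matrix W (an assumed
    formulation). W: (n_target, n_source); X: (n_source, n_timepoints)."""
    _, s, Vt = np.linalg.svd(W)
    rank = int(np.sum(s > tol))
    potent = Vt[:rank]            # modes that propagate downstream
    null = Vt[rank:]              # modes blocked by the feedforward mapping
    var_potent = np.sum(np.var(potent @ X, axis=1))
    var_null = np.sum(np.var(null @ X, axis=1))
    return var_null / (var_null + var_potent)

# Example: a rank-2 mapping from 5 source neurons leaves a 3-dimensional null space
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2)) @ rng.normal(size=(2, 5))
X = rng.normal(size=(5, 1000))
print(null_ratio(W, X))
```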


2022 · Vol 8 (1)
Author(s): Kyongje Sung, Hanna Glazer, Jessica O’Grady, Mindy L. McEntee, Laura Bosley, ...

Abstract
Background: Although visual abnormalities are considered common in individuals with autism spectrum disorders, the associated electrophysiological markers have remained elusive. One impediment has been that methodological challenges often preclude testing individuals with low-functioning autism (LFA).
Methods: In this feasibility and pilot study, we tested a hybrid visual evoked potential paradigm tailored to individuals with LFA that combines passively presented visual stimuli, to elicit scalp-recorded evoked responses, with a behavioral paradigm to maintain visual attention. We conducted a pilot study to explore differences in visual evoked response patterns across three groups: individuals with LFA, with high-functioning autism (HFA), and with typical development.
Results: All participants with LFA met criteria for study feasibility by completing the recordings and producing measurable cortical evoked waveform responses. The LFA group had longer (delayed) cortical response latencies on average as compared with the HFA and typical development groups. We also observed group differences in visually induced alpha spectral power: the LFA group showed little to no prestimulus alpha activity, in contrast to the HFA and typical development groups, which showed increased prestimulus alpha activity. This observation was confirmed by bootstrapped confidence intervals, suggesting that the absence of prestimulus alpha power may be a potential electrophysiological marker of LFA.
Conclusion: Our results confirm the utility of tailoring visual electrophysiology paradigms to individuals with LFA in order to facilitate inclusion of individuals across the autism spectrum in studies of visual processing.
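As an illustration of this kind of analysis (not the authors' code), prestimulus alpha power and a percentile bootstrap confidence interval could be computed as follows, assuming single-channel epoched EEG:

```python
import numpy as np
from scipy.signal import welch

def prestim_alpha_power(epochs, fs, band=(8.0, 12.0)):
    """Mean prestimulus alpha-band power for one participant.
    epochs: (n_trials, n_samples) single-channel EEG, prestimulus window only."""
    freqs, psd = welch(epochs, fs=fs, nperseg=min(256, epochs.shape[1]), axis=1)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, in_band].mean()

def bootstrap_ci(values, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a group's mean alpha power."""
    rng = np.random.default_rng(seed)
    means = [rng.choice(values, size=len(values), replace=True).mean()
             for _ in range(n_boot)]
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])
```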


2022 · Vol 15
Author(s): Zachary J. Sharpe, Angela Shehu, Tomomi Ichinose

In the retina, evolutionary changes can be traced in the topography of photoreceptors. The shape of the visual streak depends on the height of the animal and its habitat, namely woods, prairies, or mountains. The distribution of distinct wavelength-sensitive cones is also unique to each animal. For example, UV and green cones reside in the ventral and dorsal regions of the mouse retina, respectively, whereas in the rat retina these cones are homogeneously distributed. In contrast to the extensively studied distributions of photoreceptors and third-order neurons, the distribution of bipolar cells is not well understood. We utilized two enhanced green fluorescent protein (EGFP) mouse lines, Lhx4-EGFP (Lhx4) and 6030405A18Rik-EGFP (Rik), to examine the topographic distributions of bipolar cells in the retina. First, we characterized their GFP-expressing cells using type-specific markers. We found that GFP was expressed by type 2, type 3a, and type 6 bipolar cells in the Rik mice and by type 3b, type 4, and type 5 bipolar cells in the Lhx4 mice. All these types are achromatic. We then examined the distributions of bipolar cells along the four cardinal directions and at three eccentricities of the retinal tissue. In the Rik mice, GFP-expressing bipolar cells occurred at a higher density in the nasal region than in the temporal retina, while their number did not differ along the ventral-dorsal axis. In contrast, in the Lhx4 mice, GFP-expressing cells occurred at a higher density in the ventral region than in the dorsal retina, with no difference along the nasal-temporal axis. Furthermore, we examined which bipolar cell types contributed to the asymmetric distribution in the Rik mice. We found that type 3a bipolar cells occurred at a higher density in the temporal region, whereas type 6 bipolar cells were denser in the nasal region. The asymmetry of these bipolar cell types shaped the uneven distribution of GFP cells in the Rik mice. In conclusion, we found that a subset of achromatic bipolar cells is asymmetrically distributed in the mouse retina, suggesting unique roles in achromatic visual processing.
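For illustration, a regional density comparison of this kind might look like the following sketch; the counts are entirely hypothetical, and the paired t-test is one reasonable choice rather than necessarily the authors' statistic:

```python
import numpy as np
from scipy import stats

def density(counts, area_mm2):
    """Cells per mm^2 in a sampled retinal region."""
    return np.asarray(counts, dtype=float) / area_mm2

# Entirely hypothetical per-animal GFP-cell counts in 0.1 mm^2 patches sampled
# at matched eccentricities on the nasal and temporal sides of the retina
nasal = density([520, 495, 510, 535], area_mm2=0.1)
temporal = density([430, 450, 410, 440], area_mm2=0.1)

t, p = stats.ttest_rel(nasal, temporal)   # paired within-animal comparison
print(f"nasal vs temporal: t = {t:.2f}, p = {p:.3f}")
```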


2022
Author(s): Yongrong Qiu, David A Klindt, Klaudia P Szatko, Dominic Gonschorek, Larissa Hoefling, ...

Neural system identification aims to learn the response function of neurons to arbitrary stimuli from experimentally recorded data, but typically does not leverage coding principles such as efficient coding of natural environments. Visual systems, however, have evolved to efficiently process input from the natural environment. Here, we present a normative network regularization for system identification models that incorporates, as a regularizer, the efficient coding hypothesis, which states that neural response properties of sensory representations are strongly shaped by the need to preserve most of the stimulus information with limited resources. Using this approach, we explored whether a system identification model can be improved by sharing its convolutional filters with those of an autoencoder that aims to efficiently encode natural stimuli. To this end, we built a hybrid model to predict the responses of retinal neurons to noise stimuli. This approach not only yielded higher performance than the stand-alone system identification model but also produced more biologically plausible filters. We found these results to be consistent for retinal responses to different stimuli and across model architectures. Moreover, our normatively regularized model performed particularly well in predicting responses of direction-of-motion-sensitive retinal neurons. In summary, our results support the hypothesis that efficiently encoding environmental inputs can improve system identification models of early visual processing.
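As a sketch of the shared-filter idea, the toy PyTorch model below couples a system-identification readout and an autoencoder decoder to a single convolutional core; the architecture, layer sizes, and loss weighting are placeholder assumptions, not the authors' published configuration:

```python
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    """Toy version of the shared-filter idea: one convolutional core feeds
    both an autoencoder decoder (efficient-coding branch) and a neural
    readout (system-identification branch). Sizes are placeholders."""
    def __init__(self, n_neurons, n_filters=16):
        super().__init__()
        self.core = nn.Sequential(
            nn.Conv2d(1, n_filters, kernel_size=9, padding=4),
            nn.ReLU(),
        )
        self.decoder = nn.ConvTranspose2d(n_filters, 1, kernel_size=9, padding=4)
        self.readout = nn.Sequential(
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(n_filters * 16, n_neurons), nn.Softplus(),
        )

    def forward(self, x):
        z = self.core(x)                     # filters shared by both branches
        return self.readout(z), self.decoder(z)

def hybrid_loss(rates, responses, recon, stimuli, weight=1.0):
    """Poisson response loss plus reconstruction error as the normative
    regularizer; 'weight' trades prediction against efficient coding."""
    poisson = (rates - responses * torch.log(rates + 1e-8)).mean()
    return poisson + weight * ((recon - stimuli) ** 2).mean()
```

The regularization strength (here `weight`) controls how strongly the shared filters are pulled toward an efficient code of the stimuli.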


2022
Author(s): Yujia Peng, Joseph M Burling, Greta K Todorova, Catherine Neary, Frank E Pollick, ...

When viewing the actions of others, we not only see patterns of body movements but also "see" the intentions and social relations of people, enabling us to understand the surrounding social environment. Previous research has shown that experienced forensic examiners, Closed Circuit Television (CCTV) operators, show superior performance to novices in identifying and predicting hostile intentions from surveillance footage. However, it remains largely unknown what visual content CCTV operators actively attend to when viewing surveillance footage, and whether CCTV operators develop different strategies for active information seeking than novices do. In this study, we conducted a computational analysis of the gaze-centered stimuli derived from the eye movements of experienced CCTV operators and novices as they viewed the same surveillance footage. These analyses examined how low-level visual features and object-level semantic features contribute to the attentive gaze patterns of the two groups of participants. Low-level image features were extracted by a visual saliency model, whereas object-level semantic features were extracted from gaze-centered regions by a deep convolutional neural network (DCNN), AlexNet. We found that the visual regions attended by CCTV operators versus novices can be reliably classified by patterns of saliency features and DCNN features. Additionally, CCTV operators showed greater inter-subject correlation in attending to saliency features and DCNN features than did novices. These results suggest that the looking behavior of CCTV operators differs from that of novices in actively attending to different patterns of saliency and semantic features at both low and high levels of visual processing. Expertise in selectively attending to informative features at different levels of the visual hierarchy may play an important role in facilitating the efficient detection of social relationships between agents and the prediction of harmful intentions.
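To illustrate the feature-extraction step, here is a short sketch using a pretrained AlexNet from torchvision as a fixed feature extractor for gaze-centered patches, with the group classifier left as a comment; the variables X and y are assumptions about how such a dataset would be assembled:

```python
import torch
from torchvision import models
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

weights = models.AlexNet_Weights.DEFAULT
alexnet = models.alexnet(weights=weights).eval()
prep = weights.transforms()            # standard resize + normalization

@torch.no_grad()
def dcnn_features(patch):
    """Flattened AlexNet conv features for one gaze-centered patch (PIL image)."""
    return alexnet.features(prep(patch).unsqueeze(0)).flatten().numpy()

# Assumed downstream step: X stacks feature vectors for operator- and
# novice-attended patches, y holds group labels (0 = novice, 1 = operator).
# scores = cross_val_score(LinearSVC(), X, y, cv=5)
```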


2022 · pp. 1-54
Author(s): Doris Voina, Stefano Recanatesi, Brian Hu, Eric Shea-Brown, Stefan Mihalas

Abstract
As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them; this is a particular case of a flexible architecture that permits multiple related computations within a single circuit. Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatiotemporal surround modulation, achieving superior denoising performance compared to circuits in which only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.
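A toy rate model illustrates the disinhibitory switch motif: VIP inhibits SST, which inhibits the excitatory population, so driving VIP lets feedforward input pass through. All weights and time constants below are illustrative assumptions, not fitted parameters from the paper:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def step(r, ff_input, vip_drive, dt=1e-3, tau=0.02):
    """One Euler step of a toy rate circuit with E, SST, and VIP populations."""
    rE, rS, rV = r
    drE = (-rE + relu(ff_input - 1.2 * rS)) / tau   # SST inhibits E
    drS = (-rS + relu(1.0 - 1.5 * rV)) / tau        # VIP inhibits SST
    drV = (-rV + relu(vip_drive)) / tau             # context signal drives VIP
    return r + dt * np.array([drE, drS, drV])

# With vip_drive = 0, SST suppresses E (one context); with vip_drive > 0,
# SST is silenced and the feedforward input propagates (the other context).
r = np.zeros(3)
for _ in range(2000):
    r = step(r, ff_input=1.0, vip_drive=1.0)
print(r)   # E settles near the feedforward input once SST is disinhibited away
```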


2022 · Vol 18 (1) · pp. e1009771
Author(s): Eline R. Kupers, Noah C. Benson, Marisa Carrasco, Jonathan Winawer

Visual performance varies around the visual field. It is best near the fovea and declines toward the periphery, and at iso-eccentric locations it is best on the horizontal meridian, intermediate on the lower vertical meridian, and poorest on the upper vertical meridian. The fovea-to-periphery performance decline is linked to decreases in cone density, retinal ganglion cell (RGC) density, and V1 cortical magnification factor (CMF) as eccentricity increases. The origins of the polar angle asymmetries are not well understood. Optical quality and cone density vary across the retina, but recent computational modeling has shown that these factors can account for only a small fraction of the behavioral asymmetries. Here, we investigate how visual processing beyond the cone photon absorptions contributes to polar angle asymmetries in performance. First, we quantify the extent of asymmetries in cone density, midget RGC density, and V1 CMF. We find that both polar angle asymmetries and eccentricity gradients increase from cones to mRGCs, and from mRGCs to cortex. Second, we extend our previously published computational observer model to quantify the contribution of phototransduction by the cones and spatial filtering by mRGCs to behavioral asymmetries. Starting with photons emitted by a visual display, the model simulates the effects of human optics, cone isomerizations, phototransduction, and mRGC spatial filtering. The model performs a forced-choice orientation discrimination task on mRGC responses using a linear support vector machine classifier. The model shows that asymmetries in a decision maker’s performance across polar angle are greater when assessing the photocurrents than when assessing isomerizations, and greater still when assessing mRGC signals. Nonetheless, the polar angle asymmetries of the mRGC outputs are still considerably smaller than those observed in human performance. We conclude that cone isomerizations, phototransduction, and the spatial filtering properties of mRGCs contribute to polar angle performance differences, but that a full account of these differences will entail an additional contribution from cortical representations.
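The final classification stage can be sketched as follows, with synthetic stand-in responses in place of the model's simulated mRGC signals; the population size and the 0.15 signal scale are arbitrary assumptions for illustration:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_cells = 400, 100

# Hypothetical mRGC population responses to two stimulus orientations; a
# small mean shift stands in for the orientation signal that survives
# optics, isomerization, phototransduction, and spatial filtering.
signal = rng.normal(0.0, 1.0, n_cells)
X = rng.normal(0.0, 1.0, (n_trials, n_cells))
y = rng.integers(0, 2, n_trials)
X[y == 1] += 0.15 * signal

# Linear SVM performs the forced-choice orientation discrimination
acc = cross_val_score(LinearSVC(), X, y, cv=10).mean()
print(f"classifier accuracy: {acc:.2f}")
```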


2022
Author(s): Sarune Savickaite, Neil McDonnell, David Simmons

One approach to characterizing human perceptual organization is to distinguish global and local processing. In visual perception, global processing enables us to extract the ‘gist’ of the visual information, while local processing helps us to perceive the details. Individual differences in these two types of visual processing have been found in conditions like autism and ADHD. The Rey-Osterrieth Complex Figure (ROCF) test is commonly used to investigate differences between local and global processing. Whilst Virtual Reality (VR) has become more accessible, cheaper, and widely used in psychological research, no previous study has investigated local versus global perceptual differences using immersive technology. In this study, we investigated individual differences in local and global processing as a function of autistic and ADHD traits. The ROCF was presented in the virtual environment and a standard protocol for using the figure was followed. A novel method of quantitative data extraction was used, which is described in greater detail in this paper. Whilst some performance differences were found between experimental conditions, no relationship was observed between these differences and participants’ levels of autistic and ADHD traits. Limitations of the study and implications of the novel methodology are discussed.
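As one hypothetical way such VR drawing data could be quantified (the paper's actual extraction method is described in its own text, not reproduced here), a 'global-first' score could rank-correlate drawing order with stroke length, on the assumption that long structural strokes reflect global processing:

```python
import numpy as np
from scipy.stats import spearmanr

def stroke_length(points):
    """Total path length of one VR drawing stroke, points: (n_points, 3)."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

def global_first_index(strokes):
    """Hypothetical 'global-first' score: positive when long (structural)
    strokes are drawn before short (detail) strokes, negative otherwise."""
    lengths = [stroke_length(s) for s in strokes]
    rho, _ = spearmanr(np.arange(len(strokes)), lengths)
    return -rho   # early long strokes => negative rho => positive score
```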


2022
Author(s): Annie Warman, Stephanie Rossit, George Law Malcolm, Allan Clark

It has been repeatedly shown that pictures of graspable objects can facilitate visual processing and motor responses, even in the absence of reach-to-grasp actions, an effect often attributed to the concept of affordances, originally introduced by Gibson (1979). A classic demonstration of this is the handle compatibility effect, characterised by faster reaction times (RTs) when the orientation of a graspable object’s handle is compatible with the hand used to respond, even when handle orientation is task irrelevant. Nevertheless, whether the faster RTs are due to affordances or to spatial compatibility effects has been extensively debated. In the proposed studies, we will use a stimulus-response compatibility paradigm to investigate, first, whether we can replicate the handle compatibility effect while controlling for spatial compatibility. Participants will respond with left- or right-handed keypresses to whether the object is upright or inverted and, in separate blocks, to whether the object is red or green. RTs will be analysed using repeated-measures ANOVAs. In line with an affordance account, we hypothesise that there will be larger handle compatibility effects for upright/inverted judgements than for colour judgements, as colour judgements do not require object identification and are not thought to elicit affordances. Second, we will investigate whether the handle compatibility effect shows a lower visual field (VF) advantage, in line with the functional lower VF advantages observed for hand actions. We expect larger handle compatibility effects for objects viewed in the lower VF than in the upper VF, given that the lower VF is the space in which actions most frequently occur.
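A minimal sketch of the planned repeated-measures analysis, assuming long-format trial data; the file name and column names are placeholders, not from the registration:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format trial data with columns:
#   subject, compatibility ('compatible'/'incompatible'),
#   task ('orientation'/'colour'), rt (ms)
df = pd.read_csv("handle_rts.csv")          # placeholder file name

res = AnovaRM(df, depvar="rt", subject="subject",
              within=["compatibility", "task"],
              aggregate_func="mean").fit()  # average trials within each cell
print(res)
```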


Vision · 2022 · Vol 6 (1) · pp. 3
Author(s): Rébaï Soret, Pom Charras, Christophe Hurter, Vsevolod Peysakhovich

Recent studies on covert attention have suggested that information in front of us is processed differently depending on whether it is physically located in front of us or is a reflection of information behind us (mirror information). This difference in processing suggests that different processes direct our attention to objects in front of us (front space) and behind us (rear space). In this study, we investigated the effects of attentional orienting in front and rear space following visual or auditory endogenous cues. Twenty-one participants performed a modified version of the Posner paradigm in virtual reality during a spaceship discrimination task. An eye tracker integrated into the virtual reality headset was used to ensure that participants did not move their eyes and relied on covert attention. The results show that informative cues produced faster response times than non-informative cues, but no impact on target identification was observed. In addition, we observed faster response times when the target occurred in front space rather than in rear space. These results are consistent with a differentiation of the orienting cognitive process between front and rear space. Several explanations are discussed. No effect was found on participants’ eye movements, suggesting that they did not use overt attention to improve task performance.
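For illustration, the cueing effect by space could be computed from a trial table like the following; the file and column names are assumptions, not the study's actual data layout:

```python
import pandas as pd

# Hypothetical trial table with columns: space ('front'/'rear'),
# cue ('informative'/'non-informative'), rt (ms)
trials = pd.read_csv("posner_vr_trials.csv")   # placeholder file name

means = trials.groupby(["space", "cue"])["rt"].mean().unstack()
cueing_effect = means["non-informative"] - means["informative"]
print(means)
print("\ncueing effect (ms) by space:")
print(cueing_effect)
```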

