Altered neuronal activity in the visual processing region of eye-fluke-infected fish

Parasitology ◽  
2020 ◽  
pp. 1-7
Author(s):  
Anthony Stumbo ◽  
Robert Poulin ◽  
Brandon Ruehle

Abstract Fish, like most vertebrates, are dependent on vision to varying degrees for a variety of behaviours such as predator avoidance and foraging. Disruption of this key sensory system should therefore have some impact on the ability of fish to execute these tasks. Eye-flukes, such as Tylodelphys darbyi, often infect the eyes of fish, where they are known to inflict varying degrees of visual impairment. In New Zealand, T. darbyi infects the eyes of Gobiomorphus cotidianus, a freshwater fish, where it resides in the vitreous chamber between the lens and retina. Here, we investigate whether the presence of the parasite in the eye affects neuronal information transfer, using the c-Fos gene as a proxy for neuron activation. We hypothesized that the parasite would reduce visual information entering the eye and therefore result in lower c-Fos expression. Interestingly, however, c-Fos expression increased with T. darbyi intensity when fish were exposed to flashes of light. Our results suggest a mechanism for parasite-induced visual disruption when infection causes no obvious pathology: the more T. darbyi present, the more visual stimuli the fish is presented with, and as such it may experience difficulties in distinguishing various features of its external environment.

2013 ◽  
Vol 340 ◽  
pp. 339-343
Author(s):  
Heng Zhang ◽  
Jie Wang

The visual system is the most important sensory system through which humans perceive the external world. Because a variety of diseases and injuries continue to increase the number of people suffering blindness or visual loss, research on visual cortex prostheses is expected to provide a way to restore sight to the blind. Based on the unique information-transfer mechanisms of the biological visual system, a visual cortex prosthesis framework is designed, and for the core module of the prosthesis an image-information coding strategy is proposed that draws on the selective characteristics of simple cells, the sparse response properties of the visual system, synchronous oscillations, and other mechanisms. When the algorithm is applied to the analysis of real images and compared with a conventional sparse representation method, the strategy can, while assuring image quality, characterize the important information of natural images using as few neurons as possible, thereby effectively reducing the number of stimulation electrodes the prosthesis must embed in the cortex and achieving better information transfer.
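As a rough illustration of the sparse-coding idea behind such a strategy (the fewer active units needed to represent an image, the fewer stimulation electrodes required), here is a minimal matching-pursuit sketch. The dictionary, patch values, and atom budget are invented for illustration; the coding strategy described above is far richer than this:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy sparse approximation: represent `signal` with at most
    `n_atoms` dictionary atoms (each atom assumed unit-norm)."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_atoms):
        # Pick the atom most correlated with the current residual.
        best = max(range(len(dictionary)),
                   key=lambda k: abs(dot(residual, dictionary[k])))
        c = dot(residual, dictionary[best])
        coeffs[best] = coeffs.get(best, 0.0) + c
        residual = [r - c * d for r, d in zip(residual, dictionary[best])]
    return coeffs, residual

# Toy 4-sample "image patch" and a unit-norm dictionary (the standard
# basis here, purely for illustration).
atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
patch = [0.9, 0.0, 0.1, 0.0]
coeffs, residual = matching_pursuit(patch, atoms, n_atoms=2)
print(coeffs)  # two active atoms suffice to capture this patch
```

With only two active atoms the residual is driven to zero, which is the sense in which a sparse code can convey the important image content through few stimulation sites.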


2020 ◽  
Author(s):  
Sanjeev Nara ◽  
Mikel Lizarazu ◽  
Craig G Richter ◽  
Diana C Dima ◽  
Mathieu Bourguignon ◽  
...  

Abstract Predictive processing has been proposed as a fundamental cognitive mechanism to account for how the brain interacts with the external environment via its sensory modalities. The brain processes external information about the content (i.e., “what”) and timing (i.e., “when”) of environmental stimuli to update an internal generative model of the world around it. However, the interaction between “what” and “when” has received very little attention when focusing on vision. In this magnetoencephalography (MEG) study we investigate how the processing of feature-specific information (i.e., “what”) is affected by temporal predictability (i.e., “when”). In line with previous findings, we observed a suppression of evoked neural responses in the visual cortex for predictable stimuli. Interestingly, we observed that temporal uncertainty enhances this expectation suppression effect. This suggests that in temporally uncertain scenarios the neurocognitive system relies more on internal representations and invests fewer resources in integrating bottom-up information. Indeed, temporal decoding analysis indicated that visual features are encoded by the neural system for a shorter period when temporal uncertainty is higher. This supports the view that visual information is maintained active for less time when a stimulus onset is unpredictable than when it is predictable. These findings highlight the greater reliance of the visual system on internal expectations when the temporal dynamics of the external environment are less predictable.


1983 ◽  
Vol 27 (5) ◽  
pp. 354-354
Author(s):  
Bruce W. Hamill ◽  
Robert A. Virzi

This investigation addresses the problem of attention in the processing of symbolic information from visual displays. Its scope includes the nature of attentive processes, the structural properties of stimuli that influence visual information processing mechanisms, and the manner in which these factors interact in perception. Our purpose is to determine the effects of configural feature structure on visual information processing. It is known that for stimuli comprising separable features, one can distinguish between conditions in which only one relevant feature differs among stimuli in the array being searched and conditions in which conjunctions of two (or more) features differ: Since the visual process of conjoining separable features is additive, this fact is reflected in search time as a function of array size, with feature conditions yielding flat curves associated with parallel search (no increase in search time across array sizes) and conjunction conditions yielding linearly increasing curves associated with serial search. We studied configural-feature stimuli within this framework to determine the nature of visual processing for such stimuli as a function of their feature structure. Response times of subjects searching for particular targets among structured arrays of distractors were measured in a speeded visual search task. Two different sets of stimulus materials were studied in array sizes of up to 32 stimuli, using both tachistoscope and microcomputer-based CRT presentation for each. Our results with configural stimuli indicate serial search in all of the conditions, with the slope of the response-time-by-array-size function being steeper for conjunction conditions than for feature conditions. 
However, for each of the two sets of stimuli we studied, there was one configuration that stood apart from the others in its set in that it yielded significantly faster response times, and in that conjunction conditions involving these particular stimuli tended to cluster with the feature conditions rather than with the other conjunction conditions. In addition to these major effects of particular targets, context effects also appeared in our results as effects of the various distractor sets used; certain of these context effects appear to be reversible. The effects of distractor sets on target search were studied in considerable detail. We have found interesting differences in visual processing between stimuli comprising separable features and those comprising configural features. We have also been able to characterize the effects we have found with configural-feature stimuli as being related to the specific feature structure of the target stimulus in the context of the specific feature structure of distractor stimuli. These findings have strong implications for the design of symbology that can enhance visual performance in the use of automated displays.
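The parallel-versus-serial diagnostic described above reduces to the slope of response time as a function of array size: roughly flat for feature (parallel) search, linearly increasing for conjunction (serial) search. A minimal sketch with invented RT values and an arbitrary ms/item threshold:

```python
def fit_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

array_sizes = [4, 8, 16, 32]
feature_rts = [420, 423, 419, 425]       # ms: roughly flat -> parallel search
conjunction_rts = [480, 590, 810, 1250]  # ms: increasing -> serial search

for label, rts in [("feature", feature_rts), ("conjunction", conjunction_rts)]:
    slope = fit_slope(array_sizes, rts)
    mode = "serial" if slope > 5 else "parallel"  # threshold chosen arbitrarily
    print(f"{label}: {slope:.1f} ms/item -> {mode}")
```

The steeper conjunction slope corresponds to the extra per-item cost of serially conjoining features, which is the quantity the study compares across configural and separable stimulus sets.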


1999 ◽  
Vol 11 (3) ◽  
pp. 300-311 ◽  
Author(s):  
Edmund T. Rolls ◽  
Martin J. Tovée ◽  
Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been extensively used psychophysically, there is little direct evidence for the effects of visual masking on neuronal responses. To investigate the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have shown that the response of the neurons is interrupted by the mask. Under conditions when humans can just identify the stimulus, with stimulus onset asynchronies (SOA) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available is greatly decreased as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that allow the neurons to have their main response in 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
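The bit values reported above come from information-theoretic analyses of neuronal responses. As an illustration only (not the authors' estimator, which must correct for limited sampling of real spike trains), mutual information between stimulus and decoded response can be estimated from their joint empirical distribution over trials; all trial data below are invented:

```python
from math import log2
from collections import Counter

def mutual_information(pairs):
    """Mutual information (bits) between stimulus and response labels,
    estimated from the joint empirical distribution of (s, r) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    ps = Counter(s for s, _ in pairs)
    pr = Counter(r for _, r in pairs)
    mi = 0.0
    for (s, r), c in joint.items():
        p_sr = c / n
        # p(s, r) * log2( p(s, r) / (p(s) * p(r)) )
        mi += p_sr * log2(p_sr * n * n / (ps[s] * pr[r]))
    return mi

# Invented (stimulus, decoded-response) trials: imperfect, above-chance coding.
trials = [("A", "A")] * 8 + [("A", "B")] * 2 + [("B", "B")] * 7 + [("B", "A")] * 3
print(f"{mutual_information(trials):.2f} bits")
```

Perfect discrimination of two equiprobable stimuli would yield 1 bit; the noisy decoding here yields a fraction of a bit, on the same scale as the 0.1 to 0.5 bit values discussed in the abstract.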


2014 ◽  
Author(s):  
James Trousdale ◽  
Samuel R. Carroll ◽  
Fabrizio Gabbiani ◽  
Krešimir Josić

Coupling between sensory neurons impacts their tuning properties and correlations in their responses. How such coupling affects sensory representations and ultimately behavior remains unclear. We investigated the role of neuronal coupling during visual processing using a realistic biophysical model of the vertical system (VS) cell network in the blow fly. These neurons are thought to encode the horizontal rotation axis during rapid free flight manoeuvres. Experimental findings suggest neurons of the vertical system are strongly electrically coupled, and that several downstream neurons driving motor responses to ego-rotation receive inputs primarily from a small subset of VS cells. These downstream neurons must decode information about the axis of rotation from a partial readout of the VS population response. To investigate the role of coupling, we simulated the VS response to a variety of rotating visual scenes and computed optimal Bayesian estimates from the relevant subset of VS cells. Our analysis shows that coupling leads to near-optimal estimates from a subpopulation readout. In contrast, coupling between VS cells has no impact on the quality of encoding in the response of the full population. We conclude that coupling at one level of the fly visual system allows for near-optimal decoding from partial information at the subsequent, pre-motor level. Thus, electrical coupling may provide a general mechanism to achieve near-optimal information transfer from neuronal subpopulations across organisms and modalities.
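A toy sketch of decoding a rotation axis from a subset of tuned cells. This uses a simple population-vector readout with invented cosine tuning, standing in for the optimal Bayesian estimator of the study; preferred axes and responses are hypothetical:

```python
from math import cos, sin, atan2, radians, degrees

def readout_axis(responses, preferred_axes_deg):
    """Population-vector estimate of a rotation axis from cosine-tuned cells:
    sum unit vectors at each cell's preferred axis, weighted by its response."""
    x = sum(r * cos(radians(p)) for r, p in zip(responses, preferred_axes_deg))
    y = sum(r * sin(radians(p)) for r, p in zip(responses, preferred_axes_deg))
    return degrees(atan2(y, x)) % 360

# Invented preferred axes for a small subset of VS-like cells and their
# noiseless cosine responses to a true rotation axis at 30 degrees.
prefs = [0, 45, 90, 135]
true_axis = 30
resp = [cos(radians(true_axis - p)) for p in prefs]
print(f"decoded axis: {readout_axis(resp, prefs):.1f} deg")
```

With noiseless cosine tuning the readout recovers the axis exactly; the interesting question in the study is how electrical coupling shapes the noise correlations that make such a partial readout nearly optimal in practice.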


2021 ◽  
Author(s):  
Ning Mei ◽  
Roberto Santana ◽  
David Soto

Abstract Despite advances in the neuroscience of visual consciousness over the last decades, we still lack a framework for understanding the scope of unconscious processing and how it relates to conscious experience. Previous research observed brain signatures of unconscious contents in visual cortex, but these have not been identified in a reliable manner: low trial numbers and signal-detection-theoretic constraints did not allow conscious perception to be decisively ruled out. Critically, the extent to which unconscious content is represented in high-level processing stages along the ventral visual stream and linked prefrontal areas remains unknown. Using a within-subject, high-precision, highly-sampled fMRI approach, we show that unconscious contents, even those associated with null sensitivity, can be reliably decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and also involve prefrontal substrates. Notably, the neural representation in these areas generalised across conscious and unconscious visual processing states, placing constraints on prior findings that fronto-parietal substrates support the representation of conscious contents and suggesting revisions to models of consciousness such as the neuronal global workspace. We then provide a computational model simulation of visual information processing/representation in the absence of perceptual sensitivity, using feedforward convolutional neural networks trained to perform a visual task similar to that of the human observers. The work provides a novel framework for pinpointing the neural representation of unconscious knowledge across different task domains.


F1000Research ◽  
2013 ◽  
Vol 2 ◽  
pp. 58 ◽  
Author(s):  
J Daniel McCarthy ◽  
Colin Kupitz ◽  
Gideon P Caplovitz

Our perception of an object’s size arises from the integration of multiple sources of visual information, including retinal size, perceived distance and its size relative to other objects in the visual field. This constructive process is revealed through a number of classic size illusions such as the Delboeuf Illusion, the Ebbinghaus Illusion and others illustrating size constancy. Here we present a novel variant of the Delboeuf and Ebbinghaus size illusions that we have named the Binding Ring Illusion. The illusion is such that the perceived size of a circular array of elements is underestimated when a circular contour (a binding ring) is superimposed on it, and overestimated when the binding ring slightly exceeds the overall size of the array. Here we characterize the stimulus conditions that lead to the illusion, and the perceptual principles that underlie it. Our findings indicate that the perceived size of an array is susceptible to assimilation of an explicitly defined superimposed contour. Our results also indicate that the assimilation process takes place at a relatively high level in the visual processing stream, after different spatial frequencies have been integrated and global shape has been constructed. We hypothesize that the Binding Ring Illusion arises because the size of an array of elements is not explicitly defined and therefore can be influenced (through a process of assimilation) by the presence of a superimposed object that does have an explicit size.


2020 ◽  
Author(s):  
Han Zhang ◽  
Nicola C Anderson ◽  
Kevin Miller

Recent studies have shown that mind-wandering (MW) is associated with changes in eye movement parameters, but have not explored how MW affects the sequential pattern of eye movements involved in making sense of complex visual information. Eye movements naturally unfold over time, and this process may reveal novel information about cognitive processing during MW. The current study used Recurrence Quantification Analysis (RQA; Anderson, Bischof, Laidlaw, Risko, & Kingstone, 2013) to describe the pattern of refixations (fixations directed to previously inspected regions) during MW. Participants completed a real-world scene encoding task and responded to thought probes assessing intentional and unintentional MW. Both types of MW were associated with worse memory of the scenes. Importantly, RQA showed that scanpaths during unintentional MW were more repetitive than during on-task episodes, as indicated by a higher recurrence rate and more stereotypical fixation sequences. This increased repetitiveness suggests an adaptive response to processing failures through re-examining previous locations. Moreover, this increased repetitiveness contributed to fixations concentrating on a smaller spatial scale of the stimuli. Finally, we were also able to validate several traditional measures: both intentional and unintentional MW were associated with fewer and longer fixations; eye blinking increased numerically during both types of MW, but the difference was significant only for unintentional MW. Overall, the results advance our understanding of how visual processing is affected during MW by highlighting the sequential aspect of eye movements.
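The recurrence rate used in Recurrence Quantification Analysis counts pairs of fixations that land within some radius of each other, i.e. refixations of previously inspected regions. A minimal sketch with invented fixation coordinates and radius:

```python
from math import hypot

def recurrence_rate(fixations, radius):
    """Percentage of fixation pairs (i < j) that fall closer than `radius`,
    a simple scanpath-repetitiveness measure."""
    n = len(fixations)
    rec = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if hypot(fixations[i][0] - fixations[j][0],
                 fixations[i][1] - fixations[j][1]) < radius
    )
    # Conventionally reported as a percentage of all fixation pairs.
    return 100.0 * 2 * rec / (n * (n - 1))

# Invented (x, y) fixation coordinates in pixels; the last two fixations
# return to the neighbourhoods of the first two.
scanpath = [(100, 100), (300, 120), (520, 400), (105, 98), (298, 125)]
print(f"recurrence rate: {recurrence_rate(scanpath, radius=50):.1f}%")
```

A more repetitive scanpath (more returns to earlier locations) raises this percentage, which is the direction of the effect the study reports for unintentional mind-wandering.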


Author(s):  
Mohammad S.E Sendi ◽  
Godfrey D Pearlson ◽  
Daniel H Mathalon ◽  
Judith M Ford ◽  
Adrian Preda ◽  
...  

Although visual processing impairments have been explored in schizophrenia (SZ), their underlying neurobiology has not been widely studied. Also, while some research has hinted at differences in information transfer and flow in SZ, there are few investigations of the dynamics of functional connectivity within visual networks. In this study, we analyzed resting-state fMRI data of the visual sensory network (VSN) in 160 healthy control (HC) subjects and 151 SZ subjects. We estimated 9 independent components within the VSN. Then, we calculated the dynamic functional network connectivity (dFNC) using the Pearson correlation. Next, using k-means clustering, we partitioned the dFNCs into five distinct states, and we calculated the portion of time each subject spent in each state, which we termed the occupancy rate (OCR). Using OCR, we compared HC with SZ subjects and investigated the link between OCR and visual learning in SZ subjects. In addition, we compared the VSN functional connectivity of SZ and HC subjects in each state. We found that this network is indeed highly dynamic. Each state represents a unique pattern of fluctuations in VSN FNC, and all states showed significant disruption in SZ. Overall, HC showed stronger connectivity within the VSN across states. SZ subjects spent more time in a state in which the connectivity between the middle temporal gyrus and other regions of the VSN is highly negative. Moreover, the OCR of a state with strong positive connectivity between the middle temporal gyrus and other regions correlated significantly with visual learning scores in SZ.
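The occupancy rate (OCR) described above is simply the fraction of sliding windows a subject's dFNC spends in each k-means state. A minimal sketch, assuming the per-window state labels have already been produced by k-means clustering; the labels below are invented:

```python
from collections import Counter

def occupancy_rates(state_sequence, n_states):
    """Fraction of time windows spent in each dFNC state (the OCR)."""
    counts = Counter(state_sequence)
    total = len(state_sequence)
    return [counts.get(k, 0) / total for k in range(n_states)]

# Invented per-window state labels (output of k-means on windowed dFNC
# matrices) for one subject across 10 sliding windows, with 5 states as
# in the study.
labels = [0, 0, 2, 2, 2, 2, 4, 4, 4, 4]
ocr = occupancy_rates(labels, n_states=5)
print(ocr)  # one proportion per state, summing to 1
```

Group comparisons like those in the abstract then reduce to testing whether these per-state proportions differ between HC and SZ subjects, or correlate with behavioural scores such as visual learning.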


Author(s):  
Angie M. Michaiel ◽  
Elliott T.T. Abe ◽  
Cristopher M. Niell

Abstract Many studies of visual processing are conducted in unnatural conditions, such as head- and gaze-fixation. As this radically limits natural exploration of the visual environment, much less is known about how animals actively use their sensory systems to acquire visual information in natural, goal-directed contexts. Recently, prey capture has emerged as an ethologically relevant behavior that mice perform without training, and that engages vision for accurate orienting and pursuit. However, it is unclear how mice target their gaze during such natural behaviors, particularly since, in contrast to many predatory species, mice have a narrow binocular field and lack foveate vision that would entail fixing their gaze on a specific point in the visual field. Here we measured head and bilateral eye movements in freely moving mice performing prey capture. We find that the majority of eye movements are compensatory for head movements, thereby acting to stabilize the visual scene. During head turns, however, these periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Analysis of eye movements relative to the cricket position shows that the saccades do not preferentially select a specific point in the visual scene. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings help relate eye movements in the mouse to other species, and provide a foundation for studying active vision during ethological behaviors in the mouse.

