Modeling Binocular and Motion Transparency Processing by Local Center-Surround Interactions

Author(s):  
Florian Raudies ◽  
Heiko Neumann

Binocular transparency is perceived when two surfaces are seen at the same spatial location but at different depths. Similarly, motion transparency occurs when two surfaces move differently over the same spatial location. Most models of motion or stereo processing incorporate uniqueness assumptions to resolve ambiguities in disparity or motion estimates and thus cannot represent multiple features at the same spatial location. Unlike these previous models, the authors of this chapter suggest a model with local center-surround interactions that operates upon analogs of cell populations in the velocity or disparity domain of the ventral second visual area (V2) and the dorsal medial middle temporal area (MT) in primates, respectively. These modeled cell populations can encode motion and binocular transparency. Model simulations demonstrate the successful processing of scenes containing both opaque and transparent materials, which has not previously been reported. The results suggest that motion and stereo processing both employ local center-surround interactions to resolve noisy and ambiguous disparity or motion input arising from initial correlations.
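
As a rough illustration of the mechanism described in this abstract, the sketch below applies a difference-of-Gaussians center-surround interaction to a one-dimensional population code over disparity. The kernel widths, gains, and stimulus values are illustrative assumptions, not the authors' published parameters.

```python
# Minimal sketch of a local center-surround interaction over a disparity
# population code; widths, gains, and stimulus values are illustrative
# assumptions, not the published model parameters.
import numpy as np
from scipy.signal import find_peaks

disparities = np.linspace(-2.0, 2.0, 81)   # preferred disparities of the population

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Response to a transparent stimulus: two surfaces at different depths produce
# two activity peaks, plus broadband noise from the initial correlations.
rng = np.random.default_rng(0)
response = gaussian(disparities, -0.8, 0.2) + gaussian(disparities, 0.7, 0.2)
response += 0.1 * rng.random(disparities.size)

# Center-surround kernel: narrow excitatory center, broad inhibitory surround.
# Similar disparities support each other while dissimilar ones compete, so
# both peaks can survive instead of a single winner-take-all response.
kernel = gaussian(disparities, 0.0, 0.15) - 0.4 * gaussian(disparities, 0.0, 0.8)
interaction = np.convolve(response, kernel, mode="same")
sharpened = np.maximum(response + interaction, 0.0)    # half-wave rectification

peaks, _ = find_peaks(sharpened, height=0.5 * sharpened.max())
print(disparities[peaks])    # expected: two surviving peaks, near -0.8 and 0.7
```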

2020 ◽  
Vol 8 (Suppl 3) ◽  
pp. A52-A52
Author(s):  
Elen Torres ◽  
Stefani Spranger

Background: Understanding the interactions between tumor and immune cells is critical for improving current immunotherapies. Pre-clinical and clinical evidence has shown that failed T cell infiltration into lung cancer lesions might be associated with low responsiveness towards checkpoint blockade [1]. For this reason, it is necessary to characterize not only the phenotype of T cells in tumor-bearing lungs but also their spatial location in the tumor microenvironment (TME). Multiplex immunofluorescence staining allows the simultaneous use of several cell markers to study the state and the spatial location of cell populations in the tissue of interest. Although this technique is usually applied to thin tissue sections (5 to 12 µm), the analysis of large tissue volumes may provide a better understanding of the spatial distribution of cells in relation to the TME. Here, we analyzed the number and spatial distribution of cytotoxic T cells and other immune cells in the TME of tumor-bearing lungs, using both 12 µm sections and whole-mount preparations imaged by confocal microscopy.

Methods: Lung tumors were induced in C57BL/6 mice by tail vein injection of a cancer cell line derived from KrasG12D/+ and Tp53-/- mice. Lung tissue with a diverse degree of T cell infiltration was collected 21 days after tumor induction. Tissue was fixed in 4% PFA and then snap-frozen for sectioning. Whole-mount preparations were processed according to Li et al. (2019) [2] for tissue clearing and multiplex volume imaging. T cells were labeled with CD8 and FOXP3 antibodies to identify cytotoxic and regulatory T cells, respectively. Tumor cells were labeled with a pan-Keratin antibody. Images were acquired using a Leica SP8 confocal microscope. FIJI [3] and IMARIS were used for image processing.

Results: We identified both cytotoxic and regulatory T cell populations in the TME using thin sections and whole-mount preparations. However, using whole-mount preparations after tissue clearing allowed us to better evaluate the spatial distribution of the T cell populations in relation to the tumor structure. Furthermore, tissue clearing facilitates the imaging of larger volumes with multiplex immunofluorescence.

Conclusions: Analysis of large lung tissue volumes provides a better understanding of the location of immune cell populations in relation to the TME and allows heterogeneous immune infiltration to be studied on a per-lesion basis. This valuable information will improve the characterization of the TME and the definition of cancer-immune phenotypes in NSCLC.

References
1. Teng MW, et al. Classifying cancers based on T-cell infiltration and PD-L1. Cancer Res 2015;75(11):2139–45.
2. Li W, Germain RN, Gerner MY. High-dimensional cell-level analysis of tissues with Ce3D multiplex volume imaging. Nat Protoc 2019;14(6):1708–1733.
3. Schindelin J, et al. Fiji: an open-source platform for biological-image analysis. Nat Methods 2012;9(7):676–82.


2011 ◽  
Vol 23 (11) ◽  
pp. 2868-2914 ◽  
Author(s):  
Florian Raudies ◽  
Ennio Mingolla ◽  
Heiko Neumann

Motion transparency occurs when multiple coherent motions are perceived at one spatial location. Imagine, for instance, looking out of the window of a bus on a bright day, where the world outside the window is passing by and the movements of passengers inside the bus are reflected in the window. The overlay of both motions at the window leads to motion transparency, which is challenging to process. Noisy and ambiguous motion signals can be reduced using a competition mechanism among all encoded motions at one spatial location. Such competition, however, leads to the suppression of multiple peak responses that encode different motions, as only the strongest response tends to survive. As a solution, we suggest a local center-surround competition for population-encoded motion directions and speeds. Similar motions are supported, while dissimilar ones are separated and represented as multiple activations, as occurs in the case of motion transparency. Psychophysical findings, such as motion attraction and repulsion in motion transparency displays, can be explained by this local competition. Beyond this local competition mechanism, we show that feedback signals improve the processing of motion transparency. A discrimination task for transparent versus opaque motion is simulated, in which motion transparency is generated by superimposing large-field motion patterns of either varying size or varying motion coherence. The model’s perceptual thresholds with and without feedback are calculated. We demonstrate that initially weak peak responses can be enhanced and stabilized through modulatory feedback signals from higher stages of processing.
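
The following sketch illustrates the modulatory feedback idea described above: a weak population response is multiplicatively enhanced by a feedback prediction of the same motion direction. The tuning functions, feedback gain, and noise level are assumptions made for the example, not the model's published settings.

```python
# Sketch of modulatory feedback enhancing a weak, population-encoded motion
# signal; tuning widths, gain, and noise level are illustrative assumptions.
import numpy as np

directions = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

def von_mises(theta, mu, kappa):
    # Direction tuning curve, normalized to a maximum of 1 at theta == mu.
    return np.exp(kappa * (np.cos(theta - mu) - 1.0))

# Weak feedforward peak (e.g. one low-coherence transparent component) in noise.
rng = np.random.default_rng(1)
feedforward = 0.3 * von_mises(directions, np.pi / 2, 8.0) + 0.2 * rng.random(directions.size)

# Feedback from a higher stage predicts the same motion direction. Modulatory
# feedback multiplies rather than adds, so it can only enhance activity that
# is already present in the feedforward signal.
feedback = von_mises(directions, np.pi / 2, 4.0)
gain = 2.0                                        # assumed feedback gain
enhanced = feedforward * (1.0 + gain * feedback)

print(feedforward.max(), enhanced.max())          # the weak peak is amplified
```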


2017 ◽  
Vol 118 (3) ◽  
pp. 1903-1913 ◽  
Author(s):  
Amy M. Ni ◽  
John H. R. Maunsell

Spatial attention improves perception of attended parts of a scene, a behavioral enhancement accompanied by modulations of neuronal firing rates. These modulations vary in size across neurons in the same brain area. Models of normalization explain much of this variance in attention modulation with differences in tuned normalization across neurons (Lee J, Maunsell JHR. PLoS One 4: e4651, 2009; Ni AM, Ray S, Maunsell JHR. Neuron 73: 803–813, 2012). However, recent studies suggest that normalization tuning varies with spatial location both across and within neurons (Ruff DA, Alberts JJ, Cohen MR. J Neurophysiol 116: 1375–1386, 2016; Verhoef BE, Maunsell JHR. eLife 5: e17256, 2016). Here we show directly that attention modulation and normalization tuning do in fact covary within individual neurons, in addition to across neurons as previously demonstrated. We recorded the activity of isolated neurons in the middle temporal area of two rhesus monkeys as they performed a change-detection task that controlled the focus of spatial attention. Using the same two drifting Gabor stimuli and the same two receptive field locations for each neuron, we found that switching which stimulus was presented at which location affected both attention modulation and normalization in a correlated way within neurons. We present an equal-maximum-suppression spatially tuned normalization model that explains this covariance both across and within neurons: each stimulus generates equally strong suppression of its own excitatory drive, but its suppression of distant stimuli is typically less. This new model specifies how the tuned normalization associated with each stimulus location varies across space both within and across neurons, changing our understanding of the normalization mechanism and how attention modulations depend on this mechanism. NEW & NOTEWORTHY Tuned normalization studies have demonstrated that the variance in attention modulation size seen across neurons from the same cortical area can be largely explained by between-neuron differences in normalization strength. Here we demonstrate that attention modulation size varies within neurons as well and that this variance is largely explained by within-neuron differences in normalization strength. We provide a new spatially tuned normalization model that explains this broad range of observed normalization and attention effects.
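
The snippet below is a generic, hedged sketch of a tuned normalization model in the spirit of the studies cited above; it is not the authors' equal-maximum-suppression formulation, and the parameter names (alpha, sigma, attn1, attn2) are illustrative.

```python
# Generic tuned-normalization sketch (not the authors' exact equal-maximum-
# suppression model); L1/L2, attn, alpha, and sigma are illustrative names.
def normalized_response(L1, L2, attn1=1.0, attn2=1.0, alpha=0.5, sigma=0.1):
    """Response of a neuron to two stimuli under divisive normalization.

    L1, L2   : excitatory drives from the two receptive-field locations
    attn1/2  : attentional gain applied to each stimulus
    alpha    : tuned normalization -- how strongly stimulus 2 contributes
               to the suppressive drive relative to stimulus 1
    sigma    : semi-saturation constant
    """
    excitation = attn1 * L1 + attn2 * L2
    suppression = attn1 * L1 + attn2 * alpha * L2 + sigma
    return excitation / suppression

# Swapping which stimulus sits at which location (i.e. which drive alpha acts
# on) changes the normalization and the attention modulation together.
print(normalized_response(1.0, 0.5, attn1=2.0))   # attend the stronger stimulus
print(normalized_response(1.0, 0.5, attn2=2.0))   # attend the weaker stimulus
```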


2021 ◽  
Vol 12 ◽  
Author(s):  
Rony Cohen ◽  
Jacob Genizi ◽  
Liora Korenrich

Objective: Tuberous sclerosis complex (TSC) is a multisystem neurocutaneous genetic disorder. Its clinical manifestations are extensive and include neurological, dermatological, cardiac, ophthalmic, nephrological, and neuropsychiatric manifestations. The prediction and pathophysiology of neuropsychiatric disorders such as emotional symptoms, conduct problems, hyperactivity, and poor social behavior are poorly understood. The aim of the study was to diagnose neuropsychiatric symptoms in individuals with TSC and to examine their possible correlations with the quantity, magnitude, and spatial location of tubers and radial migration (RM) lines.

Methods: The cohort comprised 16 individuals with TSC, aged 5–29 years, with normal or low-normal intelligence. The participants or their parents were asked to complete the Strengths and Difficulties Questionnaire (SDQ) and the TAND (TSC-associated neuropsychiatric disorders) Checklist for assessment of their neuropsychiatric symptoms. Correlations were examined between these symptoms and the magnitude, quantity, and location of tubers and white-matter RM lines, as identified on T2/FLAIR brain MRI scans.

Results: The SDQ score for peer relationship problems correlated with tuber load (r = 0.52, p < 0.05). Tuber load and learning difficulties correlated significantly in the temporal and parietal areas. Mood swings correlated with tubers in the parietal area (r = 0.529, p < 0.05). RM lines in the temporal area correlated with an abnormal total SDQ score (r = 0.51, p < 0.05). Anxiety and extreme shyness correlated with RM lines in the parietal area (r = 0.513, p < 0.05 and r = 0.593, p < 0.05, respectively). Hyperactivity/inattention correlated negatively with RM lines in the parietal area (r = −0.707, p < 0.01).

Conclusions: These observations may lead to future studies on the precise localization of neuropsychiatric symptoms, thereby facilitating directed therapy.
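
To make the kind of statistic reported above concrete, the short sketch below computes a Pearson correlation between a symptom score and a tuber count; the numbers are hypothetical and do not reproduce the study data.

```python
# Hypothetical illustration of the reported correlation analysis; these
# values are made up and do not reproduce the study data.
from scipy.stats import pearsonr

tuber_load = [3, 7, 2, 11, 5, 9, 4, 8]            # hypothetical tuber counts
peer_problem_score = [1, 4, 2, 6, 3, 5, 2, 4]     # hypothetical SDQ subscores

r, p = pearsonr(tuber_load, peer_problem_score)
print(f"r = {r:.2f}, p = {p:.3f}")
```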


2020 ◽  
Vol 24 (3) ◽  
pp. 1227-1249 ◽  
Author(s):  
Moshe Armon ◽  
Francesco Marra ◽  
Yehouda Enzel ◽  
Dorita Rostkier-Edelstein ◽  
Efrat Morin

Abstract. Heavy precipitation events (HPEs) can lead to natural hazards (e.g. floods and debris flows) and contribute to water resources. Spatiotemporal rainfall patterns govern the hydrological, geomorphological, and societal effects of HPEs. Thus, a correct characterisation and prediction of rainfall patterns is crucial for coping with these events. Information from rain gauges is generally limited due to the sparseness of the networks, especially in the presence of sharp climatic gradients. Forecasting HPEs depends on the ability of weather models to generate credible rainfall patterns. This paper characterises rainfall patterns during HPEs based on high-resolution weather radar data and evaluates the performance of a high-resolution, convection-permitting Weather Research and Forecasting (WRF) model in simulating these patterns. We identified 41 HPEs in the eastern Mediterranean from a 24-year radar record using local thresholds based on quantiles for different durations, classified these events into two synoptic systems, and ran model simulations for them. For most durations, HPEs near the coastline were characterised by the highest rain intensities; however, for short durations, the highest rain intensities were found for the inland desert. During the rainy season, the rain field's centre of mass progresses from the sea inland. Rainfall during HPEs is highly localised in both space (less than a 10 km decorrelation distance) and time (less than 5 min). WRF model simulations were accurate in generating the structure and location of the rain fields in 39 out of 41 HPEs. However, they showed a positive bias relative to the radar estimates and exhibited errors in the spatial location of the heaviest precipitation. Our results indicate that convection-permitting model outputs can provide reliable climatological analyses of heavy precipitation patterns; conversely, flood forecasting requires the use of ensemble simulations to overcome the spatial location errors.
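
As a rough illustration of the event-selection step, the sketch below flags exceedances of local, duration-dependent quantile thresholds in a synthetic rain-rate series; the quantile level, durations, and data are assumptions and do not reflect the authors' processing chain.

```python
# Illustrative event screening with local, duration-dependent quantile
# thresholds; the synthetic record, durations, and quantile are assumptions.
import numpy as np

rng = np.random.default_rng(42)
rain = rng.gamma(shape=0.2, scale=5.0, size=24 * 365 * 5)   # hourly rain rate (mm/h), one pixel

durations_h = [1, 6, 24]      # accumulation durations to screen
quantile = 0.999              # local exceedance threshold

for d in durations_h:
    # Rolling accumulation over the duration of interest.
    accum = np.convolve(rain, np.ones(d), mode="valid")
    threshold = np.quantile(accum, quantile)                # per-pixel ("local") threshold
    exceedances = np.flatnonzero(accum > threshold)
    print(f"{d:>2} h: threshold {threshold:.1f} mm, {exceedances.size} exceedances")
```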


1997 ◽  
Vol 77 (4) ◽  
pp. 1906-1923 ◽  
Author(s):  
Karl R. Gegenfurtner ◽  
Daniel C. Kiper ◽  
Jonathan B. Levitt

Gegenfurtner, Karl R., Daniel C. Kiper, and Jonathan B. Levitt. Functional properties of neurons in macaque area V3. J. Neurophysiol. 77: 1906–1923, 1997. We investigated the functional properties of neurons in extrastriate area V3. V3 receives inputs from both magno- and parvocellular pathways and has prominent projections to both the middle temporal area (area MT) and V4. It may therefore represent an important site for integration and transformation of visual signals. We recorded the activity of single units representing the central 10° in anesthetized, paralyzed macaque monkeys. We measured each cell's spatial, temporal, chromatic, and motion properties with the use of a variety of stimuli. Results were compared with measurements made in V2 neurons at similar eccentricities. Similar to area V2, most of the neurons in our sample (80%) were orientation selective, and the distribution of orientation bandwidths was similar to that found in V2. Neurons in V3 preferred lower spatial and higher temporal frequencies than V2 neurons. Contrast thresholds of V3 neurons were extremely low. Achromatic contrast sensitivity was much higher than in V2, and similar to that found in MT. About 40% of all neurons showed strong directional selectivity. We did not find strongly directional cells in layer 4 of V3, the layer in which the bulk of V1 and V2 inputs terminate. This property seems to be developed within area V3. An analysis of the responses of directionally selective cells to plaid patterns showed that in area V3, as in MT and unlike in V1 and V2, there exist cells sensitive to the motion of the plaid pattern rather than to that of the components. The exact proportion of cells classified as being selective to color depended to a large degree on the experiment and on the criteria used for classification. With the use of the same conditions as in a previous study of V2 cells, we found as many (54%) color-selective cells as in V2 (50%). Furthermore, the responses of V3 cells to colored sinusoidal gratings were well described by a linear combination of cone inputs. The two subpopulations of cells responsive to color and to motion overlapped to a large extent, and we found a significant proportion of cells that gave reliable and directional responses to drifting isoluminant gratings. Our results show that there is a significant interaction between color and motion processing in area V3, and that V3 cells exhibit the more complex motion properties typically observed at later stages of visual processing.
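
The pattern-versus-component classification mentioned above is commonly performed with a partial-correlation analysis; the sketch below shows that generic procedure (the paper's exact criteria may differ, and the function names are illustrative).

```python
# Generic pattern/component partial-correlation classification of plaid
# responses; the paper's exact criteria may differ, and these names are ours.
import numpy as np

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation between x and y with the influence of z removed."""
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

def classify_plaid_cell(actual, pattern_pred, component_pred):
    """Compare a measured plaid tuning curve with pattern and component predictions."""
    r_p = np.corrcoef(actual, pattern_pred)[0, 1]
    r_c = np.corrcoef(actual, component_pred)[0, 1]
    r_pc = np.corrcoef(pattern_pred, component_pred)[0, 1]
    R_p = partial_corr(r_p, r_c, r_pc)     # pattern partial correlation
    R_c = partial_corr(r_c, r_p, r_pc)     # component partial correlation
    return "pattern" if R_p > R_c else "component"
```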


2020 ◽  
Vol 11 (11) ◽  
pp. 2529-2540 ◽  
Author(s):  
Weili Ding ◽  
Bo Hu ◽  
Han Liu ◽  
Xinming Wang ◽  
Xiangsheng Huang

Abstract The use of skeleton data for human posture recognition is a key research topic in the human-computer interaction field. To improve the accuracy of human posture recognition, a new algorithm based on multiple features and rule learning is proposed in this paper. Firstly, a 219-dimensional vector that includes angle features and distance features is defined. Specifically, the angle and distance features are defined in terms of the local relationships between joints and the global spatial locations of joints. Then, during human posture classification, the rule learning method is used together with Bagging and random subspace methods, which create different sample and feature subsets to improve the classification performance of the sub-classifiers. Finally, the performance of our proposed algorithm is evaluated on four human posture datasets. The experimental results show that our algorithm can recognize many kinds of human postures effectively, and that the results obtained with the rule-based learning method are more interpretable than those obtained with traditional machine learning methods and CNNs.
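
A minimal sketch of the two feature types described above follows: joint angles capture local relationships between joints, and pairwise distances capture global spatial layout. The joint names, angle triples, and resulting dimensionality are assumptions for the example, not the paper's exact 219-dimensional definition.

```python
# Illustrative skeleton features: joint angles (local) + pairwise distances
# (global). Joint names and counts are assumptions, not the paper's 219 dims.
import numpy as np
from itertools import combinations

def angle(a, b, c):
    """Angle at joint b formed by segments b->a and b->c, in radians."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def posture_features(joints):
    """joints: dict mapping joint name -> 3D position (np.ndarray)."""
    feats = []
    # Local angle features at a few assumed articulation points.
    triples = [("shoulder_r", "elbow_r", "wrist_r"),
               ("hip_r", "knee_r", "ankle_r")]
    for a, b, c in triples:
        feats.append(angle(joints[a], joints[b], joints[c]))
    # Global distance features: pairwise distances between all joints.
    names = sorted(joints)
    for n1, n2 in combinations(names, 2):
        feats.append(np.linalg.norm(joints[n1] - joints[n2]))
    return np.asarray(feats)

rng = np.random.default_rng(0)
skeleton = {name: rng.random(3) for name in
            ["shoulder_r", "elbow_r", "wrist_r", "hip_r", "knee_r", "ankle_r"]}
print(posture_features(skeleton).shape)   # 2 angles + 15 pairwise distances = (17,)
```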


2020 ◽  
Author(s):  
Efrat Morin ◽  
Moshe Armon ◽  
Francesco Marra ◽  
Yehouda Enzel ◽  
Dorita Rostkier-Edelstein

Heavy precipitation events (HPEs) can lead to natural hazards (floods, debris flows) and contribute to water resources. Spatiotemporal rainfall patterns govern the hydrological, geomorphological, and societal effects of HPEs. Thus, a correct characterization and prediction of rainfall patterns is crucial for coping with these events. However, information from rain gauges suitable for these goals is generally limited due to the sparseness of the networks, especially in the presence of sharp climatic gradients and small precipitating systems. Forecasting HPEs depends on the ability of weather models to generate credible rainfall patterns. In this study we characterize rainfall patterns during HPEs based on high-resolution weather radar data and evaluate the performance of a high-resolution (1 km²), convection-permitting Weather Research and Forecasting (WRF) model in simulating these patterns. We identified 41 HPEs in the eastern Mediterranean from a 24-year-long radar record using local thresholds based on quantiles for different durations, classified these events into two synoptic systems, and ran model simulations for them. For most durations, HPEs near the coastline were characterized by the highest rain intensities; however, for short storm durations, the highest rain intensities were found for the inland desert. During the rainy season, the center of mass of the rain field progresses from the sea inland. Rainfall during HPEs is highly localized in both space (<10 km decorrelation distance) and time (<5 min). WRF model simulations accurately generated the structure and location of the rain fields in 39 out of 41 HPEs. However, they showed a positive bias relative to the radar estimates and exhibited errors in the spatial location of the heaviest precipitation. Our results indicate that convection-permitting model outputs can provide reliable climatological analyses of heavy precipitation patterns; conversely, flood forecasting requires the use of ensemble simulations to overcome the spatial location errors.


2016 ◽  
Vol 113 (22) ◽  
pp. E3140-E3149 ◽  
Author(s):  
Corey M. Ziemba ◽  
Jeremy Freeman ◽  
J. Anthony Movshon ◽  
Eero P. Simoncelli

As information propagates along the ventral visual hierarchy, neuronal responses become both more specific for particular image features and more tolerant of image transformations that preserve those features. Here, we present evidence that neurons in area V2 are selective for local statistics that occur in natural visual textures, and tolerant of manipulations that preserve these statistics. Texture stimuli were generated by sampling from a statistical model, with parameters chosen to match the parameters of a set of visually distinct natural texture images. Stimuli generated with the same statistics are perceptually similar to each other despite differences, arising from the sampling process, in the precise spatial location of features. We assessed the accuracy with which these textures could be classified based on the responses of V1 and V2 neurons recorded individually in anesthetized macaque monkeys. We also assessed the accuracy with which particular samples could be identified, relative to other statistically matched samples. For populations of up to 100 cells, V1 neurons supported better performance in the sample identification task, whereas V2 neurons exhibited better performance in texture classification. Relative to V1, the responses of V2 show greater selectivity and tolerance for the representation of texture statistics.
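
The population-decoding idea described above can be sketched with a simple linear classifier applied to simulated responses; the response statistics, neuron count, and decoder choice below are assumptions for illustration, not the recorded data or the authors' analysis.

```python
# Hedged sketch of decoding texture family from simulated population
# responses; synthetic stand-in data, not recordings or the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_neurons, n_families, n_samples = 100, 5, 40

# Each texture family drives a distinct mean response pattern across the
# population; individual samples add trial-to-trial variability on top.
family_means = rng.normal(size=(n_families, n_neurons))
X = np.vstack([mean + 0.8 * rng.normal(size=(n_samples, n_neurons))
               for mean in family_means])
y = np.repeat(np.arange(n_families), n_samples)

decoder = LogisticRegression(max_iter=1000)
print(cross_val_score(decoder, X, y, cv=5).mean())   # texture-family decoding accuracy
```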


2008 ◽  
Vol 107 (2) ◽  
pp. 323-335 ◽  
Author(s):  
Franco Bertossa ◽  
Marco Besa ◽  
Roberto Ferrari ◽  
Francesca Ferri

Does consciousness have a spatial “location” that can be scientifically investigated? Using a novel phenomenological method, when people are encouraged to explore the question introspectively, they not only can make sense of the idea of their consciousness being “located,” but will readily indicate its exact position inside the head. The method, based on Francisco J. Varela's work, involves a structured interview led by an expert mediator in which preliminary questions are asked of untrained volunteers about the location of objects and body parts, and they are then questioned about the location from which they are experiencing these objects. 83% of volunteers confidently located a precise position for the I-that-perceives in the temporal area of the head, centred midway behind the eyes. The same results were obtained with blind subjects (congenitally blind or blinded later in life) and with non-Westerners. The significance of this subjective source of the experience of the location of perception is discussed, linking it to neurological correlates of self-referred conscious activities and of conscious awareness in memory. Further investigations are suggested with trained volunteers and with individuals with psychiatric disorders.

