Intrinsic spatial knowledge about terrestrial ecology favors the tall for judging distance

2016 ◽  
Vol 2 (8) ◽  
pp. e1501070 ◽  
Author(s):  
Liu Zhou ◽  
Teng Leng Ooi ◽  
Zijiang J. He

Our sense of vision reliably directs and guides our everyday actions, such as reaching and walking. This ability is especially fascinating because the optical images of natural scenes that project into our eyes are insufficient on their own to form an adequate perceptual space. It has been proposed that the brain makes up for this inadequacy by using its intrinsic spatial knowledge. However, it is unclear what constitutes intrinsic spatial knowledge and how it is acquired. We investigated this question and found evidence of an ecological basis, which uses the statistical spatial relationship between the observer and the terrestrial environment, namely, the ground surface. We found that in dark and reduced-cue environments, where intrinsic knowledge makes a greater contribution, perceived target location is more accurate when referenced to the ground than to the ceiling. Furthermore, taller observers localized the target more accurately. Their superior performance was also observed in the full-cue environment, even when we compensated for the observers’ heights by having the taller observers sit on a chair and the shorter observers stand on a box. This finding dovetails with the prediction of the ecological hypothesis for intrinsic spatial knowledge. It suggests that an individual’s accumulated lifetime experience of being tall, and his or her constant interactions with ground-based objects, not only shapes intrinsic spatial knowledge but also confers an advantage in spatial ability in the intermediate distance range.

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Chih-Wei Lin ◽  
Yu Hong ◽  
Jinfu Liu

Abstract Background Glioma is a malignant brain tumor whose location is complex and which is difficult to remove surgically. Using medical images, doctors can precisely diagnose and localize the disease. However, computer-assisted diagnosis of brain tumors remains an open problem, because rough segmentation of the tumor makes grading of its internal structure inaccurate. Methods In this paper, we propose an Aggregation-and-Attention Network for brain tumor segmentation. The proposed network takes U-Net as the backbone, aggregates multi-scale semantic information, and focuses on crucial information to perform brain tumor segmentation. To this end, we propose an enhanced down-sampling module and up-sampling layer to compensate for information loss, and a multi-scale connection module to construct multi-receptive-field semantic fusion between the encoder and decoder. Furthermore, we design a dual-attention fusion module that can extract and enhance the spatial relationships in magnetic resonance images, and we apply deep supervision in different parts of the proposed network. Results Experimental results show that the proposed framework performs best on the BraTS2020 dataset compared with state-of-the-art networks. It surpasses all the comparison networks, with average scores of 0.860, 0.885, 0.932, and 1.2325 on the four evaluation indexes. Conclusions The proposed framework and its modules are sound and practical: they extract and aggregate useful semantic information and enhance the ability to segment gliomas.
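The dual-attention idea can be illustrated with a toy sketch. This is not the paper's module; it is a minimal numpy version of the common position-attention plus channel-attention pattern, with all shapes and names chosen purely for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_attention(feat):
    """Toy spatial- and channel-attention fusion on a (C, H, W) feature map.

    The spatial branch reweights positions by position-position affinity;
    the channel branch reweights feature maps by channel-channel affinity;
    the two refined maps are summed, as in common dual-attention designs.
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, H * W)
    # spatial attention: (HW, HW) affinity between positions
    spatial = softmax(flat.T @ flat, axis=-1)
    out_s = (flat @ spatial).reshape(C, H, W)
    # channel attention: (C, C) affinity between channels
    channel = softmax(flat @ flat.T, axis=-1)
    out_c = (channel @ flat).reshape(C, H, W)
    return out_s + out_c
```

A real implementation would add learned query/key/value projections and train end-to-end; the sketch only shows the affinity-then-reweight structure.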


2021 ◽  
Author(s):  
Mo Shahdloo ◽  
Emin Çelik ◽  
Burcu A. Urgen ◽  
Jack L. Gallant ◽  
Tolga Çukur

Object and action perception in cluttered dynamic natural scenes relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. It has been suggested that during visual search for objects, distributed semantic representation of hundreds of object categories is warped to expand the representation of targets. Yet, little is known about whether and where in the brain visual search for action categories modulates semantic representations. To address this fundamental question, we studied human brain activity recorded via functional magnetic resonance imaging while subjects viewed natural movies and searched for either communication or locomotion actions. We find that attention directed to action categories elicits tuning shifts that warp semantic representations broadly across neocortex, and that these shifts interact with intrinsic selectivity of cortical voxels for target actions. These results suggest that attention serves to facilitate task performance during social interactions by dynamically shifting semantic selectivity towards target actions, and that tuning shifts are a general feature of conceptual representations in the brain.


2019 ◽  
Author(s):  
Dirk van Moorselaar ◽  
Heleen A. Slagter

Abstract It is well known that attention can facilitate performance by top-down biasing of task-relevant information processing in advance. Recent findings from behavioral studies suggest that distractor inhibition is not under similar direct control, but strongly dependent on expectations derived from previous experience. Yet, how expectations about distracting information influence distractor inhibition at the neural level remains unclear. The current study addressed this outstanding question in three experiments in which search displays with repeating distractor or target locations across trials allowed observers to learn which location to selectively suppress or boost. Behavioral findings demonstrated that both distractor and target location learning resulted in more efficient search, as indexed by faster response times. Crucially, the benefits of distractor learning were observed without target location foreknowledge, were unaffected by the number of possible target locations, and could not be explained by priming alone. To determine how distractor location expectations facilitated performance, we applied a spatial encoding model to EEG data to reconstruct activity in neural populations tuned to the distractor or target location. Target location learning increased neural tuning to the target location in advance, indicative of preparatory biasing; this sensitivity increased further after target presentation. By contrast, distractor expectations did not change preparatory spatial tuning. Instead, they reduced distractor-specific processing, as reflected in the disappearance of the Pd ERP component, a neural marker of distractor inhibition, and in decreased decoding accuracy. These findings suggest that the brain may no longer process expected distractors as distractors once it has learned they can safely be ignored.

Significance statement We constantly try hard to ignore conspicuous events that distract us from our current goals. Surprisingly, and in contrast to dominant attention theories, ignoring distracting but irrelevant events does not seem to be as flexible as focusing our attention on those same aspects. Instead, distractor suppression appears to rely strongly on learned, context-dependent expectations. Here, we investigated how learning about upcoming distractors changes distractor processing, and we directly contrasted the underlying neural dynamics with those of target learning. We show that while target learning enhanced anticipatory sensory tuning, distractor learning only modulated reactive suppressive processing. These results suggest that expected distractors may no longer be considered distractors by the brain once it has learned that they can safely be ignored.
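The spatial encoding model analysis follows the general logic of inverted encoding models: fit a linear mapping from hypothesized location-tuned channels to EEG electrodes on training data, then invert that mapping to reconstruct channel responses from held-out data. A minimal numpy sketch, with all array shapes and names assumed for illustration rather than taken from the study:

```python
import numpy as np

def fit_spatial_encoding_model(train_eeg, train_channels):
    """Estimate weights W mapping channel responses to electrode activity.

    train_eeg:      (n_trials, n_electrodes) EEG amplitudes
    train_channels: (n_trials, n_channels) modeled channel responses
    Solves train_eeg ~= train_channels @ W by least squares.
    """
    W, *_ = np.linalg.lstsq(train_channels, train_eeg, rcond=None)
    return W  # (n_channels, n_electrodes)

def invert_model(W, test_eeg):
    """Reconstruct spatial channel responses from held-out EEG."""
    C, *_ = np.linalg.lstsq(W.T, test_eeg.T, rcond=None)
    return C.T  # (n_trials, n_channels)
```

On noiseless synthetic data this round-trip recovers the channel responses exactly; with real EEG, the reconstructed channel profiles are what one would compare across target- and distractor-learning conditions.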


2007 ◽  
Vol 97 (1) ◽  
pp. 921-926 ◽  
Author(s):  
Mark T. Wallace ◽  
Barry E. Stein

Multisensory integration refers to the process by which the brain synthesizes information from different senses to enhance sensitivity to external events. In the present experiments, animals were reared in an altered sensory environment in which visual and auditory stimuli were temporally coupled but originated from different locations. Neurons in the superior colliculus developed a seemingly anomalous form of multisensory integration in which spatially disparate visual-auditory stimuli were integrated in the same way that neurons in normally reared animals integrated visual-auditory stimuli from the same location. The data suggest that the principles governing multisensory integration are highly plastic and that there is no a priori spatial relationship between stimuli from different senses that is required for their integration. Rather, these principles appear to be established early in life based on the specific features of an animal's environment to best adapt it to deal with that environment later in life.


2014 ◽  
Vol 136 (09) ◽  
pp. S3-S5 ◽  
Author(s):  
Neville Hogan

This article explains how robots can help people recover after neurological injury. The most successful robot-administered therapy to aid neuro-recovery is based on several principles of learning. A visual display indicates a target location to which the patient should attempt to move. The robot sets up a virtual channel between the current location of the patient’s limb and the target location. If the patient moves along that channel, no forces are experienced. However, if the patient’s motion deviates to either side of that channel, those aiming errors are permitted but resisted by a programmable damped spring. If the patient moves too slowly (or does not initiate movement at all), the back wall of the channel (the end at the patient’s starting location) moves smoothly towards the target location, nudging the patient to the target. Repeating this process with high intensity provides the stimulus and statistics for the brain to reacquire movement control and coordination. Passively moving a patient’s limbs may help improve joint mobility.
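The virtual-channel behavior described above can be sketched as a force law: no force inside the channel, and a damped-spring restoring force when the limb deviates laterally. This is a minimal illustrative model, with all gains and dimensions assumed, not the controller of any specific therapy robot.

```python
import numpy as np

def channel_force(pos, vel, start, target, k=100.0, b=10.0, width=0.01):
    """Wall force (N) of a hypothetical virtual channel from start to target.

    pos, vel: current limb position (m) and velocity (m/s), 2-D
    k, b:     illustrative spring stiffness (N/m) and damping (N*s/m)
    width:    channel half-width (m); motion inside it is unresisted
    """
    axis = (target - start) / np.linalg.norm(target - start)
    offset = pos - start
    lateral = offset - np.dot(offset, axis) * axis  # deviation from the channel axis
    dist = np.linalg.norm(lateral)
    if dist <= width:
        return np.zeros(2)                          # inside the channel: no force
    direction = lateral / dist
    depth = dist - width                            # penetration into the wall
    vel_normal = np.dot(vel, direction)
    return -(k * depth + b * vel_normal) * direction  # damped-spring wall
```

The moving back wall that nudges a slow patient forward would be a second, similar force term along the channel axis, advancing over time.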


2020 ◽  
Vol 128 (10-11) ◽  
pp. 2665-2683 ◽  
Author(s):  
Grigorios G. Chrysos ◽  
Jean Kossaifi ◽  
Stefanos Zafeiriou

Abstract Conditional image generation lies at the heart of computer vision, and conditional generative adversarial networks (cGANs) have recently become the method of choice for this task, owing to their superior performance. The focus so far has largely been on performance improvement, with little effort devoted to making cGANs more robust to noise. However, the regression (of the generator) might lead to arbitrarily large errors in the output, which makes cGANs unreliable for real-world applications. In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address this issue. Specifically, we augment the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold, even in the presence of intense noise. We prove that RoCGAN shares similar theoretical properties with GANs and establish, on both synthetic and real data, the merits of our model. We perform a thorough experimental validation on large-scale datasets of natural scenes and faces and observe that our model outperforms existing cGAN architectures by a large margin. We also empirically demonstrate the performance of our approach in the face of two types of noise (adversarial and Bernoulli).


NeuroImage ◽  
2020 ◽  
Vol 221 ◽  
pp. 117173
Author(s):  
Alexander M. Puckett ◽  
Mark M. Schira ◽  
Zoey J. Isherwood ◽  
Jonathan D. Victor ◽  
James A. Roberts ◽  
...  

2015 ◽  
Vol 114 (6) ◽  
pp. 3211-3219 ◽  
Author(s):  
J. J. Tramper ◽  
W. P. Medendorp

It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses agreed with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in register, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms would allow.
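The reliability-weighted combination described here is standard inverse-variance cue fusion: each frame's estimate is weighted by its precision, and the fused estimate is never less precise than either one alone. A minimal sketch with scalar positions (the study's own model is richer than this):

```python
def optimal_combination(x_eye, var_eye, x_body, var_body):
    """Fuse eye- and body-centered location estimates by inverse-variance weighting.

    Returns the combined estimate and its variance; the combined variance
    1/(1/var_eye + 1/var_body) is at most the smaller of the two inputs.
    """
    w_eye = 1.0 / var_eye
    w_body = 1.0 / var_body
    x = (w_eye * x_eye + w_body * x_body) / (w_eye + w_body)
    var = 1.0 / (w_eye + w_body)
    return x, var
```

For example, two equally reliable estimates are simply averaged, and the fused variance is half of each single-frame variance.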


2019 ◽  
Vol 12 (4) ◽  
pp. 466-480
Author(s):  
Li Na ◽  
Xiong Zhiyong ◽  
Deng Tianqi ◽  
Ren Kai

Purpose The precise segmentation of brain tumors is the most important and crucial step in their diagnosis and treatment. Due to the presence of noise, uneven gray levels, blurred boundaries, and edema around the brain tumor region, the tumor region has indistinct image features, which poses a problem for diagnostics. The paper aims to discuss these issues. Design/methodology/approach In this paper, the authors propose an original segmentation solution using Tamura texture and an ensemble Support Vector Machine (SVM) structure. In the proposed technique, 124 features are extracted for each voxel, including Tamura texture features and grayscale features. These features are then ranked using the SVM-Recursive Feature Elimination method, which is also adopted to optimize the parameters of the Radial Basis Function kernel of the SVMs. Finally, bagging with random sampling is used to construct an ensemble SVM classifier that classifies voxel types via a weighted voting mechanism. Findings The experiments are conducted on the BraTS2015 data set. They demonstrate that Tamura texture is very useful in the segmentation of brain tumors, especially the line-likeness feature. The superior performance of the proposed ensemble SVM classifier is demonstrated by comparison with single SVM classifiers as well as other methods. Originality/value The authors propose an original segmentation solution using Tamura texture and an ensemble SVM structure.
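The bagging-plus-weighted-voting scheme can be sketched with scikit-learn's `SVC`. This is an illustrative skeleton, not the authors' pipeline: the feature extraction, feature ranking, and the particular member-weighting rule (here, training-sample accuracy) are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

def train_bagged_svms(X, y, n_estimators=5, seed=0):
    """Train RBF-kernel SVMs on bootstrap resamples of (X, y)."""
    rng = np.random.RandomState(seed)
    models, weights = [], []
    for _ in range(n_estimators):
        Xb, yb = resample(X, y, random_state=rng)   # bagging: sample with replacement
        clf = SVC(kernel="rbf").fit(Xb, yb)
        models.append(clf)
        weights.append(clf.score(Xb, yb))           # one simple choice of member weight
    return models, np.array(weights)

def predict_weighted_vote(models, weights, X):
    """Combine member predictions by weighted majority vote."""
    preds = np.array([m.predict(X) for m in models])  # (n_models, n_samples)
    classes = np.unique(preds)
    votes = np.array([[weights[preds[:, j] == c].sum() for c in classes]
                      for j in range(X.shape[0])])
    return classes[votes.argmax(axis=1)]
```

In the paper's setting, each sample would be a voxel's 124-dimensional feature vector and the classes the tumor tissue types.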


2019 ◽  
Vol 11 (21) ◽  
pp. 2542
Author(s):  
Huaping Xu ◽  
Yao Luo ◽  
Bo Yang ◽  
Zhaohong Li ◽  
Wei Liu

Tropospheric delays in spaceborne Interferometric Synthetic Aperture Radar (InSAR) can contaminate the measurement of small-amplitude Earth-surface deformation. In this paper, a novel TXY-correlated method is proposed, in which the main tropospheric delay components are jointly modeled in three dimensions and the long-scale and topography-correlated delay components are then corrected simultaneously. Moreover, scale filtering and alternating iteration are employed to accurately retrieve all components of the joint model. Both the TXY-correlated method and the conventional phase-based methods are tested on a total of 25 TerraSAR-X/TanDEM-X images collected over the Chaobai River site and Renhe Town in Beijing's Shunyi District, which contain both natural scenes and man-made targets. After correction by the TXY-correlated method, a higher correction rate of tropospheric delays and a greater reduction in the spatio-temporal standard deviations of the time-series displacement are observed in both non-urban and urban areas, which demonstrates the superior performance of the proposed method.
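The conventional phase-based baseline referred to above is often a linear fit of interferometric phase against topographic height, whose fitted trend is then subtracted. A minimal sketch of that baseline (not the proposed TXY-correlated method), with synthetic inputs assumed:

```python
import numpy as np

def topography_correlated_correction(phase, height):
    """Estimate and remove a linear height-dependent delay: phase ~= k*h + c.

    phase, height: 1-D arrays of unwrapped phase (rad) and terrain height (m)
    Returns the corrected phase and the fitted height coefficient k (rad/m).
    """
    A = np.column_stack([height, np.ones_like(height)])
    k, c = np.linalg.lstsq(A, phase, rcond=None)[0]
    return phase - (k * height + c), k
```

The limitation motivating the joint model is visible here: a single (k, c) pair cannot capture delay components that also vary with horizontal position and time.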

