local cues
Recently Published Documents


TOTAL DOCUMENTS: 75 (five years: 8)

H-INDEX: 19 (five years: 3)

2021
Author(s): Jake T. Jordan, J. Tiago Gonçalves

Abstract: Head-fixed linear treadmill tasks have been used to study hippocampal physiology in mice. Although some hippocampal neurons establish place fields along linear treadmills, it is not clear whether the hippocampus is required for spatial memory in this task. Using a Designer Receptors Exclusively Activated by Designer Drugs (DREADDs) approach, we found that silencing hippocampal output on rewarded treadmill tasks impaired the search for rewards signaled by spatial cues but did not impair the search for rewards signaled by local cues, recapitulating findings from other behavioral tasks. These findings serve to contextualize data on hippocampal physiology from mice performing this task.


AI, 2020, Vol 1 (4), pp. 436-464
Author(s): Sudarshan Ramenahalli

Figure-ground organization (FGO), inferring the spatial depth ordering of objects in a visual scene, involves determining which side of an occlusion boundary is figure (closer to the observer) and which is ground (farther from the observer). A combination of global cues, such as convexity, and local cues, such as T-junctions, is involved in this process. A biologically motivated, feed-forward computational model of FGO is presented that incorporates convexity, surroundedness, and parallelism as global cues and spectral anisotropy (SA) and T-junctions as local cues. While SA is computed in a biologically plausible manner, the inclusion of T-junctions is biologically motivated. The model consists of three independent feature channels, Color, Intensity, and Orientation, but SA and T-junctions are introduced only in the Orientation channel, as these properties are specific to that feature of objects. The effect of adding each local cue independently, and both simultaneously, to the model with no local cues is studied. Model performance is evaluated by figure-ground classification accuracy (FGCA) at every border location using the BSDS 300 figure-ground dataset. Each local cue, when added alone, yields a statistically significant improvement in the FGCA of the model, suggesting its usefulness as an independent FGO cue. The model with both local cues achieves a higher FGCA than the models with individual cues, indicating that SA and T-junctions are not mutually contradictory. Compared to the model with no local cues, the feed-forward model with both local cues achieves an improvement of ≥8.78% in FGCA.
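The abstract does not specify how the cues are combined; as an illustration only, the sketch below (Python/NumPy, with all data and weights hypothetical) shows how a simple additive combination of global and local cue evidence at border locations could be scored by figure-ground classification accuracy (FGCA), the metric used in the paper:

```python
import numpy as np

def fgca(global_cues, local_cues, labels, w_global=1.0, w_local=1.0):
    """Score figure-ground classification accuracy at border locations.

    global_cues, local_cues: (n_borders,) signed evidence per border pixel,
        positive values voting that the 'left' side is figure.
    labels: (n_borders,) ground truth, +1 if the left side is figure, -1 otherwise.
    """
    evidence = w_global * global_cues + w_local * local_cues
    predictions = np.sign(evidence)
    return np.mean(predictions == labels)

# Hypothetical toy data: 1000 border locations with noisy cue evidence.
rng = np.random.default_rng(0)
labels = rng.choice([-1, 1], size=1000)
global_cues = labels * 0.5 + rng.normal(0, 1, 1000)  # e.g., convexity
local_cues = labels * 0.8 + rng.normal(0, 1, 1000)   # e.g., SA, T-junctions

print("global only:", fgca(global_cues, np.zeros(1000), labels))
print("global + local:", fgca(global_cues, local_cues, labels))
```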


Information, 2020, Vol 11 (3), pp. 128
Author(s): Marco Leo, Pierluigi Carcagnì, Pier Luigi Mazzeo, Paolo Spagnolo, Dario Cazzato, ...

This paper gives an overview of cutting-edge approaches to facial cue analysis in the healthcare domain. The survey is not limited to global face analysis; it also covers methods based on local cues (e.g., the eyes). A research taxonomy is introduced by dividing the face into its main features: eyes, mouth, muscles, skin, and shape. For each facial feature, the computer vision tasks that analyze it and the healthcare goals those tasks could serve are detailed.


2020, Vol 23 (2), pp. 367-387
Author(s): Anastasia Morandi-Raikova, Giorgio Vallortigara, Uwe Mayer

2019
Author(s): Arindam Bhattacharjee, Christoph Braun, Cornelius Schwarz

Abstract: Humans are classically thought to use either spectral decomposition or averaging to identify vibrotactile signals. These are general-purpose 'global' codes that require integration of the signal over long stretches of time. Natural vibrotactile signals, however, likely contain short signature events that can be detected and used to infer textures instantaneously, with minimal integration, suggesting a hitherto ignored 'local' code. Here, by employing pulsatile stimuli and a change-detection psychophysical task, we studied whether humans make use of local cues. We compared three local cues based on instantaneous skin position and its derivatives, as well as six global cues calculated as summed powers (with exponents 1, 2, and 3) of velocity and acceleration. Deliberate manipulation of pulse width and amplitude (local + global) as well as pulse frequency (global) allowed us to disentangle local from global codes. The results singled out maximum velocity, an instantaneous code, as a likely and dominant coding variable that humans rely on to perform the task. Comparing stimuli that contain local cues with stimuli that lack them demonstrated that performance based exclusively on global cues is rather poor compared to situations where local cues are also available. Our results are in line with the notion that humans not only use local cues but that local cues may even play a dominant role in perception. Our results parallel previous findings in rodents, pointing to the possibility that quite similar coding strategies evolved in the whisker and finger tactile systems.

Significance statement: The brain is believed to select coding symbols in sensory signals that most efficiently convey functionally relevant information about the world. For instance, the visual system is widely believed to use spatially local features, such as edge orientation, to delineate a visual scene. For the tactile system, only global, general-purpose coding schemes have been discussed so far. Based on the insight that moving contacts, characteristic of active touch, feature short-lived stick-slip events (frictional movements that transfer substantial texture information), one should expect the brain to use a temporally local code, extracting and instantaneously analyzing short snippets of skin movement. Here, we provide the first analytical psychophysical evidence in humans that this is indeed the case.
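To make the cue definitions concrete, here is a minimal NumPy sketch computing the two families of candidate codes named in the abstract, three local (instantaneous) cues from skin position and its derivatives, and six global cues as summed powers of velocity and acceleration, on a hypothetical displacement trace (all stimulus parameters are invented for illustration):

```python
import numpy as np

# Hypothetical skin-displacement trace: a brief Gaussian pulse sampled at 10 kHz.
fs = 10_000
t = np.arange(0, 0.05, 1 / fs)
position = 50e-6 * np.exp(-((t - 0.025) / 0.002) ** 2)  # 50 µm peak, ~2 ms width

velocity = np.gradient(position, 1 / fs)
acceleration = np.gradient(velocity, 1 / fs)

# Local (instantaneous) candidate cues: extrema of position and its derivatives.
local_cues = {
    "max_position": np.max(np.abs(position)),
    "max_velocity": np.max(np.abs(velocity)),  # the dominant cue per the study
    "max_acceleration": np.max(np.abs(acceleration)),
}

# Global candidate cues: summed powers (exponents 1, 2, 3) of velocity and
# acceleration, i.e., codes that integrate over the whole signal.
global_cues = {
    f"sum_|{name}|^{p}": np.sum(np.abs(sig) ** p)
    for name, sig in [("v", velocity), ("a", acceleration)]
    for p in (1, 2, 3)
}

print(local_cues)
print(global_cues)
```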


2019, Vol 11 (16), pp. 1897
Author(s): Yan Zhang, Weiguo Gong, Jingxi Sun, Weihong Li

Efficiently utilizing the vast amounts of easily accessed aerial imagery is a critical challenge for researchers, given the proliferation of high-resolution remote sensing sensors and platforms. Recently, the rapid development of deep neural networks (DNNs) has been a focus in remote sensing, and these networks have achieved remarkable progress in image classification and segmentation tasks. However, current DNN models inevitably lose local cues during downsampling operations. Additionally, even with skip connections, the upsampling methods cannot properly recover structural information such as edge intersections, parallelism, and symmetry. In this paper, we propose Web-Net, a nested network architecture with hierarchical dense connections, to handle these issues. We design the Ultra-Hierarchical Sampling (UHS) block to absorb and fuse the inter-level feature maps and to propagate feature maps among different levels. The position-wise downsampling/upsampling methods in the UHS block iteratively change the shape of the inputs while preserving the number of their elements, so that low-level local cues and high-level semantic cues are properly preserved. We verify the effectiveness of the proposed Web-Net on the Inria Aerial Dataset and the WHU Dataset. The proposed Web-Net achieves an overall accuracy of 96.97% and an IoU (intersection over union) of 80.10% on the Inria Aerial Dataset, surpassing the state-of-the-art SegNet by 1.8% and 9.96%, respectively; the results on the WHU Dataset also support the effectiveness of the proposed Web-Net. Additionally, benefiting from the nested network architecture and the UHS block, the buildings extracted on the prediction maps are noticeably sharper and more accurately identified, and even building areas covered by shadows can be correctly extracted. These results indicate that the proposed Web-Net is both effective and efficient for building extraction from high-resolution remote sensing images.
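The abstract does not detail the UHS operations; one shape-changing, element-preserving operation of the kind it describes is a position-wise (space-to-depth) reshuffle. A minimal PyTorch sketch, offered only as an assumption-laden illustration (the class and method names below are hypothetical, not from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionWiseResample(nn.Module):
    """Illustrative shape-changing, element-preserving resampling.

    pixel_unshuffle moves spatial positions into channels (downsampling);
    pixel_shuffle moves channels back into spatial positions (upsampling).
    Neither discards values, unlike pooling or strided convolution, so
    low-level local cues can survive repeated resampling.
    """

    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale

    def down(self, x: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> (B, C*s*s, H/s, W/s); element count unchanged.
        return F.pixel_unshuffle(x, self.scale)

    def up(self, x: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> (B, C/(s*s), H*s, W*s); element count unchanged.
        return F.pixel_shuffle(x, self.scale)

x = torch.randn(1, 16, 64, 64)
resample = PositionWiseResample(scale=2)
assert resample.down(x).numel() == x.numel()
assert torch.allclose(resample.up(resample.down(x)), x)  # lossless round trip
```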


2019, Vol 63 (1), pp. 3-30
Author(s): Odette Scharenborg, Sofoklis Kakouros, Brechtje Post, Fanny Meunier

This paper investigates whether sentence accent detection in a non-native language depends on the (relative) similarity between the prosodic cues to accent in the non-native and the native language, and whether cross-linguistic differences in the use of local versus more widely distributed (i.e., non-local) cues to sentence accent lead to differential effects of background noise on sentence accent detection in a non-native language. We compared Dutch, Finnish, and French non-native listeners of English, whose cueing and use of prosodic prominence are progressively further removed from English, and compared their results on a phoneme-monitoring task at different noise levels and in a quiet condition to those of native listeners. Overall phoneme detection performance was high for both native and non-native listeners but deteriorated to the same extent in the presence of background noise. Crucially, the relative similarity between the prosodic cues to sentence accent in one's native language and those in a non-native language does not determine the ability to perceive and use sentence accent for speech perception in that non-native language. Moreover, proficiency in the non-native language is not a straightforward predictor of sentence accent perception performance, although high proficiency can seemingly overcome certain prosodic differences between the native and non-native language. Instead, performance is determined by the extent to which listeners rely on local cues (English and Dutch) versus more distributed cues (Finnish and French), as distributed cues survive the presence of background noise better.


2018, Vol 112 (6), pp. 731-744
Author(s): Andrea M. Nguyen, Tyler J. Ferro, Dianne T. V. Pawluk

Introduction: This study presents and evaluates a method that uses local cues to indicate perspective in tactile diagrams, compared with the current visual perspective method.
Methods: In the new method, perspective for an object is represented with standard visual perspective lines, but the thickness of the lines varies as a function of depth away from the viewer. Performance of visually impaired study participants (that is, those who are blind or have low vision) using the new method and the standard visual perspective method was compared as a function of: onset of vision loss, perspective method used, repetition, and the object and perspective presented.
Results: For the main task, the method used, Wald χ²(1, 585) = 7.147, p = 0.008, and the method-by-repetition interaction, Wald χ²(1, 585) = 4.272, p = 0.039, had significant effects. Participants performed better with the new method, and only this method showed a significant improvement between repetitions.
Discussion: The findings demonstrate that the new method improved users' performance on tasks involving perspective in diagrams over the standard visual perspective method. The data also indicate that with more repetition the improvement could become even greater than observed in this study.
Implications for practitioners: Perspective frequently plays a critical role in understanding questions in mathematics and science. Adding local cues to a standard perspective diagram shows promise for improving users' ability to interpret objects.
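A minimal sketch of the depth-to-thickness idea, assuming a simple pinhole projection and a linear thickness-depth mapping (both hypothetical; the article does not give the exact thickness function), drawing a box whose nearer edges are rendered thicker:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical box vertices (x, y, z), with z = depth away from the viewer.
verts = np.array([[x, y, z] for z in (2.0, 3.0) for y in (0, 1) for x in (0, 1)])
edges = [(0, 1), (0, 2), (1, 3), (2, 3),   # front face (z = 2)
         (4, 5), (4, 6), (5, 7), (6, 7),   # back face (z = 3)
         (0, 4), (1, 5), (2, 6), (3, 7)]   # connecting perspective lines

def project(v):
    # Simple pinhole projection: divide x and y by depth.
    return v[0] / v[2], v[1] / v[2]

def thickness(z_mean, z_near=2.0, z_far=3.0, t_near=4.0, t_far=1.0):
    # Hypothetical linear mapping: nearer edges are drawn thicker.
    frac = (z_mean - z_near) / (z_far - z_near)
    return t_near + frac * (t_far - t_near)

fig, ax = plt.subplots()
for i, j in edges:
    (x0, y0), (x1, y1) = project(verts[i]), project(verts[j])
    z_mean = (verts[i][2] + verts[j][2]) / 2
    ax.plot([x0, x1], [y0, y1], color="black", linewidth=thickness(z_mean))
ax.set_aspect("equal")
plt.show()
```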

