Single-frame super-resolution by a cortex based mechanism using high level visual features in natural images

Author(s):  
O. Kursun ◽  
O. Favorov
2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Daniel Lindh ◽  
Ilja G. Sligte ◽  
Sara Assecondi ◽  
Kimron L. Shapiro ◽  
Ian Charest

Abstract Conscious perception is crucial for adaptive behaviour, yet access to consciousness varies for different types of objects. The visual system comprises regions with widely distributed category information and exemplar-level representations that cluster according to category. Does this categorical organisation in the brain provide insight into object-specific access to consciousness? We address this question using the Attentional Blink (AB) approach with visual objects as targets. We find large differences across categories in the AB. We then employ activation patterns extracted from a deep convolutional neural network (DCNN) to reveal that these differences depend on mid- to high-level, rather than low-level, visual features. We further show that these visual features can be used to explain variance in performance across trials. Taken together, our results suggest that the specific organisation of the higher-tier visual system underlies important functions relevant for conscious perception of differing natural images.


Sensors ◽  
2016 ◽  
Vol 16 (11) ◽  
pp. 1836 ◽  
Author(s):  
Xiwei Huang ◽  
Yu Jiang ◽  
Xu Liu ◽  
Hang Xu ◽  
Zhi Han ◽  
...  

Author(s):  
Zewen Xu ◽  
Zheng Rong ◽  
Yihong Wu

Abstract In recent years, simultaneous localization and mapping in dynamic environments (dynamic SLAM) has attracted significant attention from both academia and industry. Some pioneering work on this technique has expanded the potential of robotic applications. Compared to standard SLAM under the static-world assumption, dynamic SLAM divides features into static and dynamic categories and leverages each type of feature appropriately. Dynamic SLAM can therefore provide more robust localization for intelligent robots that operate in complex dynamic environments. Additionally, to meet the demands of some high-level tasks, dynamic SLAM can be integrated with multiple object tracking. This article presents a survey on dynamic SLAM from the perspective of feature choices and discusses the advantages and disadvantages of different visual features.


PLoS ONE ◽  
2012 ◽  
Vol 7 (6) ◽  
pp. e37575 ◽  
Author(s):  
Gesche M. Huebner ◽  
Karl R. Gegenfurtner

10.1038/nn866 ◽  
2002 ◽  
Vol 5 (7) ◽  
pp. 629-630 ◽  
Author(s):  
Guillaume A. Rousselet ◽  
Michèle Fabre-Thorpe ◽  
Simon J. Thorpe

2021 ◽  
Author(s):  
Marek A. Pedziwiatr ◽  
Elisabeth von dem Hagen ◽  
Christoph Teufel

Humans constantly move their eyes to explore the environment and obtain information. Competing theories of gaze guidance consider the factors driving eye movements within a dichotomy between low-level visual features and high-level object representations. However, recent developments in object perception indicate a complex and intricate relationship between features and objects. Specifically, image-independent object-knowledge can generate objecthood by dynamically reconfiguring how feature space is carved up by the visual system. Here, we adopt this emerging perspective of object perception, moving away from the simplifying dichotomy between features and objects in explanations of gaze guidance. We recorded eye movements in response to stimuli that appear as meaningless patches on initial viewing but are experienced as coherent objects once relevant object-knowledge has been acquired. We demonstrate that gaze guidance differs substantially depending on whether observers experienced the same stimuli as meaningless patches or organised them into object representations. In particular, fixations on identical images became object-centred, less dispersed, and more consistent across observers once exposed to relevant prior object-knowledge. Observers' gaze behaviour also indicated a shift from exploratory information sampling to a strategy of extracting information mainly from selected, object-related image areas. These effects were evident from the first fixations on the image. Importantly, however, eye movements were not fully determined by object representations but were best explained by a simple model that integrates image-computable features and high-level, knowledge-dependent object representations. Overall, the results show how information sampling via eye movements in humans is guided by a dynamic interaction between image-computable features and knowledge-driven perceptual organisation.
