A Statistical Approach to Human-Like Visual Attention and Saliency Detection for Robot Vision: Application to Wildland Fires’ Detection

Author(s):  
Viachaslau Kachurka ◽  
Kurosh Madani ◽  
Christophe Sabourin ◽  
Vladimir Golovko

Author(s):  
Annalisa Appice ◽  
Angelo Cannarile ◽  
Antonella Falini ◽  
Donato Malerba ◽  
Francesca Mazzia ◽  
...  

Saliency detection mimics the natural visual attention mechanism, which identifies an image region as salient when it attracts visual attention more than the background does. This image analysis task has many important applications in fields such as military science, ocean research, resource exploration, and disaster and land-use monitoring. Although hundreds of models have been proposed for saliency detection in colour images, there is still considerable room for improving saliency detection performance in hyperspectral image analysis. In the present study, an ensemble learning methodology for saliency detection in hyperspectral imagery datasets is presented. It enhances the saliency assignments yielded by a robust colour-based technique with new saliency information extracted by taking advantage of the abundance of spectral information in multiple hyperspectral images. The experiments performed with the proposed methodology provide encouraging results, also in comparison with several competitors.
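The abstract does not detail the ensemble mechanism, but the general idea of combining a colour-based saliency map with a spectral cue can be sketched as follows. This is a minimal illustration, not the authors' method: the spectral-distinctiveness cue (distance of each pixel's spectrum from the image mean) and the mixing weight `alpha` are assumptions for demonstration.

```python
import numpy as np

def band_saliency(cube):
    """Simple spectral cue: distance of each pixel's spectrum from the
    image-wide mean spectrum, normalised to [0, 1]. Illustrative only."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    dist = np.linalg.norm(flat - flat.mean(axis=0), axis=1).reshape(h, w)
    span = dist.max() - dist.min()
    return (dist - dist.min()) / (span + 1e-12)

def ensemble_saliency(colour_sal, cube, alpha=0.5):
    """Blend a colour-based saliency map with the spectral cue; alpha is
    a hypothetical mixing weight, not taken from the paper."""
    return alpha * colour_sal + (1 - alpha) * band_saliency(cube)
```

A spectrally anomalous pixel then receives high saliency even when the colour-based map alone misses it.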


2011 ◽  
Vol 2011 ◽  
pp. 1-15
Author(s):  
Ismail Ktata ◽  
Fakhreddine Ghaffari ◽  
Bertrand Granado ◽  
Mohamed Abid

Applications executed on embedded systems require dynamicity and flexibility according to user and environment needs. A dynamically reconfigurable architecture can satisfy these requirements but needs efficient management mechanisms. In this paper, we propose a dedicated application modeling technique that helps to establish a predictive scheduling approach for managing a dynamically reconfigurable architecture named OLLAF. OLLAF is designed to support an operating system that deals with complex embedded applications. This model is used for predictive scheduling based on an early estimation of the application's dynamicity. A vision system of a mobile robot application has been used to validate the presented model and scheduling approach. We demonstrate that, with our modeling, an efficient predictive scheduling can be realized on a robot vision application with a mean error of 6.5%.


2014 ◽  
Vol 658 ◽  
pp. 678-683 ◽  
Author(s):  
Cristian Pop ◽  
Sanda Margareta Grigorescu ◽  
Erwin Christian Lovasz

This paper presents a robot vision application, implemented in the MATLAB working environment, developed for feature-based object recognition, object sorting and manipulation, based on shape classification and pose calculation for proper positioning. The application described in this article, designed to detect, identify, classify and manipulate objects, builds on previous robot vision applications that are presented in more detail in [1]. The idea underlying those applications is to determine the type, position and orientation of the work pieces (in those cases, different types of bearings). Taking this further, the presented application uses objects whose shapes have gradually increasing levels of complexity. For this reason, pattern recognition is performed by training a two-layer neural network. The network, together with its input and output vectors, is presented.
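A two-layer (one hidden layer) classifier of the kind the abstract describes can be sketched as follows. This is a generic illustration, not the paper's network: the two shape features, the hidden-layer size, and all training hyperparameters are assumptions, and the paper's implementation is in MATLAB rather than Python.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, hidden=8, lr=0.5, epochs=3000):
    """Minimal two-layer MLP (tanh hidden layer, sigmoid outputs)
    trained with plain batch gradient descent on cross-entropy."""
    n_in, n_out = X.shape[1], y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    n = len(X)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        out = 1 / (1 + np.exp(-(h @ W2 + b2)))
        err = out - y                      # gradient at the output layer
        dh = (err @ W2.T) * (1 - h ** 2)   # back-propagated through tanh
        W2 -= lr * (h.T @ err) / n; b2 -= lr * err.sum(0) / n
        W1 -= lr * (X.T @ dh) / n;  b1 -= lr * dh.sum(0) / n
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return (1 / (1 + np.exp(-(h @ W2 + b2)))).argmax(1)
```

Input vectors would hold shape descriptors (e.g., area ratios or circularity) and output vectors one-hot class labels, mirroring the input/output structure the paper presents.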


2013 ◽  
Vol 333-335 ◽  
pp. 1171-1174
Author(s):  
Fan Hui ◽  
Ren Lu ◽  
Jin Jiang Li

Drawing on surveys of visual attention and its significance in psychology and physiology, researchers have in recent years proposed many visual attention models and algorithms, such as the Itti model and numerous saliency detection algorithms. Visual attention technology has also been applied in many directions, such as salient region shift detection, visual tracking, and network-loss-based models for video quality evaluation. This paper summarizes the various visual attention algorithms, their applications, and their significance.


2015 ◽  
Vol 734 ◽  
pp. 596-599 ◽  
Author(s):  
Deng Ping Fan ◽  
Juan Wang ◽  
Xue Mei Liang

The Context-Aware Saliency (CA) model, a recent model for saliency detection, has a strong limitation: it is very time consuming. This paper improves on this shortcoming with a faster variant, Fast-CA, and proposes a novel framework for image retrieval and image representation. The proposed framework derives from Fast-CA and the multi-texton histogram. The mechanisms of visual attention are simulated and used to detect the salient areas of an image, and a very simple threshold method is adopted to detect the dominant saliency areas. Colour, texture and edge features are then extracted to describe the image content within the dominant saliency areas, and these features are integrated into one entity as the image representation, called the dominant saliency areas histogram (DSAH), which is used for image retrieval. Experimental results indicate that our algorithm outperforms the multi-texton histogram (MTH) and edge histogram descriptors (EHD) on the Corel dataset of 10,000 natural images.
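The threshold-then-histogram step can be sketched in simplified form. This is not the DSAH itself: the paper integrates colour, texture and edge features, whereas this sketch histograms a single grey-level feature, and the threshold and bin count are illustrative assumptions.

```python
import numpy as np

def dominant_saliency_histogram(image, saliency, thresh=0.5, bins=8):
    """Keep pixels whose saliency exceeds a simple threshold (the
    'dominant saliency areas'), then build a normalised histogram of a
    quantised feature inside that mask. Grey levels stand in for the
    paper's colour/texture/edge features."""
    mask = saliency >= thresh
    vals = image[mask]
    hist, _ = np.histogram(vals, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)   # normalised retrieval descriptor
```

Retrieval then reduces to comparing these descriptors (e.g., by histogram intersection) between a query image and the database.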


2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Meng Li ◽  
Yi Zhan ◽  
Lidan Zhang

We present a nonlocal variational model for saliency detection from still images, in which various features for visual attention can be detected by minimizing an energy functional. The associated Euler-Lagrange equation is a nonlocal p-Laplacian type diffusion equation with two reaction terms, and the diffusion is nonlinear. The main advantage of our method is that it provides flexible and intuitive control over the detection procedure through the temporal evolution of the Euler-Lagrange equation. Experimental results on various images show that our model makes background details diminish over time while subtle details in the foreground are preserved very well.
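The structure the abstract describes can be written in generic form; the paper's exact functional may differ, and the weight $w$, exponent $p$, and reaction terms $\mathcal{R}_1, \mathcal{R}_2$ below are placeholders:

```latex
% Generic nonlocal energy with two reaction terms (illustrative form)
E(u) = \frac{1}{p}\int_\Omega\!\int_\Omega w(x,y)\,\lvert u(y)-u(x)\rvert^{p}\,dy\,dx
       + \lambda_1 \mathcal{R}_1(u) + \lambda_2 \mathcal{R}_2(u)

% Gradient flow of the Euler--Lagrange equation: a nonlocal
% p-Laplacian diffusion plus the two reaction terms
\partial_t u(x) = \int_\Omega w(x,y)\,\lvert u(y)-u(x)\rvert^{p-2}\bigl(u(y)-u(x)\bigr)\,dy
                  - \lambda_1 \mathcal{R}_1'(u) - \lambda_2 \mathcal{R}_2'(u)
```

Evolving this flow in time is what gives the intuitive control the abstract mentions: background detail is diffused away while the reaction terms act to retain foreground structure.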


Author(s):  
Jila Hosseinkhani ◽  
Chris Joslin

A key factor in designing saliency detection algorithms for videos is understanding how different visual cues affect the human perceptual and visual system. To this end, this article investigates bottom-up features, including colour, texture, and motion, in video sequences in a one-by-one scenario, to provide a ranking system stating the most dominant circumstances for each feature. In this work, the individual features and various visual saliency attributes are investigated under conditions free of cognitive bias; human cognition refers to a systematic pattern of perceptual and rational judgments and decision-making actions. First, the test data are modeled as 2D videos in a virtual environment to avoid any cognitive bias. Then, an experiment with human subjects is performed to determine which colours, textures, motion directions, and motion speeds attract human attention most. The proposed benchmark ranking system of salient visual attention stimuli was achieved using an eye tracking procedure.


Author(s):  
Amirhossein Jamalian ◽  
Fred H. Hamker

A rich stream of visual data enters the cameras of a typical artificial vision system (e.g., a robot), and since processing this volume of data in real-time is almost impossible, a clever mechanism is required to reduce the amount of trivial visual data. Visual attention might be the solution. The idea is to control the information flow, and thus to improve vision, by focusing resources on a few special aspects instead of the whole visual scene. However, does attention only speed up processing, or can the understanding of human visual attention provide additional guidance for robot vision research? In this chapter, first, some basic concepts of the primate visual system and visual attention are introduced. Afterward, a new taxonomy of biologically inspired models of attention, particularly those used in robotics applications (e.g., in object detection and recognition), is given, and finally, future research trends in the modelling of visual attention and its applications are highlighted.

