A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map

2015 ◽  
Vol 18 (4) ◽  
pp. 460-472
Author(s):  
Kyungjoo Cheoi
2020 ◽  
Vol 12 (5) ◽  
pp. 781 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yang Chen ◽  
Wenhai Xu

Infrared and visible image fusion technology provides many benefits for human vision and computer image processing tasks, including enriched useful information and enhanced surveillance capabilities. However, existing fusion algorithms struggle to effectively integrate visual features from complex source images. In this paper, we design a novel infrared and visible image fusion algorithm based on visual attention technology, comprising a special visual attention system and a feature fusion strategy based on saliency maps. The visual attention system first uses the co-occurrence matrix to measure image texture complexity, which determines the modality from which a saliency map is computed. Moreover, we improve the iterative operator of the original visual attention model (VAM) by designing a fair competition mechanism that ensures visual features in detail regions are extracted accurately. For the feature fusion strategy, we use the obtained saliency map to combine the visual attention features and appropriately enhance fine details so that weak targets remain observable. Unlike general fusion algorithms, the proposed algorithm not only preserves the regions of interest but also retains rich fine details, which improves visual performance for both humans and computers. Experimental results under complicated ambient conditions show that the proposed algorithm outperforms state-of-the-art algorithms in both qualitative and quantitative evaluations, and the study can be extended to other types of image fusion.
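As a rough illustration of the two ingredients named in the abstract, the sketch below computes a texture-complexity score from a grey-level co-occurrence matrix (entropy of a horizontal GLCM) and performs saliency-weighted fusion of two source images. This is not the authors' implementation; the quantization level, offset, and weighting rule are all assumptions chosen for brevity.

```python
import numpy as np

def cooccurrence_entropy(img, levels=8):
    """Texture complexity as the entropy of a horizontal grey-level
    co-occurrence matrix (illustrative; parameters are assumptions)."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    # Count horizontally adjacent grey-level pairs.
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return -(nz * np.log2(nz)).sum()

def fuse(ir, vis, sal_ir, sal_vis):
    """Per-pixel fusion weighted by the two saliency maps."""
    w = sal_ir / (sal_ir + sal_vis + 1e-12)
    return w * ir + (1 - w) * vis
```

A uniform image yields zero entropy (a single occupied GLCM cell), while a textured image yields a higher score, so the score can be used to pick which modality drives the saliency computation.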


Author(s):  
Stefan Treue

The allocation of selective visual attention to a particular region of visual space has been attention’s most-studied variant. But attention can also be allocated to features, such as a particular colour or direction of motion. Studies from the visual cortex of rhesus monkeys have revealed a gain modulation across visual space that enhances the response of neurons that show a preference for the attended feature and a reduced responsiveness of those neurons tuned to the opposite feature. Such studies have also provided evidence for object-based attention, where the attentional enhancement of a neural representation affects the complex amalgamation of features that make up an object. All these forms of visual attention together create an integrated saliency map or priority map, that is, an integrated representation of relative stimulus strength and behavioural relevance across visual space that underlies our perception of the environment.
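The feature-based gain modulation described above can be rendered as a toy model: a direction-tuned neuron's response is scaled up when its preferred feature matches the attended feature and scaled down when it is opposite. All tuning and gain parameters here are illustrative assumptions, not values from the cited monkey studies.

```python
import numpy as np

def tuned_response(stim_dir, pref_dir, width=40.0):
    """Gaussian direction tuning over degrees (toy model)."""
    d = (stim_dir - pref_dir + 180) % 360 - 180  # wrap to [-180, 180)
    return np.exp(-0.5 * (d / width) ** 2)

def attended_response(stim_dir, pref_dir, attended_dir, gain=0.3):
    """Feature-similarity gain: multiplicative enhancement for neurons
    preferring the attended direction, suppression for the opposite."""
    similarity = np.cos(np.deg2rad(pref_dir - attended_dir))  # +1 match, -1 opposite
    return tuned_response(stim_dir, pref_dir) * (1 + gain * similarity)
```

In this sketch, attending to a neuron's preferred direction multiplies its response by 1.3, while attending to the opposite direction multiplies it by 0.7, mirroring the enhancement/suppression pattern the abstract describes.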


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been used to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, eye movement data and the saliency map are shown to analysts either as separate views or as merged views. However, analysts become frustrated when they must memorize all of the separate views, or when the eye movements obscure the saliency map in merged views. It is therefore difficult to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express an observer's visual attention. The visual clues representing visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand an observer's visual attention.
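One way to link gaze data with saliency features, as the abstract suggests, is to sample each feature map at the fixation points and report which feature dominates at each fixation. The sketch below is a minimal assumption-laden version of that idea (the data layout and channel names are hypothetical, not taken from the paper).

```python
import numpy as np

def saliency_at_fixations(saliency, fixations):
    """Sample a 2-D saliency map at (x, y) fixation points,
    clamping coordinates to the image bounds."""
    h, w = saliency.shape
    vals = []
    for x, y in fixations:
        xi = min(max(int(round(x)), 0), w - 1)
        yi = min(max(int(round(y)), 0), h - 1)
        vals.append(saliency[yi, xi])
    return np.array(vals)

def dominant_feature(feature_maps, fixations):
    """For each fixation, name the feature channel with the highest
    saliency there (channel names are illustrative assumptions)."""
    names = list(feature_maps)
    vals = np.stack([saliency_at_fixations(feature_maps[n], fixations)
                     for n in names])
    return [names[i] for i in vals.argmax(axis=0)]
```

The per-fixation dominant channel could then drive the visual clues (e.g. glyph color) in a gaze visualization, so the saliency information rides along with the scanpath instead of obscuring it.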


2008 ◽  
Vol 12 (5) ◽  
pp. 182-186 ◽  
Author(s):  
Barbara G. Shinn-Cunningham
