Research on binocular stereo video attention model based on human visual system

Author(s):  
Ruan Ruolin ◽  
Xia Yang ◽  
Yin Liming ◽  
Wu Aixia ◽  
Shen Jing

Author(s):  
Wen-Han Zhu ◽  
Wei Sun ◽  
Xiong-Kuo Min ◽  
Guang-Tao Zhai ◽  
Xiao-Kang Yang

Objective image quality assessment (IQA) plays an important role in various visual communication systems, as it can automatically and efficiently predict the perceived quality of images. The human eye is the ultimate evaluator of visual experience, so modeling the human visual system (HVS) is a core issue for objective IQA and visual experience optimization. Traditional models based on black-box fitting have low interpretability and offer little effective guidance for experience optimization, while models based on physiological simulation are hard to integrate into practical visual communication services because of their high computational complexity. To bridge the gap between signal distortion and visual experience, in this paper we propose a novel perceptual no-reference (NR) IQA algorithm based on structural computational modeling of the HVS. Following the processing mechanism of the human brain, we divide visual signal processing into a low-level visual layer, a middle-level visual layer and a high-level visual layer, which perform pixel information processing, primitive information processing and global image information processing, respectively. Natural scene statistics (NSS) based features, deep features and free-energy based features are extracted from these three layers. Support vector regression (SVR) is employed to aggregate the features into the final quality prediction. Extensive experimental comparisons on three widely used benchmark IQA databases (LIVE, CSIQ and TID2013) demonstrate that the proposed metric is highly competitive with or outperforms state-of-the-art NR IQA measures.
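As a rough illustration of the final aggregation stage, the sketch below trains a support vector regressor on concatenated low-, middle- and high-level feature vectors. The feature dimensions, random placeholder data and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of SVR-based feature aggregation, assuming precomputed
# per-image feature vectors; all values here are illustrative placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder features for N training images:
# low-level (NSS-based), middle-level (deep), high-level (free-energy based).
N = 200
f_low = rng.normal(size=(N, 36))    # e.g., NSS statistics
f_mid = rng.normal(size=(N, 128))   # e.g., pooled deep-network activations
f_high = rng.normal(size=(N, 8))    # e.g., free-energy / prediction-residual terms
X = np.hstack([f_low, f_mid, f_high])
y = rng.uniform(0, 100, size=N)     # subjective quality scores (e.g., MOS)

# Support vector regression maps the concatenated features to a quality score.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
predicted_quality = model.predict(X[:5])
```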


Author(s):  
Erik Cuevas ◽  
Alma Rodríguez ◽  
Avelina Alejo-Reyes ◽  
Carolina Del-Valle-Soto

2012 ◽  
Vol 220-223 ◽  
pp. 2204-2207
Author(s):  
Wen Gang Feng ◽  
Xipin Zhou

A novel bionic, middle-semantic object annotation framework is presented in this paper. The model is built on perception as defined by the human visual system. First, super-pixels are used to represent the images, and a conditional random field labels each super-pixel, thereby annotating the different classes of objects. In the next step, building on this result, an image pyramid is used to represent the image and to extract sub-regions of objects belonging to the same class. After descriptors are extracted to represent these patches, all patches are projected onto a manifold, which annotates the different views of objects of the same class. Experiments show that the bionic, middle-semantic object annotation framework obtains superior accuracy.
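The super-pixel representation step can be illustrated with off-the-shelf tools. The sketch below uses scikit-image's SLIC as a stand-in segmentation and computes one simple per-super-pixel descriptor; the CRF labelling, image-pyramid and manifold-projection stages are omitted, and all library and parameter choices are assumptions rather than the framework described in the paper.

```python
# Minimal sketch of a super-pixel representation (first stage only).
import numpy as np
from skimage import data, segmentation

image = data.astronaut()                       # stand-in for an image to annotate
segments = segmentation.slic(image, n_segments=200, compactness=10,
                             start_label=0)    # over-segment into super-pixels

# One simple per-super-pixel descriptor: the mean colour of its pixels.
# In the described pipeline, such descriptors would feed a conditional
# random field that assigns an object-class label to every super-pixel.
descriptors = np.array([image[segments == s].mean(axis=0)
                        for s in np.unique(segments)])
print(descriptors.shape)                       # (n_superpixels, 3)
```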


2021 ◽  
Author(s):  
M.C.J.M. Vissenberg ◽  
M. Perz ◽  
M.A.H. Donners ◽  
D. Sekulovski

Conventional discomfort glare measures are based on glare source properties such as luminous intensity or luminance and are typically valid only for specific situations and specific types of light sources. For instance, the Unified Glare Rating (UGR) is intended for indoor lighting conditions with medium-sized glare sources, whereas another class of discomfort glare measures is devoted specifically to car headlamps. Recently, CIE TC 3-57 was established with the aim of developing a more generic glare sensation model based on the human visual system. We present an example of such a model, including a detailed description of aspects such as pupil constriction, retinal image formation, photoreceptor response and adaptation, receptive field-type filtering in the retina, and neural spatial summation. The linear correlation of the model with UGR in an indoor setting, and with subjective glare responses in an outdoor-like setting, indicates that the human-visual-system-based model may indeed be considered generic.
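For reference, the conventional UGR mentioned above is computed from the background luminance and, for each glare source, its luminance, solid angle and Guth position index. The small sketch below evaluates the standard UGR formula with illustrative input values; it is not part of the visual-system-based model presented here.

```python
# Standard Unified Glare Rating: UGR = 8 * log10( (0.25 / L_b) * sum(L^2 * w / p^2) )
import math

def ugr(background_luminance, sources):
    """Unified Glare Rating.

    background_luminance : background luminance L_b in cd/m^2
    sources : iterable of (luminance [cd/m^2], solid_angle [sr], Guth position index)
    """
    s = sum(L**2 * omega / p**2 for L, omega, p in sources)
    return 8.0 * math.log10(0.25 / background_luminance * s)

# Two medium-sized luminaires against a 40 cd/m^2 background (assumed values).
print(round(ugr(40.0, [(2000.0, 0.01, 1.5), (1500.0, 0.008, 2.0)]), 1))  # ~17.2
```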


1997 ◽  
Vol 9 (2) ◽  
pp. 132-134
Author(s):  
Hitomi Koike ◽  
Masanori Idesawa

The human visual system perceives three-dimensional structure using a variety of cues, such as perspective, binocular parallax, motion parallax and texture gradient [2]. Among these, binocular stereo viewing based on binocular parallax is one of the most important means of perceiving the three-dimensional structure of objects. In recent years, three-dimensional presentation methods based on binocular stereo viewing have come into wide use in computer graphics, and in research aimed at elucidating the human visual mechanism, studies using such computer-graphics-based three-dimensional presentation are being actively carried out. These studies have led to discoveries that were never anticipated in the past and to new knowledge concerning the human visual system. Various methods are used for three-dimensional presentation based on binocular viewing; the anaglyph method is one of them (Fig. 1). In the anaglyph method, the left-eye and right-eye images are drawn in different colors and are observed after being separated by filters corresponding to those colors. This makes the anaglyph method extremely effective for conducting simple experiments on binocular stereo viewing [1]. However, in observations of moving objects that the present authors recently conducted, a strange phenomenon was observed: a triangle rotating in a plane of uniform depth is perceived as if it were moving on a slant. When this phenomenon was examined, it was confirmed to be caused by the Pulfrich effect arising from the difference in brightness between the red and blue colors [4]. This indicates that sufficient attention must be paid to this phenomenon whenever the anaglyph method is used for the presentation and observation of moving objects. The present report introduces several phenomena observed in the course of studying the cause of this effect.
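As a simple illustration of how the anaglyph method encodes the two views in different colors, the sketch below combines a left-eye image and a right-eye image into a single red-cyan anaglyph, so that colored filter glasses can separate them again. The color assignment and the random placeholder images are assumptions for illustration only, not the stimuli used by the authors.

```python
# Minimal red-cyan anaglyph composition from a left/right image pair.
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine left/right views into a single red-cyan anaglyph image."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]        # red channel   <- left-eye view
    anaglyph[..., 1:] = right_rgb[..., 1:]     # green + blue  <- right-eye view
    return anaglyph

# Placeholder stereo pair (random noise stands in for rendered views).
left = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
right = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(make_anaglyph(left, right).shape)        # (480, 640, 3)
```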

