Monocular and Binocular Interactions Oriented Deformable Convolutional Networks for Blind Quality Assessment of Stereoscopic Omnidirectional Images

Author(s): Xiongli Chai, Feng Shao, Qiuping Jiang, Xiangchao Meng, Yo-Sung Ho
2020, Vol. 14 (1), pp. 209-221
Author(s): Jia Li, Yifan Zhao, Weihua Ye, Kaiwen Yu, Shiming Ge

Author(s): Huiyu Duan, Guangtao Zhai, Xiongkuo Min, Yucheng Zhu, Yi Fang, ...

2020, Vol. 528, pp. 205-218
Author(s): Wei Zhou, Qiuping Jiang, Yuwang Wang, Zhibo Chen, Weiping Li

2020, Vol. 2020 (9), pp. 287-1-287-11
Author(s): Abderrezzaq Sendjasni, Mohamed-Chaker Larabi, Faouzi Alaya Cheikh

Subjective quality assessment remains the most reliable way to evaluate image quality, but it is tedious and costly. Objective quality evaluation therefore offers a trade-off by providing a computational approach to predicting image quality. Although a large literature exists on 2D image and video quality evaluation, the quality of 360-degree images is still under-explored, and one can question the efficiency of 2D quality metrics on this new type of content. To this end, we study the possible improvement of well-known 2D quality metrics using important features related to 360-degree content, i.e., equator bias and visual saliency. The performance evaluation is conducted on two databases containing various distortion types. The obtained results show a slight performance improvement, highlighting some problems inherently related to both the database content and the subjective evaluation approach used to obtain the observers' quality scores.
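To make the idea concrete, the sketch below weights a local 2D quality map (e.g., an SSIM map computed on the equirectangular projection) by an equator-bias term and an optional saliency map before pooling it into a single score. This is a minimal sketch, assuming a cosine-of-latitude model of equator bias and a precomputed saliency map; the function names and the multiplicative fusion are illustrative choices, not the authors' exact formulation.

```python
from typing import Optional
import numpy as np

def equator_bias_weights(height: int, width: int) -> np.ndarray:
    """Per-pixel weights for an equirectangular image, assuming equator
    bias follows the cosine of latitude: rows near the equator (image
    centre) weigh more than rows near the poles."""
    # Latitude of each row centre, from ~ +pi/2 (top) to ~ -pi/2 (bottom).
    lat = (0.5 - (np.arange(height) + 0.5) / height) * np.pi
    return np.repeat(np.cos(lat)[:, None], width, axis=1)  # shape (H, W)

def weighted_quality_score(quality_map: np.ndarray,
                           saliency_map: Optional[np.ndarray] = None) -> float:
    """Pool a local 2D quality map (e.g., an SSIM map) into one score,
    weighting each pixel by equator bias and, optionally, saliency."""
    h, w = quality_map.shape
    weights = equator_bias_weights(h, w)
    if saliency_map is not None:
        # Normalise saliency to [0, 1] and combine multiplicatively
        # (one simple choice; other fusion rules are possible).
        weights = weights * (saliency_map / (saliency_map.max() + 1e-12))
    return float((quality_map * weights).sum() / (weights.sum() + 1e-12))
```

Pooling with the combined weights emphasizes the regions viewers actually attend to, which is the intuition behind using both equator bias and saliency.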


Sensors, 2021, Vol. 21 (16), pp. 5322
Author(s): Jiu Jiang, Xianpei Wang, Bowen Li, Meng Tian, Hongtai Yao

Over the past few decades, video quality assessment (VQA) has become a valuable research field. Perceiving in-the-wild video quality without a reference is challenged mainly by hybrid distortions with dynamic variations and by the movement of the content. To address this, we propose a no-reference video quality assessment (NR-VQA) method that adds enhanced awareness of dynamic information to the perception of static objects. Specifically, we use convolutional networks of different dimensions to extract low-level static-dynamic fusion features from video clips and subsequently align them, followed by a temporal memory module consisting of recurrent neural network branches and fully connected (FC) branches that builds feature associations along the time series. Meanwhile, to simulate human visual habits, we build a parametric adaptive network structure to obtain the final score. We further validate the proposed method on four datasets (CVD2014, KoNViD-1k, LIVE-Qualcomm, and LIVE-VQC) to test its generalization ability. Extensive experiments demonstrate that the proposed method not only outperforms other NR-VQA methods in overall performance on mixed datasets but also achieves competitive performance on individual datasets compared to existing state-of-the-art methods.
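The sketch below outlines one plausible shape of such a pipeline in PyTorch: a shared 2D CNN extracts per-frame (static) features, a 3D CNN extracts clip-level (dynamic) features, the two are fused after temporal alignment, and a temporal memory module pairing a GRU branch with an FC branch regresses the quality score. All layer sizes and the GRU choice are assumptions for illustration; the paper's exact backbones and its parametric adaptive scoring structure are not reproduced here.

```python
import torch
import torch.nn as nn

class StaticDynamicNRVQA(nn.Module):
    """Sketch of the described pipeline: static (2D) and dynamic (3D)
    conv features are aligned per time step, fused, and fed to a
    temporal memory module with a recurrent branch and an FC branch."""

    def __init__(self, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        # Static branch: shared 2D conv applied frame by frame.
        self.static = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Dynamic branch: 3D conv over the clip, temporal length preserved.
        self.dynamic = nn.Sequential(
            nn.Conv3d(3, 32, 3, stride=(1, 2, 2), padding=1), nn.ReLU(),
            nn.Conv3d(32, feat_dim, 3, stride=(1, 2, 2), padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),
        )
        # Temporal memory: recurrent branch plus fully connected branch.
        self.gru = nn.GRU(2 * feat_dim, hidden, batch_first=True)
        self.fc_branch = nn.Linear(2 * feat_dim, hidden)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (B, 3, T, H, W)
        b, c, t, h, w = clip.shape
        frames = clip.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        fs = self.static(frames).reshape(b, t, -1)        # (B, T, D) static
        fd = self.dynamic(clip).squeeze(-1).squeeze(-1)   # (B, D, T)
        fd = fd.permute(0, 2, 1)                          # (B, T, D) dynamic
        fused = torch.cat([fs, fd], dim=-1)               # aligned fusion
        rnn_out, _ = self.gru(fused)                      # temporal memory
        fc_out = torch.relu(self.fc_branch(fused))
        per_step = self.head(torch.cat([rnn_out, fc_out], dim=-1))
        return per_step.mean(dim=1).squeeze(-1)           # one score per clip
```

For instance, StaticDynamicNRVQA()(torch.randn(2, 3, 16, 64, 64)) returns one predicted quality score per clip in the batch.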

