natural scene
Recently Published Documents

TOTAL DOCUMENTS: 864 (FIVE YEARS: 215)
H-INDEX: 46 (FIVE YEARS: 8)

2022 · Vol 54 (9) · pp. 1-36
Author(s): Xiongkuo Min, Ke Gu, Guangtao Zhai, Xiaokang Yang, Wenjun Zhang, ...

Screen content, which is often computer-generated, has many characteristics distinctly different from conventional camera-captured natural scene content. These differences pose major challenges for content quality assessment, which plays a critical role in ensuring and improving the final user-perceived quality of experience (QoE) in various screen content communication and networking systems. Quality assessment of screen content has attracted much attention recently, primarily because screen content has grown explosively with the prevalence of cloud and remote computing applications, and because conventional quality assessment methods cannot handle such content effectively. As the most technology-oriented part of QoE modeling, image/video content/media quality assessment has drawn wide attention from researchers, and a large body of work has been carried out to tackle the problem of screen content quality assessment. This article provides a systematic and timely review of this emerging research field, including (1) the background of natural scene vs. screen content quality assessment; (2) the characteristics of natural scene vs. screen content; (3) an overview of screen content quality assessment methodologies and measures; (4) relevant benchmarks and a comprehensive evaluation of the state of the art; (5) discussions on generalizing from screen content quality assessment to QoE assessment, and on other techniques beyond QoE assessment; and (6) unresolved challenges and promising future research directions. Throughout the article, we focus on the differences and similarities between screen content and conventional natural scene content. We expect this review to provide readers with an overview of the background, history, recent progress, and future of the emerging screen content quality assessment field.
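
For context on item (4), objective quality models in this area are conventionally benchmarked by correlating their predicted scores against subjective mean opinion scores (MOS), typically via SROCC and PLCC. A minimal illustrative sketch follows, with made-up numbers rather than data from any benchmark cited in the survey:

```python
# Hedged sketch of the conventional benchmarking protocol for objective
# quality models: correlate predictions with subjective mean opinion scores
# (MOS). The numbers below are made up, not from any cited benchmark.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos = np.array([1.2, 2.5, 3.1, 3.8, 4.6])        # subjective MOS per image
pred = np.array([0.30, 0.42, 0.55, 0.61, 0.80])  # model-predicted quality

srocc, _ = spearmanr(pred, mos)  # rank (monotonicity) agreement
plcc, _ = pearsonr(pred, mos)    # linear agreement (in practice often
                                 # computed after a fitted nonlinear mapping)
print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}")
```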


2021
Author(s): Khalil Boukthir, Abdulrahman M. Qahtani, Omar Almutiry, Habib Dhahri, Adel Alimi

- A novel approach is presented to reduce annotation effort, based on Deep Active Learning, for Arabic text detection in natural scene images (a generic sketch of such a loop follows the list).
- A new Arabic text image dataset, TSVD (7k images), collected using the Google Street View service.
- A new semi-automatic method for generating natural scene text images from the streets.
- Training samples are reduced to 1/5 of the original training size on average.
- Much less training data is needed to achieve a better Dice index (0.84).
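
The paper's own pipeline is not reproduced in this listing, so the following is only a generic uncertainty-sampling active-learning loop on synthetic data, meant to illustrate the annotation-reduction idea; the classifier, features, seed-set size, and query size are all placeholder assumptions:

```python
# Hypothetical sketch of an uncertainty-sampling active-learning loop on
# synthetic data; this is NOT the authors' model or dataset. Requires
# numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))           # stand-in for image features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for text/non-text labels

labeled = list(range(40))                 # small annotated seed set
pool = [i for i in range(len(X)) if i not in labeled]

for rnd in range(5):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    # Least-confidence scoring: query samples the model is most unsure about.
    uncertainty = 1.0 - proba.max(axis=1)
    query = np.argsort(uncertainty)[-20:]        # 20 most uncertain samples
    newly_labeled = [pool[i] for i in query]     # these go to human annotators
    labeled += newly_labeled
    pool = [i for i in pool if i not in newly_labeled]
    print(f"round {rnd}: {len(labeled)} labels, accuracy={clf.score(X, y):.3f}")
```

Each round spends the annotation budget only on the samples the current model finds hardest, which is how such approaches reach comparable accuracy with a fraction of the labels.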



Author(s): Houda Gaddour, Slim Kanoun, Nicole Vincent

Text in scene images can provide useful and vital information for content-based image analysis, so text detection and script identification in images are important tasks. In this paper, we propose a new method for text detection in natural scene images, particularly Arabic text, based on a bottom-up approach in which four principal steps can be highlighted. First, extremely stable and homogeneous regions of interest (ROIs) are detected using the proposed Color Stability and Homogeneity Regions (CSHR) technique. These regions are then labeled as textual or non-textual ROIs using a structural approach. The textual ROIs are next grouped into zones according to the spatial relations between them. Finally, the textual or non-textual nature of the constituted zones is refined, based both on handcrafted features and on features learned by a Convolutional Neural Network (CNN). The proposed method was evaluated on databases used for text detection in natural scene images: the competitions organized at the 2017 edition of the International Conference on Document Analysis and Recognition (ICDAR2017), the Urdu-text database, and our Natural Scene Image Database for Arabic Text detection (NSIDAT). The experimental results are promising.
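
CSHR is the authors' own technique and is not reproduced here; as a rough stand-in for the stable-region detection step only, the sketch below uses OpenCV's MSER (maximally stable extremal regions), a classic detector of stable homogeneous regions. The input filename is hypothetical:

```python
# Rough stand-in for the ROI-detection step only: CSHR is the authors' own
# technique, so OpenCV's MSER is used here purely as an analogous
# stable-region detector. "scene.jpg" is a hypothetical input image.
# Requires opencv-python.
import cv2

img = cv2.imread("scene.jpg")
assert img is not None, "provide a test image"
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)   # candidate stable ROIs

# The paper's later steps (textual/non-textual labeling, zone grouping,
# CNN-based refinement) would filter and merge these candidate boxes.
for (x, y, w, h) in bboxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("rois.jpg", img)
```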


2021 · Vol 13 (23) · pp. 4902
Author(s): Guanzhou Chen, Xiaoliang Tan, Beibei Guo, Kun Zhu, Puyun Liao, ...

Semantic segmentation is a fundamental task in remote sensing image analysis (RSIA). Fully convolutional networks (FCNs) have achieved state-of-the-art performance in semantic segmentation of natural scene images. However, due to the distinctive differences between natural scene images and remotely sensed (RS) images, FCN-based semantic segmentation methods from computer vision cannot achieve promising performance on RS images without modification. In previous work, we proposed the RS image semantic segmentation framework SDFCNv1, combined with a majority-voting postprocessing method. Nevertheless, it still has drawbacks, such as a small receptive field and a large number of parameters. In this paper, we propose SDFCNv2, an improved semantic segmentation framework based on SDFCNv1, to perform optimal semantic segmentation on RS images. We first construct a novel FCN model with hybrid basic convolutional (HBC) blocks and spatial-channel-fusion squeeze-and-excitation (SCFSE) modules, which has a larger receptive field and fewer parameters. We also put forward a spectral-specific stochastic-gamma-transform-based (SSSGT-based) data augmentation method applied during model training to improve the generalizability of our model. In addition, we design a mask-weighted voting decision fusion postprocessing algorithm for segmentation of overlarge RS images. We conducted several comparative experiments on two public datasets and a real surveying and mapping dataset. Extensive experimental results demonstrate that, compared with the SDFCNv1 framework, our SDFCNv2 framework increases the mIoU metric by up to 5.22% while using only about half the parameters.
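
The exact SSSGT formulation is the authors'; the sketch below only illustrates the presumable core idea of drawing an independent random gamma per spectral band, with the gamma range and band count as placeholder assumptions:

```python
# Minimal sketch of the presumable core of SSSGT-style augmentation: an
# independent random gamma transform per spectral band. The exact SSSGT
# formulation is the authors'; the gamma range and band count here are
# placeholder assumptions. Expects a float image in [0, 1], shape (H, W, C).
import numpy as np

def per_band_random_gamma(img, gamma_range=(0.5, 2.0), rng=None):
    if rng is None:
        rng = np.random.default_rng()
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        gamma = rng.uniform(*gamma_range)    # one stochastic gamma per band
        out[..., c] = img[..., c] ** gamma   # standard gamma transform
    return out

patch = np.random.default_rng(0).random((256, 256, 4))  # e.g. a 4-band RS patch
augmented = per_band_random_gamma(patch)
```

Varying each band independently perturbs spectral signatures without destroying spatial structure, which is plausibly why such augmentation helps segmentation models generalize across sensors and acquisition conditions.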


2021
Author(s): Jinfei Shen, Jianqi Li, Binfang Cao, Wei Li, Lijuan Huang, ...

2021
Author(s): Paul Linton

Human 3D vision is thought to triangulate the size, distance, direction, and 3D shape of objects using vision from the two eyes. But all four of these capacities rely on the visual system knowing where the eyes are pointing. Dr Linton's experimental work on size and distance challenges this account, suggesting a purely retinal account of visual size and distance, and likely of direction and 3D shape as well. This requires new accounts of visual scale and visual shape. For visual scale, he argues that observers rely on natural scene statistics to associate accentuated stereo depth (largely from horizontal disparities) with closer distances. This implies that depth/shape is resolved before size and distance. For visual shape, he argues that depth/shape from the two eyes is the solution to a different problem (eradicating rivalry between two retinal images treated as if they came from the same viewpoint), rather than the visual system attempting to infer scene geometry (by treating the two retinal images as two different views of the same scene from different viewpoints). Dr Linton also draws upon his book, which questions whether other depth cues (perspective, shading, motion) really have any influence on this process.
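
For reference, the triangulation relation the opening sentence alludes to can be written under standard pinhole-stereo assumptions (this is textbook background, not a formula from the abstract itself); larger disparity for a given focal length and baseline implies a nearer object, consistent with the scene-statistics association described above:

```latex
% Standard pinhole-stereo triangulation (background, not from the abstract):
% f = focal length, B = interocular baseline, Z = object distance,
% d = horizontal disparity.
d = \frac{f B}{Z}
\quad\Longleftrightarrow\quad
Z = \frac{f B}{d}
```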

