BASI: a new index to extract built-up areas from high-resolution remote sensing images by visual attention model

2014 ◽  
Vol 5 (4) ◽  
pp. 305-314 ◽  
Author(s):  
Zhenfeng Shao ◽  
Yingjie Tian ◽  
Xiaole Shen

2015 ◽  
Vol 18 (2) ◽  
pp. 541-548 ◽  
Author(s):  
Xiaolu Song ◽  
Guojin He ◽  
Zhaoming Zhang ◽  
Tengfei Long ◽  
Yan Peng ◽  
...  

2019 ◽  
Vol 11 (8) ◽  
pp. 987 ◽  
Author(s):  
Yan Peng ◽  
Zhaoming Zhang ◽  
Guojin He ◽  
Mingyue Wei

An improved GrabCut method based on a visual attention model is proposed to extract rare-earth ore mining area information from high-resolution remote sensing images. The proposed method combines the advantages of the visual attention model and the GrabCut method: the visual attention model generates a saliency map that initializes GrabCut, replacing manual initialization. The Normalized Difference Vegetation Index (NDVI) was added as a boundary term to the GrabCut energy function to further improve segmentation accuracy. The approach was applied to extract rare-earth ore mining areas in Dingnan County and Xunwu County, China, using GF-1 (GaoFen No.1 satellite launched by China) and ALOS (Advanced Land Observation Satellite) high-resolution satellite data. Experimental results showed that the False Positive Rate (FPR) and False Negative Rate (FNR) were below 12.5% and 6.5%, respectively, and that Pixel Accuracy (PA), Mean Pixel Accuracy (MPA), Mean Intersection over Union (MIoU), and Frequency Weighted Intersection over Union (FWIoU) all reached 90% or higher in four experiments. Comparisons with traditional classification methods, such as object-oriented CART (Classification and Regression Tree) and object-oriented SVM (Support Vector Machine), indicated that the proposed method identifies object boundaries more accurately. The proposed method could therefore support accurate, automatic information extraction for rare-earth ore mining areas.
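The metrics reported in this abstract (PA, MPA, MIoU, FWIoU, FPR, FNR) follow standard definitions and can all be derived from a confusion matrix. The sketch below uses the usual formulations for a binary mining/background labeling; it is an illustrative implementation, not the paper's code.

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes=2):
    """Standard segmentation metrics from a confusion matrix.

    pred, gt: integer label arrays of the same shape (0 = background,
    1 = mining area). Illustrative sketch using the conventional
    PA/MPA/MIoU/FWIoU definitions.
    """
    pred = np.asarray(pred).ravel()
    gt = np.asarray(gt).ravel()
    # confusion[i, j] = number of pixels with ground truth i predicted as j
    confusion = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(confusion, (gt, pred), 1)

    tp = np.diag(confusion).astype(float)
    per_class_gt = confusion.sum(axis=1).astype(float)    # row sums
    per_class_pred = confusion.sum(axis=0).astype(float)  # column sums
    total = confusion.sum()

    pa = tp.sum() / total                            # Pixel Accuracy
    mpa = np.mean(tp / per_class_gt)                 # Mean Pixel Accuracy
    iou = tp / (per_class_gt + per_class_pred - tp)  # per-class IoU
    miou = np.mean(iou)                              # Mean IoU
    fwiou = np.sum(per_class_gt / total * iou)       # Frequency Weighted IoU

    # Binary FPR / FNR, treating class 1 as the positive (mining) class
    fp, fn = confusion[0, 1], confusion[1, 0]
    fpr = fp / (fp + confusion[0, 0])
    fnr = fn / (fn + confusion[1, 1])
    return dict(PA=pa, MPA=mpa, MIoU=miou, FWIoU=fwiou, FPR=fpr, FNR=fnr)
```

For example, a prediction that mislabels one background pixel out of four yields PA = 0.75 and a binary FPR of 0.5 with FNR of 0.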


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Linyi Li ◽  
Tingbao Xu ◽  
Yun Chen

In recent years, the spatial resolution of remote sensing images has improved greatly. However, higher spatial resolution does not always yield better automatic scene classification. Visual attention is an important characteristic of the human visual system and can effectively aid the classification of remote sensing scenes. In this study, a novel visual attention feature extraction algorithm is proposed that extracts visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) is developed for high-resolution remote sensing scene classification. FC-VAF was evaluated on scenes from widely used high-resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 imagery, and achieved more accurate classification results than the reference methods according to quantitative accuracy evaluation indices. The role and impact of different decomposition levels and wavelets on classification accuracy are also discussed. FC-VAF improves the accuracy of high-resolution scene classification and thereby advances digital image analysis and the applications of high-resolution remote sensing images.
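The multiscale extraction step can be pictured as building an image pyramid and recording a contrast statistic at each level. The paper's actual algorithm is wavelet-based; the sketch below only illustrates the general multiscale idea with a simple pyramid and a crude center-surround contrast, and all names here are assumptions.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (one pyramid step)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def attention_features(gray, levels=3):
    """Hypothetical multiscale visual-attention feature vector.

    Records, per pyramid level, the mean contrast against the global
    mean (a crude stand-in for center-surround saliency). This is a
    sketch of the multiscale idea only, not the paper's wavelet method.
    """
    feats = []
    level = np.asarray(gray, dtype=float)
    for _ in range(levels):
        surround = level.mean()                    # crude "surround" estimate
        feats.append(np.abs(level - surround).mean())
        level = downsample(level)
    return np.array(feats)
```

A uniform image produces zero contrast at every level, whereas textured scenes yield level-dependent responses that can feed a downstream classifier.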


Author(s):  
Z. F. Shao ◽  
W. X. Zhou ◽  
Q. M. Cheng

Low-level features tend to produce unsatisfactory results in remote sensing image retrieval because of the semantic gap. To improve retrieval precision, a visual attention model is used to extract salient objects from an image according to their saliency. Color and texture features are then extracted from the salient objects and used as feature vectors for image retrieval. Experimental results demonstrate that the proposed method improves retrieval results and achieves higher precision.
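One way to realize the salient-object feature step is to threshold a saliency map and build a color histogram over only the salient pixels, then rank images by histogram similarity. The sketch below assumes images and saliency maps normalized to [0, 1]; the function names, threshold, and bin count are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def salient_color_histogram(img, saliency, thresh=0.5, bins=8):
    """Normalized per-channel color histogram over salient pixels only.

    img: H x W x 3 array in [0, 1]; saliency: H x W map in [0, 1].
    Pixels with saliency above `thresh` form the salient object.
    """
    mask = saliency > thresh
    hist = []
    for c in range(3):
        h, _ = np.histogram(img[..., c][mask], bins=bins, range=(0, 1))
        hist.append(h)
    hist = np.concatenate(hist).astype(float)
    return hist / max(hist.sum(), 1.0)            # normalize to sum 1

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] between normalized histograms; higher is more similar."""
    return np.minimum(h1, h2).sum()
```

At query time, the query image's salient histogram is compared against each database histogram and results are returned in decreasing order of intersection score.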


2009 ◽  
Vol 20 (12) ◽  
pp. 3240-3253 ◽  
Author(s):  
Guo-Min ZHANG ◽  
Jian-Ping YIN ◽  
En ZHU ◽  
Ling MAO
