Parallel approach to expedite morphological feature extraction of remote sensing images for CBIR system

Author(s):  
Sandeep Kumar ◽  
Swati Jain ◽  
Tanish Zaveri
2020 ◽  
Vol 13 (1) ◽  
pp. 71
Author(s):  
Zhiyong Xu ◽  
Weicun Zhang ◽  
Tianxiang Zhang ◽  
Jiangyun Li

Semantic segmentation is a significant method in remote sensing image (RSI) processing and has been widely used in various applications. Conventional convolutional neural network (CNN)-based semantic segmentation methods are likely to lose spatial information in the feature extraction stage and usually pay little attention to global context information. Moreover, imbalanced category scales and uncertain boundary information coexist in RSIs, which also pose a challenge to the semantic segmentation task. To overcome these problems, a high-resolution context extraction network (HRCNet) based on the high-resolution network (HRNet) is proposed in this paper. In this approach, the HRNet structure is adopted to preserve spatial information. Moreover, a light-weight dual attention (LDA) module is designed to obtain global context information in the feature extraction stage, and a feature enhancement feature pyramid (FEFP) structure is employed to fuse contextual information at different scales. In addition, to capture boundary information, we design a boundary aware (BA) module combined with a boundary aware loss (BAloss) function. The experimental results on the Potsdam and Vaihingen datasets show that the proposed approach significantly improves boundary and segmentation performance, reaching overall accuracy scores of 92.0% and 92.3%, respectively. It is therefore envisaged that the proposed HRCNet model will be advantageous for remote sensing image segmentation.
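The abstract does not give the internals of the LDA module, but the channel-attention half of a typical dual-attention block can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation; the function name and residual form are assumptions.

```python
import numpy as np

def channel_attention(x):
    """Channel-attention half of a dual-attention block (illustrative sketch).
    x: feature map of shape (C, H, W)."""
    c, h, w = x.shape
    flat = x.reshape(c, h * w)                            # (C, N) per-channel responses
    energy = flat @ flat.T                                # (C, C) channel affinity
    energy -= energy.max(axis=-1, keepdims=True)          # stabilize the softmax
    attn = np.exp(energy)
    attn /= attn.sum(axis=-1, keepdims=True)              # row-wise softmax over channels
    out = attn @ flat                                     # re-weight channels by affinity
    return x + out.reshape(c, h, w)                       # residual connection
```

A spatial-attention branch would do the analogous computation over the N = H×W positions instead of the C channels; "light-weight" variants typically reduce C or N before the affinity product to cut the quadratic cost.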


2020 ◽  
Vol 9 (4) ◽  
pp. 256 ◽  
Author(s):  
Liguo Weng ◽  
Yiming Xu ◽  
Min Xia ◽  
Yonghong Zhang ◽  
Jia Liu ◽  
...  

Changes in lakes and rivers are of great significance for the study of global climate change, and accurate segmentation of lakes and rivers is critical to the study of their changes. However, traditional water area segmentation methods almost all share the following deficiencies: high computational requirements, poor generalization performance, and low extraction accuracy. In recent years, semantic segmentation algorithms based on deep learning have been emerging. Addressing the problems of a very large number of parameters, low accuracy, and network degradation during the training process, this paper proposes a separable residual SegNet (SR-SegNet) to perform water area segmentation using remote sensing images. On the one hand, without compromising the ability of feature extraction, the problem of network degradation is alleviated by adding modified residual blocks to the encoder, the number of parameters is limited by introducing depthwise separable convolutions, and feature extraction is improved by using dilated convolutions to expand the receptive field. On the other hand, SR-SegNet removes the convolutional layers with relatively many convolution kernels in the encoding stage and uses a cascading method to fuse the low-level and high-level features of the image. As a result, the whole network can obtain more spatial information. Experimental results show that the proposed method exhibits significant improvements over several traditional methods, including FCN, DeconvNet, and SegNet.
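The parameter savings that the abstract attributes to depthwise separable convolutions can be made concrete by counting weights. The layer sizes below are hypothetical, chosen only to illustrate the ratio:

```python
def conv_params(c_in, c_out, k):
    # standard convolution: every output channel mixes all input channels
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    # depthwise step (one k x k filter per input channel)
    # followed by a 1 x 1 pointwise mixing convolution
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)        # 64*128*3*3 = 73728 weights
separable = separable_params(64, 128, 3)  # 64*3*3 + 64*128 = 8768 weights
```

For a 3×3 kernel this is roughly an 8–9× reduction, which is why separable convolutions are a standard way to limit parameter count in encoder backbones.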


2019 ◽  
Vol 8 (3) ◽  
pp. 8881-8884

These are days when we are very rich in data and poor in information, and this is especially true of image data. Whether for ordinary images or satellite images, the collections are huge, but making use of those images receives far less attention. Extracting features from large images is a challenging and compute-intensive task, but when realized it is very fruitful. CBIR (Content Based Image Retrieval), when applied to HRRS (High Resolution Remote Sensing) images, can yield effective results.
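At its core, CBIR ranks database images by the similarity of their feature vectors to a query's features. A minimal sketch of that retrieval step, assuming features have already been extracted (the vectors and names here are purely illustrative, not from the paper):

```python
import numpy as np

def retrieve(query_vec, database, top_k=3):
    """Rank database images by cosine similarity to a query feature vector.
    database: dict mapping image name -> feature vector
    (e.g. a morphological or texture descriptor)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scored = sorted(((cos(query_vec, v), name) for name, v in database.items()),
                    reverse=True)
    return [name for _, name in scored[:top_k]]
```

Because the ranking loop touches every stored vector independently, this step (like the feature extraction itself) parallelizes naturally across images, which is the bottleneck the title's "parallel approach" targets.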


Author(s):  
Akey Sungheetha ◽  
Rajesh Sharma R

Over the last decade, remote sensing technology has advanced dramatically, resulting in significant improvements in image quality, data volume, and application usage. These images have essential applications since they can support quick and easy interpretation. Many standard detection algorithms fail to accurately categorize a scene from a remote sensing image recorded from the earth. A method that uses bilinear convolutional neural networks to produce a lighter-weight set of models results in better visual recognition in remote sensing images using fine-grained techniques. The proposed hybrid method extracts scene feature information twice from remote sensing images for improved recognition. In layman's terms, these features are defined as raw and have only a single defined frame, so they allow basic recognition from remote sensing images. This research work proposes a double feature extraction hybrid deep learning approach to classify remotely sensed image scenes based on feature abstraction techniques. The proposed algorithm is also applied to feature values in order to convert them to feature vectors with pure black and white values after many product operations. The next stage is pooling and normalization, which occurs after the CNN feature extraction process. This research work develops a novel hybrid framework that achieves a better level of accuracy and recognition rate than prior models.
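The bilinear pooling at the heart of a bilinear CNN combines two feature streams by an outer product averaged over spatial locations, followed by signed square-root and L2 normalization. A numpy sketch of that standard operation (the paper's exact variant may differ):

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Bilinear pooling of two feature streams (illustrative sketch).
    fa: (C_a, N) and fb: (C_b, N) features at N spatial locations."""
    b = fa @ fb.T / fa.shape[1]              # (C_a, C_b) location-averaged outer product
    v = b.reshape(-1)                        # flatten to a single descriptor
    v = np.sign(v) * np.sqrt(np.abs(v))      # signed square-root normalization
    return v / (np.linalg.norm(v) + 1e-12)   # L2 normalization
```

The resulting C_a × C_b descriptor captures pairwise feature interactions, which is what makes bilinear models effective for fine-grained recognition.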


2020 ◽  
Vol 12 (2) ◽  
pp. 205 ◽  
Author(s):  
Moyang Wang ◽  
Kun Tan ◽  
Xiuping Jia ◽  
Xue Wang ◽  
Yu Chen

Information extraction from multi-sensor remote sensing images has increasingly attracted attention with the development of remote sensing sensors. In this study, a supervised change detection method based on a deep Siamese convolutional network with a hybrid convolutional feature extraction module (OB-DSCNH) is proposed for multi-sensor images. The proposed architecture, which is based on dilated convolution, can extract deep change features effectively, and its "network in network" character increases the depth and width of the network while keeping the computational budget constant. A change decision model is utilized to detect changes through the difference of the extracted features. Finally, a change detection map is obtained via an uncertainty analysis, which combines multi-resolution segmentation with the output of the Siamese network. To validate the effectiveness of the proposed approach, we conducted experiments on multispectral images collected by the ZY-3 and GF-2 satellites. Experimental results demonstrate that our proposed method achieves competitive or better performance than mainstream methods in multi-sensor image change detection.
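The decision step of a Siamese change detector reduces to comparing the deep features of the two dates pixel by pixel. A minimal sketch of that comparison, assuming the features are already extracted and spatially aligned (the thresholding rule here is a simplification of the paper's change decision model):

```python
import numpy as np

def change_map(feat_t1, feat_t2, threshold=0.5):
    """Binary change map from two co-registered feature maps (sketch).
    feat_t1, feat_t2: (C, H, W) deep features of the two acquisition dates."""
    dist = np.linalg.norm(feat_t1 - feat_t2, axis=0)     # per-pixel L2 feature distance
    span = dist.max() - dist.min()
    dist = (dist - dist.min()) / (span + 1e-12)          # normalize to [0, 1]
    return (dist > threshold).astype(np.uint8)           # 1 = changed, 0 = unchanged
```

In the paper's pipeline this pixel-wise decision is additionally regularized by object boundaries from multi-resolution segmentation, which suppresses isolated false alarms.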

