Non-linear Sorenson–Dice Exemplar Image Inpainting Based Bayes Probability for Occlusion Removal in Remote Traffic Control

Author(s): P. L. Arun, R Mathusoothana S Kumar

Abstract: Occlusion removal is a significant problem to be resolved in remote traffic control systems to enhance road safety. However, conventional techniques do not recognize traffic signs well because the vehicles are occluded. Moreover, existing techniques could not perform occlusion removal in a short amount of time. To overcome these limitations, the Non-linear Gaussian Bilateral Filtered Sorenson–Dice Exemplar Image Inpainting Based Bayes Conditional Probability (NGBFSEII-BCP) method is proposed. Initially, a number of remote sensing images are taken as input from the Highway Traffic Dataset. Then, the NGBFSEII-BCP method applies the Non-Linear Gaussian Bilateral Filtering (NGBF) algorithm to remove noise pixels from the input images. After preprocessing, the Sorenson–Dice exemplar image inpainting step of the NGBFSEII-BCP method is used to remove the occlusions in the input images. Finally, the NGBFSEII-BCP method applies Bayes conditional probability to determine the operation status and thereby achieves higher road safety using remote sensing images. The method is evaluated in simulation using metrics such as peak signal-to-noise ratio, computational time, and detection accuracy. The simulation results illustrate that the NGBFSEII-BCP method increases detection accuracy by 20% and reduces computation time by 32% as compared to state-of-the-art works.
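The NGBF preprocessing step is not specified beyond its name; as a rough illustration of the kind of filter it builds on, here is a minimal pure-Python sketch of a standard Gaussian bilateral filter, which smooths noise while preserving edges (the kernel radius and sigma values are illustrative, not the paper's):

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=25.0):
    """Minimal Gaussian bilateral filter for a 2-D grayscale image (list of lists).

    Each output pixel is a weighted mean of its neighbours, where the weight
    combines spatial closeness (sigma_s) and intensity similarity (sigma_r),
    so flat regions are smoothed while sharp edges survive.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        g_s = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        diff = img[ny][nx] - img[y][x]
                        g_r = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        num += g_s * g_r * img[ny][nx]
                        den += g_s * g_r
            out[y][x] = num / den
    return out
```

With a small sigma_r, pixels across a strong intensity edge get near-zero weight, which is why this family of filters denoises without blurring the structures that later inpainting and detection stages rely on.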

Author(s): Leijin Long, Feng He, Hongjiang Liu

Abstract: In order to monitor the high-level landslides that frequently occur in the Jinsha River area of Southwest China, and to protect the lives and property of people in mountainous areas, satellite remote sensing image data are combined with various landslide-inducing factors and transformed into landslide influence factors, which provides the data basis for establishing a landslide detection model. Then, based on the deep belief network (DBN) and convolutional neural network (CNN) algorithms, two landslide detection models, DBN and convolutional neural-deep belief network (CDN), are established to monitor the high-level landslides in the Jinsha River area. The influence of the model parameters on the landslide detection results is analyzed, and the accuracy of the DBN and CDN models in dealing with actual landslide problems is compared. The results show that when the number of neurons in the DBN is 100, the overall error is minimal, and when the number of learning layers is 3, the classification error is minimal. The detection accuracy of DBN and CDN is 97.56% and 97.63%, respectively, which indicates that both models are feasible for detecting landslides from remote sensing images. This exploration provides a reference for the study of high-level landslide disasters in the Jinsha River area.


2019, Vol 8 (9), pp. 390
Author(s): Kun Zheng, Mengfei Wei, Guangmin Sun, Bilal Anas, Yu Li

Vehicle detection based on very high-resolution (VHR) remote sensing images is beneficial in many fields such as military surveillance, traffic control, and social/economic studies. However, the intricate details about vehicles and the surrounding background provided by VHR images require sophisticated analysis based on massive data samples, whereas the amount of reliably labeled training data is limited. In practice, data augmentation is often leveraged to resolve this conflict. The traditional data augmentation strategy uses combinations of rotation, scaling, flipping, and similar transformations, and has limited ability to capture the essence of the feature distribution or to improve data diversity. In this study, we propose a learning method named Vehicle Synthesis Generative Adversarial Networks (VS-GANs) to generate annotated vehicles from remote sensing images. The proposed framework has one generator and two discriminators, which try to synthesize realistic vehicles and learn the background context simultaneously. The method can quickly generate high-quality annotated vehicle data samples and greatly helps in the training of vehicle detectors. Experimental results show that the proposed framework can synthesize vehicles and their background images with variations and different levels of detail. Compared with traditional data augmentation methods, the proposed method significantly improves the generalization capability of vehicle detectors. Finally, the contribution of VS-GANs to vehicle detection in VHR remote sensing images was demonstrated in experiments conducted on the UCAS-AOD and NWPU VHR-10 datasets using up-to-date target detection frameworks.


2020, Vol 2020, pp. 1-9
Author(s): Liang Huang, Qiuzhi Peng, Xueqin Yu

In order to improve the change detection accuracy of multitemporal high spatial resolution remote-sensing (HSRRS) images, a change detection method based on saliency detection and spatial intuitionistic fuzzy C-means (SIFCM) clustering is proposed. Firstly, the cluster-based saliency cue method is used to obtain the saliency maps of the two temporal remote-sensing images; then, the saliency difference is obtained by subtracting the two saliency maps; finally, the SIFCM clustering algorithm is used to classify the saliency difference image into changed and unchanged regions. Two data sets of multitemporal high spatial resolution remote-sensing images are selected as the experimental data. The detection accuracy of the proposed method on the two data sets is 96.17% and 97.89%, respectively. The results show that the proposed method is a feasible and better-performing change detection method for multitemporal remote-sensing images.
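The saliency-difference-plus-clustering pipeline can be sketched in miniature. The snippet below substitutes a plain 1-D two-means split for the paper's SIFCM clustering (whose details are not given in this abstract), so it only illustrates the overall flow: subtract saliency maps, then cluster the differences into changed and unchanged pixels:

```python
def change_map(sal_t1, sal_t2, iters=20):
    """Toy change detector: absolute saliency difference per pixel, then a
    1-D two-means clustering (a simple stand-in for SIFCM) splits pixels
    into label 1 (changed) and label 0 (unchanged)."""
    diff = [abs(a - b) for a, b in zip(sal_t1, sal_t2)]
    c0, c1 = min(diff), max(diff)  # initial cluster centres
    for _ in range(iters):
        lo = [d for d in diff if abs(d - c0) <= abs(d - c1)]
        hi = [d for d in diff if abs(d - c0) > abs(d - c1)]
        if lo:
            c0 = sum(lo) / len(lo)
        if hi:
            c1 = sum(hi) / len(hi)
    # label 1 = closer to the larger centre, i.e. a large saliency change
    return [1 if abs(d - c1) < abs(d - c0) else 0 for d in diff]
```

Pixels whose saliency barely moved between the two dates cluster around the small centre and are labelled unchanged; fuzzy and spatial variants like SIFCM refine exactly this assignment step with membership degrees and neighbourhood context.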


2020, Vol 12 (20), pp. 3316
Author(s): Yulian Zhang, Lihong Guo, Zengfa Wang, Yang Yu, Xinwei Liu, ...

Intelligent detection and recognition of ships from high-resolution remote sensing images is an extraordinarily useful task in civil and military reconnaissance. It is difficult to detect ships with high precision because various disturbances are present at sea, such as clouds, mist, islands, coastlines, and ripples. To solve this problem, we propose a novel ship detection network based on multi-layer convolutional feature fusion (CFF-SDN). Our ship detection network consists of three parts. Firstly, the convolutional feature extraction network is used to extract ship features at different levels. Residual connections are introduced so that the model can be made very deep while remaining easy to train and converge. Secondly, the proposed network fuses fine-grained features from shallow layers with semantic features from deep layers, which is beneficial for detecting ship targets of different sizes and helps improve the localization and detection accuracy for small objects. Finally, multiple fused feature maps are used for classification and regression, which can adapt to ships of multiple scales. Since the CFF-SDN model uses a pruning strategy, the detection speed is greatly improved. In the experiments, we create a dataset for ship detection in remote sensing images (DSDR), including actual satellite images from Google Earth and aerial images from an electro-optical pod. The DSDR dataset contains not only visible-light images but also infrared images. To improve robustness to various sea scenes, images at different scales, perspectives, and illuminations are obtained through data augmentation or affine transformation. To reduce the influence of atmospheric absorption and scattering, a dark channel prior is adopted to perform atmospheric correction on the sea scenes. Moreover, soft non-maximum suppression (NMS) is introduced to increase the recall rate for densely arranged ships. In addition, better detection performance is observed in comparison with existing models in terms of precision and recall. The experimental results show that the proposed model achieves superior ship detection performance in optical remote sensing images.
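The soft-NMS step mentioned above can be sketched as follows. This is a generic Gaussian-decay variant, not necessarily the paper's exact configuration; boxes are assumed to be (x1, y1, x2, y2) corner tuples and sigma is illustrative:

```python
import math

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: instead of discarding boxes that overlap a
    higher-scoring box (which drops densely packed ships), decay their
    scores by exp(-iou^2 / sigma) and keep those above score_thresh."""
    dets = sorted(zip(boxes, scores), key=lambda d: -d[1])
    keep = []
    while dets:
        best, best_s = dets.pop(0)
        keep.append((best, best_s))
        rescored = []
        for box, s in dets:
            s *= math.exp(-iou(best, box) ** 2 / sigma)
            if s > score_thresh:
                rescored.append((box, s))
        dets = sorted(rescored, key=lambda d: -d[1])
    return keep
```

For two ships moored side by side, hard NMS would delete the second detection outright; soft-NMS merely lowers its score, which is why it raises recall in dense scenes.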


2019, Vol 11 (3), pp. 286
Author(s): Jiangqiao Yan, Hongqi Wang, Menglong Yan, Wenhui Diao, Xian Sun, ...

Recently, methods based on the Faster region-based convolutional neural network (R-CNN) have been popular for multi-class object detection in remote sensing images due to their outstanding detection performance. These methods generally propose candidate regions of interest (ROIs) through a region proposal network (RPN), and the regions with sufficiently high intersection-over-union (IoU) values against the ground truth are treated as positive samples for training. In this paper, we find that the detection results of such methods are sensitive to the choice of IoU threshold. Specifically, detection performance on small objects is poor when a normal, higher threshold is chosen, while a lower threshold results in poor localization accuracy caused by a large number of false positives. To address these issues, we propose a novel IoU-Adaptive Deformable R-CNN framework for multi-class object detection. Specifically, by analyzing the different roles that IoU can play in different parts of the network, we propose an IoU-guided detection framework to reduce the loss of small-object information during training. Besides, an IoU-based weighted loss is designed, which learns the IoU information of positive ROIs to improve the detection accuracy effectively. Finally, class aspect ratio constrained non-maximum suppression (CARC-NMS) is proposed, which further improves the precision of the results. Extensive experiments validate the effectiveness of our approach, and we achieve state-of-the-art detection performance on the DOTA dataset.
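The IoU-threshold sensitivity described above is easy to see numerically: the same 3-pixel localization shift leaves a large object above a 0.5 IoU threshold but pushes a small object far below it, so the small object never becomes a positive training sample. A minimal sketch:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Identical 3-pixel horizontal shift of the proposal against the ground truth:
big = iou((0, 0, 10, 10), (3, 0, 13, 10))    # 10x10 object: 70/130 ~ 0.54
small = iou((0, 0, 4, 4), (3, 0, 7, 4))      # 4x4 object:   4/28  ~ 0.14
```

At a fixed threshold of 0.5 the large object survives as a positive sample while the small one is discarded, which is exactly the imbalance an IoU-adaptive sampling scheme targets.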


Sensors, 2019, Vol 19 (23), pp. 5270
Author(s): Yantian Wang, Haifeng Li, Peng Jia, Guo Zhang, Taoyang Wang, ...

Deep learning-based aircraft detection methods have been increasingly implemented in recent years. However, due to multi-resolution imaging modes, aircraft in different images show very wide diversity in size, view, and other visual features, which brings great challenges to detection. Although standard deep convolutional neural networks (DCNNs) can extract rich semantic features, they destroy bottom-level location information. The features of small targets may also be submerged by redundant top-level features, resulting in poor detection. To address these problems, we propose a compact multi-scale dense convolutional neural network (MS-DenseNet) for aircraft detection in remote sensing images. Herein, DenseNet is utilized for feature extraction, which enhances the propagation and reuse of bottom-level high-resolution features. Subsequently, we combine a feature pyramid network (FPN) with DenseNet to form an MS-DenseNet for learning multi-scale features, especially the features of small objects. Finally, by compressing some of the unnecessary convolution layers of each dense block, we design three new compact architectures: MS-DenseNet-41, MS-DenseNet-65, and MS-DenseNet-77. Comparative experiments show that the compact MS-DenseNet-65 obtains a noticeable improvement in detecting small aircraft and achieves state-of-the-art performance, with a recall of 94% and an F1-score of 92.7%, at a lower computational cost. Furthermore, robustness experiments on the UCAS-AOD and RSOD datasets also indicate the good transferability of our method.
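The "compression" of dense blocks mainly changes channel bookkeeping: in a dense block, every layer receives the concatenation of all preceding feature maps and appends growth-rate new channels, so dropping layers directly narrows the block. A small sketch of that arithmetic (the growth rate of 32 is DenseNet's common default, not a figure taken from this abstract):

```python
def dense_block_channels(c_in, num_layers, growth_rate):
    """Output channel count of a DenseNet dense block: each layer consumes
    the concatenation of everything before it and contributes growth_rate
    new channels, which is how bottom-level features keep propagating."""
    channels = c_in
    for _ in range(num_layers):
        channels += growth_rate
    return channels
```

Removing two layers from a block with growth rate 32 saves 64 channels in every downstream concatenation, which is where the compact MS-DenseNet variants recover their speed.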


2019, Vol 11 (19), pp. 2235
Author(s): Han, Kim, Yeom

A large number of evenly distributed conjugate points (CPs) in the entirely overlapping regions of the images are required to achieve successful co-registration between very-high-resolution (VHR) remote sensing images. The CPs are then used to construct a non-linear transformation model that locally warps a sensed image to a reference image's coordinates. Piecewise linear (PL) transformation is widely exploited for warping VHR images because of its superior performance compared to other methods. The PL transformation constructs triangular regions on a sensed image from the CPs by applying the Delaunay algorithm, after which the corresponding triangular regions in a reference image are constructed using the same CPs. Each region in the sensed image is then locally warped to the corresponding region of the reference image through an affine transformation estimated from the CPs at the triangle vertices. The warping performance of the PL transformation is reliable, particularly in regions inside the triangles, i.e., within the convex hull. However, the regions outside the triangles, which are warped by extrapolating boundary planes extended from CPs located close to those regions, incur severe geometric distortion. In this study, we propose an effective approach that focuses on improving the warping performance of the PL transformation over the area external to the triangles. Accordingly, the proposed improved piecewise linear (IPL) transformation uses additional pseudo-CPs intentionally extracted from positions on the boundary of the sensed image. The corresponding pseudo-CPs on the reference image are determined by estimating the affine transformation from CPs located close to the pseudo-CPs. The pseudo-CPs are then used together with the original CPs to construct accordingly enlarged triangular regions. 
Experiments on both simulated and real datasets, constructed from Worldview-3 and Kompsat-3A satellite images, were conducted to validate the effectiveness of the proposed IPL transformation. The IPL transformation was shown to outperform existing linear/non-linear transformation models such as the affine, third- and fourth-order polynomial, local weighted mean, and PL transformations. Moreover, we demonstrated that the IPL transformation improved the warping performance over the PL transformation outside the triangular regions, increasing the correlation coefficient values from 0.259 to 0.304, 0.603 to 0.657, and 0.180 to 0.338 in the first, second, and third real datasets, respectively.
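The core operation of the PL (and IPL) transformation, warping each Delaunay triangle by an affine transform estimated from its three CP vertices, can be sketched as follows. This is a minimal Cramer's-rule solver, not the authors' implementation:

```python
def affine_from_triangle(src, dst):
    """Estimate the 2-D affine map taking three source CPs (one Delaunay
    triangle's vertices) onto three destination CPs, via Cramer's rule.
    Returns ((a, b, c), (d, e, f)) with x' = a*x + b*y + c, y' = d*x + e*y + f."""
    (x0, y0), (x1, y1), (x2, y2) = src
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)  # non-degenerate triangle
    rows = []
    for k in (0, 1):  # k = 0 solves for x', k = 1 for y'
        v0, v1, v2 = dst[0][k], dst[1][k], dst[2][k]
        a = ((v1 - v0) * (y2 - y0) - (v2 - v0) * (y1 - y0)) / det
        b = ((x1 - x0) * (v2 - v0) - (x2 - x0) * (v1 - v0)) / det
        rows.append((a, b, v0 - a * x0 - b * y0))
    return tuple(rows)

def apply_affine(coeffs, p):
    """Apply the estimated affine transform to a point inside the triangle."""
    (a, b, c), (d, e, f) = coeffs
    return (a * p[0] + b * p[1] + c, d * p[0] + e * p[1] + f)
```

Inside the convex hull every pixel falls in some triangle and gets a well-constrained affine warp; the IPL idea is to add boundary pseudo-CPs so that pixels outside the hull also fall inside an (enlarged) triangle instead of being extrapolated.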


2020, Vol 12 (14), pp. 2334
Author(s): Lu Zhao, Hongyan Ren, Cheng Cui, Yaohuan Huang

High-resolution remotely sensed imagery has been widely employed to detect urban villages (UVs) in highly urbanized regions, especially in developing countries. However, the understanding of the potential impacts of spatially and temporally differentiated urban internal development on UV detection is still limited. In this study, a partition-strategy-based framework integrating the random forest (RF) model, the object-based image analysis (OBIA) method, and high-resolution remote sensing images was proposed for UV detection. In the core regions of Guangzhou, four original districts were re-divided into five new zones, according to their different proportions of construction land, for the subsequent object-based RF detection of UVs with a series of features. The results show that the proposed framework performs well on UV detection, with an average overall accuracy of 90.23% and a kappa coefficient of 0.8. It also shows the possibility of transferring samples and models to similar areas. In summary, the partition strategy is a potential solution for improving UV-detection accuracy from high-resolution remote sensing images in Guangzhou. We suggest that the spatiotemporal process of urban construction land expansion should be comprehensively understood so as to ensure efficient UV detection in highly urbanized regions. This study can provide meaningful clues for city managers to identify UVs efficiently before devising and implementing their urban plans in the future.


2020, Vol 12 (1), pp. 1169-1184
Author(s): Liang Zhong, Xiaosheng Liu, Peng Yang, Rizhi Lin

Abstract: Nighttime light remote sensing images show significant application potential in marine ship monitoring, but in areas where ships are densely distributed, the detection accuracy of current methods is still limited. This article takes LJ1-01 data as an example, compares them with National Polar-orbiting Partnership (NPP)/Visible Infrared Imaging Radiometer Suite (VIIRS) data, and explores the application of high-resolution nighttime light images in marine ship detection. The radiation values of the two image types were corrected to achieve consistency, and light sources interfering with ship light were filtered out. Then, by combining threshold segmentation with the two-parameter constant false alarm rate (CFAR) method, the ships' location information was obtained, and the reliability of the results was analyzed. The results show that the LJ1-01 data can not only record more potential ship light but also distinguish ship light from background noise. The detection accuracy of the LJ1-01 data in both ship detection methods is significantly higher than that of the NPP/VIIRS data. This study analyzes the characteristics, performance, and application potential of high-resolution nighttime light data in the detection of marine vessels. The results can provide a reference for high-precision monitoring of nighttime marine ships.
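The two-parameter CFAR test mentioned above can be sketched compactly: a pixel is declared a detection when it exceeds the local background mean by more than k background standard deviations, the mean and standard deviation being the two estimated parameters (the k = 3 default here is illustrative, not the paper's setting):

```python
import statistics

def cfar_hit(pixel, background, k=3.0):
    """Two-parameter CFAR test against a local background window:
    detect when (pixel - mean) / std exceeds the multiplier k."""
    mu = statistics.fmean(background)
    sigma = statistics.pstdev(background)
    return sigma > 0 and (pixel - mu) / sigma > k
```

Because the threshold adapts to the local background statistics, the false-alarm rate stays roughly constant across dark open sea and brighter coastal waters, which is the property that makes CFAR attractive for nighttime light imagery.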


2021, Vol 13 (10), pp. 1925
Author(s): Shengzhou Xiong, Yihua Tan, Yansheng Li, Cai Wen, Pei Yan

Object detection in remote sensing images (RSIs) is one of the basic tasks in the field of automatic remote sensing image interpretation. In recent years, deep object detection frameworks from natural scene images (NSIs) have been introduced into object detection on RSIs, and detection performance has improved significantly because of their powerful feature representation. However, many challenges remain concerning the particularities of remote sensing objects. One of the main challenges is the missed detection of small objects, which have less than five percent of the pixels of the big objects. Generally, existing algorithms deal with this problem through multi-scale feature fusion based on a feature pyramid. However, the benefits of this strategy are limited, considering that the location of small objects in the feature map disappears by the time the detection task is processed at the end of the network. In this study, we propose a subtask attention network (StAN), which handles the detection task directly on the shallow layers of the network. First, StAN contains one shared feature branch and two subtask attention branches, a semantic auxiliary subtask and a detection subtask, based on the multi-task attention network (MTAN). Second, the detection branch uses only low-level features, in consideration of small objects. Third, an attention map guidance mechanism is put forward to optimize the network while preserving its identification ability. Fourth, the multi-dimensional sampling module (MdS), global multi-view channel weights (GMulW), and target-guided pixel attention (TPA) are designed to further improve the detection accuracy in complex scenes. The experimental results on the NWPU VHR-10 and DOTA datasets demonstrate that the proposed algorithm achieves state-of-the-art performance and reduces the missed detection of small objects. In addition, ablation experiments confirm the effects of MdS, GMulW, and TPA.
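The abstract does not define TPA's internals; as a loose illustration of pixel attention in general, the sketch below re-weights a 2-D feature map with a sigmoid attention map so that likely-target pixels are amplified and background pixels suppressed:

```python
import math

def apply_pixel_attention(feat, att_logits):
    """Generic pixel attention (an illustrative stand-in, not the paper's TPA):
    multiply each feature value by the sigmoid of its attention logit, so
    high-logit (likely-target) pixels pass through and low-logit pixels fade."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    return [[f * sigmoid(a) for f, a in zip(frow, arow)]
            for frow, arow in zip(feat, att_logits)]
```

Applied on shallow, high-resolution feature maps, this kind of gating lets a detection branch keep small-object locations that would otherwise be drowned out by surrounding background responses.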

