Scale-Adaptive Adversarial Patch Attack for Remote Sensing Image Aircraft Detection

2021 ◽  
Vol 13 (20) ◽  
pp. 4078
Author(s):  
Mingming Lu ◽  
Qi Li ◽  
Li Chen ◽  
Haifeng Li

Adversarial attacks on convolutional neural networks (CNNs) make it possible to generate adversarial patches that render an aircraft undetectable by object detectors, instead of covering the aircraft with large camouflage nets. However, aircraft in remote sensing images (RSIs) vary greatly in scale, which easily causes size mismatches between an adversarial patch and an aircraft: a small patch has no attack effect on a large aircraft, while a large patch completely covers a small aircraft, making it impossible to judge whether the patch itself has an attack effect. We therefore propose Patch-Noobj, an adversarial attack method that addresses the large scale variation of aircraft in RSIs. Patch-Noobj adaptively scales the width and height of the adversarial patch according to the size of the attacked aircraft and generates a universal adversarial patch that can attack aircraft of different sizes. In the experiments, we use the YOLOv3 detector to verify the effectiveness of Patch-Noobj on multiple datasets. The results demonstrate that our universal adversarial patches adapt well to aircraft of different sizes and effectively reduce the Average Precision (AP) of YOLOv3 on the DOTA, NWPU VHR-10, and RSOD datasets by 48.2%, 23.9%, and 20.2%, respectively. Moreover, a universal adversarial patch generated on one dataset is also effective in attacking aircraft in the remaining two datasets, and a patch generated against YOLOv3 also attacks YOLOv5 and Faster R-CNN, which demonstrates the transferability of the adversarial patch.
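As a minimal sketch of the scale-adaptive idea (not the authors' code; the patch-to-box ratio and function names are assumptions), the universal patch is resized to a fraction of each target bounding box before being pasted, so one patch can be applied to aircraft of very different sizes, and the patch pixels are then optimized to suppress the detector's objectness scores:

```python
import torch
import torch.nn.functional as F

def apply_scaled_patch(image, patch, boxes, ratio=0.35):
    """image: (3, H, W) tensor in [0, 1]; patch: (3, ph, pw); boxes: iterable of
    (x1, y1, x2, y2). `ratio` (patch size relative to the box) is an assumed value."""
    _, H, W = image.shape
    attacked = image.clone()
    for x1, y1, x2, y2 in boxes:
        bw, bh = int(x2 - x1), int(y2 - y1)
        pw, ph = max(1, int(bw * ratio)), max(1, int(bh * ratio))
        scaled = F.interpolate(patch.unsqueeze(0), size=(ph, pw),
                               mode="bilinear", align_corners=False)[0]
        # Center the scaled patch on the aircraft, clamped to the image bounds.
        top = min(max(0, int(y1 + (bh - ph) / 2)), H - ph)
        left = min(max(0, int(x1 + (bw - pw) / 2)), W - pw)
        attacked[:, top:top + ph, left:left + pw] = scaled
    return attacked

# Training objective (sketch): minimize the detector's objectness scores on the
# patched image so that the attacked aircraft are no longer reported.
def no_obj_loss(objectness_scores):
    return objectness_scores.mean()
```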

Author(s):  
Xiaochuan Tang ◽  
Mingzhe Liu ◽  
Hao Zhong ◽  
Yuanzhen Ju ◽  
Weile Li ◽  
...  

Landslide recognition is widely used in natural disaster risk management. Traditional landslide recognition is performed mainly by geologists, which is accurate but inefficient. This article introduces multiple instance learning (MIL) to perform automatic landslide recognition. An end-to-end deep convolutional neural network is proposed, referred to as Multiple Instance Learning-based Landslide classification (MILL). First, MILL uses a large-scale remote sensing image classification dataset to pre-train networks for landslide feature extraction. Second, MILL extracts instances and assigns instance labels without pixel-level annotations. Third, MILL uses a new channel attention-based MIL pooling function to map instance-level labels to a bag-level label. We apply MILL to detect landslides in a loess area. Experimental results demonstrate that MILL is effective in identifying landslides in remote sensing images.
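The sketch below illustrates generic attention-based MIL pooling, where instance features extracted from patches of one image (the "bag") are weighted and aggregated into a single bag-level prediction. It is only an illustration under assumed layer names and dimensions; the paper's channel attention-based pooling operates on feature channels and may differ in detail.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, feat_dim=512, num_classes=2):
        super().__init__()
        # Small scoring network that assigns an attention weight to each instance.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, instance_feats):                 # (num_instances, feat_dim)
        weights = torch.softmax(self.attention(instance_feats), dim=0)   # (N, 1)
        bag_feat = (weights * instance_feats).sum(dim=0)                 # (feat_dim,)
        return self.classifier(bag_feat)               # bag-level logits

# Usage: instances are features of patches cut from one remote sensing image.
bag = torch.randn(36, 512)
logits = AttentionMILPooling()(bag)
```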


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3232 ◽  
Author(s):  
Yan Liu ◽  
Qirui Ren ◽  
Jiahui Geng ◽  
Meng Ding ◽  
Jiangyun Li

Efficient and accurate semantic segmentation is a key technique for automatic remote sensing image analysis. While many segmentation methods based on traditional hand-crafted feature extractors exist, processing high-resolution and large-scale remote sensing images remains challenging. In this work, a novel patch-wise semantic segmentation method with a new training strategy based on fully convolutional networks is presented to segment common land resources. First, to handle high-resolution images, the images are split into local patches and a patch-wise network is built. Second, the training data are preprocessed in several ways to address specific characteristics of remote sensing images, i.e., color imbalance, object rotation variations, and lens distortion. Third, a multi-scale training strategy is developed to address the severe scale-variation problem. In addition, the impact of a conditional random field (CRF) on precision is studied. The proposed method was evaluated on a dataset collected from a capital city in West China with the Gaofen-2 satellite; the dataset contains ten common land resources (Grassland, Road, etc.). The experimental results show that the proposed algorithm achieves 54.96% mean intersection over union (MIoU) and outperforms other state-of-the-art methods in remote sensing image segmentation.
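A minimal sketch of the patch-wise preprocessing step (patch size, stride, and padding mode are assumptions, not the paper's settings): the high-resolution image is tiled into fixed-size patches whose top-left coordinates are kept so per-patch predictions can later be stitched back into a full segmentation map.

```python
import numpy as np

def split_into_patches(image, patch_size=512, stride=512):
    """image: (H, W, C) ndarray. Returns a list of ((top, left), patch) pairs."""
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h, stride):
        for left in range(0, w, stride):
            patch = image[top:top + patch_size, left:left + patch_size]
            # Pad edge patches so every patch matches the network input size.
            pad_h = patch_size - patch.shape[0]
            pad_w = patch_size - patch.shape[1]
            if pad_h or pad_w:
                patch = np.pad(patch, ((0, pad_h), (0, pad_w), (0, 0)), mode="edge")
            patches.append(((top, left), patch))
    return patches
```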


Author(s):  
Cunguang Zhang ◽  
Hongxun Jiang ◽  
Riwei Pan ◽  
Haiheng Cao ◽  
Mingliang Zhou

Sea-land segmentation based on edge detection is commonly used in ship detection, coastline extraction, and satellite system applications because of its high accuracy and speed. Pixel-level distribution statistics, however, no longer satisfy the requirements of high-resolution, large-scale remote sensing image processing. To address this problem, we propose a high-throughput hardware architecture for sea-land segmentation based on multi-dimensional parallelism, which is well suited to wide remote sensing images. Efficient multi-dimensional block-level statistics allow relatively infrequent pixel-level memory access, and a boundary-block tracking process replaces whole-image scanning, markedly improving efficiency. Tracking efficiency is further improved by a convenient two-step scanning strategy that promptly feeds back the path state for the large number of same-direction blocks arising in the algorithm. The proposed architecture was deployed on a Xilinx Virtex K7-410T; its practical processing time for a [Formula: see text] remote sensing image is only about 0.4 s, and its peak performance is 1.625 Gbps, which is higher than other FPGA implementations of segmentation algorithms. The proposed structure is highly competitive for processing wide remote sensing images.
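To make the block-level idea concrete, here is a rough software analogue in Python (the thresholds, block size, and labeling rule are assumptions; the paper describes a hardware pipeline, not this code): each block is classified from its mean and variance alone, and only mixed "boundary" blocks would be handed to the finer boundary-tracking stage, so full pixel-level scanning of the whole image is avoided.

```python
import numpy as np

def block_level_segmentation(gray, block=32, sea_thresh=40.0, var_thresh=50.0):
    """gray: (H, W) intensity image. Returns a coarse block label map:
    0 = sea, 1 = land, 2 = boundary block needing pixel-level refinement."""
    h, w = gray.shape
    labels = np.zeros((h // block, w // block), dtype=np.uint8)
    for i in range(h // block):
        for j in range(w // block):
            tile = gray[i * block:(i + 1) * block, j * block:(j + 1) * block]
            if tile.var() < var_thresh:
                # Homogeneous block: dark blocks are sea, bright blocks are land.
                labels[i, j] = 0 if tile.mean() < sea_thresh else 1
            else:
                labels[i, j] = 2   # mixed block: send to boundary tracking
    return labels
```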


2019 ◽  
Vol 12 (1) ◽  
pp. 101 ◽  
Author(s):  
Lirong Han ◽  
Peng Li ◽  
Xiao Bai ◽  
Christos Grecos ◽  
Xiaoyu Zhang ◽  
...  

Recently, the demand for remote sensing image retrieval has been growing and attracting the interest of many researchers because of the increasing number of remote sensing images. Hashing, as a method of retrieving images, has been widely applied to remote sensing image retrieval. To improve hashing performance, we develop a cohesion-intensive deep hashing model for remote sensing image retrieval. The underlying architecture of our deep model is motivated by the state-of-the-art residual net, which avoids vanishing and exploding gradients as the network grows deep. However, unlike a residual net that outputs multiple class labels, we present a residual hash net terminated by a Heaviside-like function for binarizing remote sensing images. In this scenario, the representational power of the residual architecture is exploited to establish an end-to-end deep hashing model. The residual hash net is trained subject to a weighted loss strategy that intensifies the cohesiveness of image hash codes within one class, which effectively addresses the data imbalance problem that normally arises in remote sensing image retrieval tasks. Furthermore, we adopt a gradualness optimization method to obtain optimal model parameters that favor accurate binary codes with little quantization error. We conduct comparative experiments on large-scale remote sensing datasets such as UCMerced and AID. The experimental results validate the hypothesis that our method improves on the retrieval performance of current remote sensing image retrieval approaches.
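A minimal sketch of the two ingredients described above, under assumed names and dimensions (not the authors' implementation): a hash head whose tanh activation is gradually sharpened toward a Heaviside-like step, and a weighted within-class cohesion term that pulls same-class hash codes toward their class centroid, with per-class weights available to counter data imbalance.

```python
import torch
import torch.nn as nn

class HashHead(nn.Module):
    """Maps backbone features (e.g. from a residual net) to near-binary codes."""
    def __init__(self, feat_dim=2048, bits=64):
        super().__init__()
        self.fc = nn.Linear(feat_dim, bits)

    def forward(self, feats, beta=1.0):
        # Increasing beta over training sharpens tanh toward a Heaviside step,
        # one simple way to realize a "gradualness" optimization schedule.
        return torch.tanh(beta * self.fc(feats))

def cohesion_loss(codes, labels, class_weights):
    """Weighted within-class cohesion: same-class codes are pulled toward their
    class centroid; class_weights[c] can be larger for under-represented classes."""
    loss = codes.new_zeros(())
    for c in labels.unique():
        members = codes[labels == c]
        center = members.mean(dim=0, keepdim=True)
        loss = loss + class_weights[int(c)] * ((members - center) ** 2).mean()
    return loss
```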


2019 ◽  
Vol 11 (24) ◽  
pp. 3008 ◽  
Author(s):  
Ziqi Gu ◽  
Zongqian Zhan ◽  
Qiangqiang Yuan ◽  
Li Yan

Remote sensing image dehazing is an extremely complex problem due to the irregular and non-uniform distribution of haze. In this paper, a prior-based dense attentive dehazing network (DADN) is proposed for single remote sensing image haze removal. The proposed network, built from dense blocks and attention blocks in an encoder-decoder architecture, directly learns the mapping between the input images and the corresponding haze-free images, without depending on the traditional atmospheric scattering model (ASM). To better handle non-uniform hazy remote sensing images, we combine a haze density prior with deep learning: an initial haze density map (HDM) is first extracted from the original hazy image and is then fed to the network together with the original hazy image. Meanwhile, a large-scale hazy remote sensing dataset containing both uniform and non-uniform, synthetic and real hazy remote sensing images is created for training and testing the proposed method. Experimental results on this dataset show that the developed dehazing method achieves significant improvements over state-of-the-art methods.
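The sketch below shows how such a density prior could be formed and concatenated with the hazy image as network input. The dark-channel-style estimate and window size are assumptions for illustration; the paper's exact HDM extraction may differ.

```python
import torch
import torch.nn.functional as F

def haze_density_map(hazy, patch=15):
    """hazy: (B, 3, H, W) in [0, 1]. Dense haze raises the minimum local intensity,
    so a min-filtered per-pixel channel minimum gives a rough density estimate."""
    dark = hazy.min(dim=1, keepdim=True).values            # per-pixel channel minimum
    # Min-filter implemented as a negated max-pool over a local window.
    dark = -F.max_pool2d(-dark, kernel_size=patch, stride=1, padding=patch // 2)
    return dark                                            # (B, 1, H, W)

# The prior is stacked with the hazy image, giving a 4-channel network input.
hazy = torch.rand(1, 3, 256, 256)
net_input = torch.cat([hazy, haze_density_map(hazy)], dim=1)
```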


2022 ◽  
Vol 2022 ◽  
pp. 1-14
Author(s):  
Xiu Zhang

Images have become one of the important carriers of visual information because they hold a large amount of information, are easy to transmit and store, and are intuitive to interpret; at the same time, image quality determines the completeness and accuracy of the information conveyed. This research addresses the super-resolution reconstruction of remote sensing images using a convolutional neural network with middle-layer supervision. The designed network has 16 layers in total, with the seventh layer serving as the intermediate supervision layer. Traditional super-resolution reconstruction algorithms and convolutional neural networks have each been studied extensively, but work combining the two is still scarce; because a convolutional neural network can capture the high-frequency features of an image and strengthen its detailed information, its application to image reconstruction deserves study. This article separately reviews the current research status of image super-resolution reconstruction and of convolutional neural networks. The middle supervision layer defines its own error function, which is used in the error back-propagation mechanism of the convolutional neural network to mitigate the vanishing-gradient problem of deep networks. Training is divided into four stages: preprocessing of the original remote sensing images, temporal feature extraction, spatial feature extraction, and the reconstruction output layer. The last layer of the network draws on the single-frame SRCNN algorithm: the output layer overlaps and adds the remote sensing image blocks from the previous layer, averages the overlapping blocks to eliminate block effects, and finally produces the high-resolution remote sensing image, which is equivalent to a filtering operation. To let users compare the super-resolution effect on remote sensing images more clearly, a user interface for the remote sensing image super-resolution software platform is implemented with the Qt5 interface library, built on the middle-layer-supervised convolutional neural network and the super-resolution reconstruction algorithm proposed in this paper. When training reaches 35 epochs, the network has converged; at this point the loss function converges to 0.017 and the cumulative training time is about 8 hours. This research helps to improve the visual quality of remote sensing images.
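A minimal sketch of the middle-layer supervision idea: an auxiliary reconstruction head after the seventh convolution adds a second loss term so gradients reach the early layers more directly. The 16-convolution depth and the seventh-layer supervision point follow the abstract; the channel widths, loss weight, and layer split between the two heads are assumptions.

```python
import torch
import torch.nn as nn

class MidSupervisedSRNet(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        convs = lambda n: nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
            for _ in range(n)])
        self.head = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        self.early = convs(6)        # head + early = 7 convs before supervision
        self.mid_out = nn.Conv2d(channels, 1, 3, padding=1)   # auxiliary SR output
        self.late = convs(8)
        self.final_out = nn.Conv2d(channels, 1, 3, padding=1) # 16 convs in total

    def forward(self, x):
        f = self.early(self.head(x))
        return self.mid_out(f), self.final_out(self.late(f))

def total_loss(mid_sr, final_sr, target, alpha=0.3):
    """Final reconstruction loss plus a weighted intermediate supervision term;
    alpha is an assumed weight, not a value from the paper."""
    mse = nn.functional.mse_loss
    return mse(final_sr, target) + alpha * mse(mid_sr, target)
```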

