High-Speed Ship Detection in SAR Images Based on a Grid Convolutional Neural Network

2019 ◽  
Vol 11 (10) ◽  
pp. 1206 ◽  
Author(s):  
Tianwen Zhang ◽  
Xiaoling Zhang

As an active microwave sensor, synthetic aperture radar (SAR) offers all-day, all-weather earth observation and has become one of the most important means of high-resolution earth observation and global resource management. Ship detection in SAR images also plays an increasingly important role in ocean observation and disaster relief. Nowadays, both traditional feature extraction methods and deep learning (DL) methods focus almost exclusively on improving ship detection accuracy, while detection speed is neglected. However, the speed of SAR ship detection is extremely important, especially in real-time maritime rescue and emergency military decision-making. To address this problem, this paper proposes a novel approach for high-speed ship detection in SAR images based on a grid convolutional neural network (G-CNN). The method improves detection speed by dividing the input image into a grid, inspired by the basic idea of you only look once (YOLO), and by using depthwise separable convolution. G-CNN is a new network structure composed mainly of a backbone convolutional neural network (B-CNN) and a detection convolutional neural network (D-CNN). First, the SAR image to be detected is divided into grid cells, and each grid cell is responsible for detecting specific ships. Then, the whole image is fed into the B-CNN to extract features. Finally, ship detection is completed by the D-CNN at three scales. We experimented on an open SAR Ship Detection Dataset (SSDD) used by many other scholars and then validated the transfer capability of G-CNN on two SAR images from RadarSat-1 and Gaofen-3. The experimental results show that the detection speed of the proposed method is faster than that of existing methods, such as the faster region-based convolutional neural network (Faster R-CNN), the single shot multi-box detector (SSD), and YOLO, under the same hardware environment with an NVIDIA GTX1080 graphics processing unit (GPU), while the detection accuracy remains within an acceptable range. The proposed G-CNN ship detection system has great application value in real-time maritime disaster rescue and emergency military strategy formulation.
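
The grid assignment that G-CNN borrows from YOLO can be illustrated with a minimal sketch (not the authors' code): the grid cell containing a ship's centre is made responsible for predicting that ship, and the detection head regresses the centre offsets within that cell. The grid size S = 13 and the 512 x 512 image size below are illustrative assumptions.

    import numpy as np

    def responsible_cell(box, img_w, img_h, S=13):
        """box = (x_min, y_min, x_max, y_max) in pixels. Returns the (row, col) of the
        grid cell containing the box centre and the centre offsets inside that cell."""
        cx = 0.5 * (box[0] + box[2]) / img_w      # normalised centre x in [0, 1)
        cy = 0.5 * (box[1] + box[3]) / img_h      # normalised centre y in [0, 1)
        col, row = int(cx * S), int(cy * S)       # index of the responsible cell
        dx, dy = cx * S - col, cy * S - row       # offsets the detection head would regress
        return (row, col), (dx, dy)

    # Example: a ship centred at (300, 420) in a 512 x 512 SAR image chip
    print(responsible_cell((280, 400, 320, 440), 512, 512))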

2019 ◽  
Vol 11 (21) ◽  
pp. 2483 ◽  
Author(s):  
Zhang ◽  
Zhang ◽  
Shi ◽  
Wei

As an active microwave imaging sensor for high-resolution earth observation, synthetic aperture radar (SAR) has been extensively applied in military, agriculture, geology, ecology, oceanography, and other fields, owing to its prominent advantages of all-weather and all-time working capacity. In the marine field in particular, SAR can provide numerous high-quality services for fishery management, traffic control, sea-ice monitoring, marine environmental protection, and so on. Among these, ship detection in SAR images has attracted increasing attention on account of the urgent requirements of maritime rescue and military strategy formulation. Nowadays, most research focuses on improving ship detection accuracy, while detection speed is frequently neglected, whether traditional feature extraction methods or modern deep learning (DL) methods are used. However, high-speed SAR ship detection is of great practical value because it enables real-time maritime disaster rescue and emergency military planning. Therefore, to address this problem, we propose a novel high-speed SAR ship detection approach based mainly on a depthwise separable convolutional neural network (DS-CNN). In this approach, we integrate a multi-scale detection mechanism, a concatenation mechanism, and an anchor box mechanism to establish a brand-new lightweight network architecture for high-speed SAR ship detection. We use DS-CNN, which consists of a depthwise convolution (D-Conv2D) and a pointwise convolution (P-Conv2D), as a substitute for the conventional convolutional neural network (C-CNN). In this way, the number of network parameters is markedly reduced and the ship detection speed is dramatically improved. We experimented on an open SAR ship detection dataset (SSDD) to validate the correctness and feasibility of the proposed method. To verify its strong transfer capability, we also carried out actual ship detection on a wide-region, large-size Sentinel-1 SAR image. Ultimately, under the same hardware platform with an NVIDIA RTX2080Ti GPU, the experimental results indicated that the ship detection speed of the proposed method is faster than that of other methods, while the detection accuracy is only slightly sacrificed compared with state-of-the-art object detectors. Our method has great application value in real-time maritime disaster rescue and emergency military planning.
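
The D-Conv2D + P-Conv2D substitution described above can be sketched in a few lines of PyTorch (a hedged illustration, not the authors' network); the channel counts are arbitrary assumptions chosen only to show the parameter saving.

    import torch
    import torch.nn as nn

    c_in, c_out = 64, 128
    standard = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)          # conventional 3x3 conv
    ds_conv = nn.Sequential(
        nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in),    # D-Conv2D (depthwise)
        nn.Conv2d(c_in, c_out, kernel_size=1),                           # P-Conv2D (pointwise)
    )

    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(standard), count(ds_conv))         # 73856 vs. 8960 parameters

    x = torch.randn(1, c_in, 128, 128)             # dummy SAR feature map
    assert standard(x).shape == ds_conv(x).shape   # same output shape, far fewer weights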


2020 ◽  
Vol 10 (14) ◽  
pp. 4720 ◽  
Author(s):  
Zhiqiang Teng ◽  
Shuai Teng ◽  
Jiqiao Zhang ◽  
Gongfa Chen ◽  
Fangsen Cui

Traditional structural health monitoring (SHM) methods have obvious disadvantages: they are time-consuming, laborious, and non-synchronous. This paper presents a novel and efficient approach to detecting structural damage from real-time vibration signals via a convolutional neural network (CNN). Because vibration signals (acceleration) reflect the structural response to changes in the structural state, a CNN used as a classifier can map vibration signals to the structural state and thereby detect structural damage. As it is difficult to obtain enough damage samples in practical engineering, finite element analysis (FEA) provides an alternative solution to this problem. In this paper, training samples for the CNN are obtained using FEA of a steel frame, and the effectiveness of the proposed detection method is evaluated by feeding the experimental data into the CNN. The results indicate that the detection accuracy of the CNN trained on FEA data reaches 94% for damage introduced in the numerical model and 90% for damage in the real steel frame. The CNN is shown to perform well for both single and multiple damage cases. The combination of FEA and experimental data provides enough training and testing samples for the CNN, which improves the practicability of the CNN-based detection method in engineering practice.
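
A minimal, hypothetical sketch of the idea described above: a 1-D CNN that maps a fixed-length acceleration record to a structural-state class. The layer sizes, the signal length (1024 samples), and the number of damage classes (5) are assumptions for illustration, not the authors' configuration.

    import torch
    import torch.nn as nn

    class VibrationCNN(nn.Module):
        def __init__(self, n_channels=1, n_classes=5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes)
            )

        def forward(self, x):            # x: (batch, channels, samples)
            return self.classifier(self.features(x))

    # FEA-simulated signals would serve as training data; measured signals as test data.
    logits = VibrationCNN()(torch.randn(8, 1, 1024))
    print(logits.shape)                  # torch.Size([8, 5])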


Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1186
Author(s):  
Ranjana Koshy ◽  
Ausif Mahmood

Face liveness detection is a critical preprocessing step in face recognition for avoiding face spoofing attacks, where an impostor can impersonate a valid user for authentication. While considerable research has recently been done on improving the accuracy of face liveness detection, the best current approaches use a two-step process of first applying non-linear anisotropic diffusion to the incoming image and then using a deep network for the final liveness decision. Such an approach is not viable for real-time face liveness detection. We develop two end-to-end real-time solutions in which nonlinear anisotropic diffusion based on an additive operator splitting scheme is first applied to an incoming static image; this enhances the edges and surface texture and preserves the boundary locations in the real image. The diffused image is then forwarded to a pre-trained Specialized Convolutional Neural Network (SCNN) and to Inception network version 4, which identify the complex and deep features for face liveness classification. We evaluate the performance of our integrated approach using the SCNN and Inception v4 on the Replay-Attack and Replay-Mobile datasets. The entire architecture is created in such a manner that, once trained, face liveness detection can be accomplished in real time. We achieve promising face liveness detection accuracies of 96.03% and 96.21% with the SCNN, and 94.77% and 95.53% with Inception v4, on the Replay-Attack and Replay-Mobile datasets, respectively. We also develop a novel deep architecture for face liveness detection on video frames that uses diffusion of the images followed by a deep Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to classify the video sequence as real or fake. Even though the use of a CNN followed by an LSTM is not new, combining it with diffusion (which has proven to be the best approach for single-image liveness detection) is novel. Performance evaluation of our architecture on the Replay-Attack dataset gave 98.71% test accuracy and 2.77% Half Total Error Rate (HTER), and on the Replay-Mobile dataset gave 95.41% accuracy and 5.28% HTER.
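
A hedged sketch (not the paper's exact architecture) of the CNN + LSTM pipeline described for video: a small stand-in CNN encodes each (diffused) frame, an LSTM aggregates the per-frame features over time, and a final linear layer gives the real/spoof decision. The feature and hidden sizes are assumptions.

    import torch
    import torch.nn as nn

    class LivenessCNNLSTM(nn.Module):
        def __init__(self, feat_dim=128, hidden=64):
            super().__init__()
            self.cnn = nn.Sequential(                       # stand-in per-frame encoder
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
            )
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)                # real vs. spoof

        def forward(self, clip):                            # clip: (B, T, 3, H, W), diffused frames
            b, t = clip.shape[:2]
            feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)
            return self.head(out[:, -1])                    # decision from the last time step

    print(LivenessCNNLSTM()(torch.randn(2, 8, 3, 64, 64)).shape)  # torch.Size([2, 2])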


2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Yao Chen ◽  
Tao Duan ◽  
Changyuan Wang ◽  
Yuanyuan Zhang ◽  
Mo Huang

Ship detection in synthetic aperture radar (SAR) imagery has many valuable applications in both civil and military fields and has received extraordinary attention in recent years. Traditional detection methods are insensitive to multiscale ships and are usually time-consuming, resulting in low detection accuracy and limited suitability for real-time processing. To balance accuracy and speed, an end-to-end ship detection method for complex inshore and offshore scenes based on deep convolutional neural networks (CNNs) is proposed in this paper. First, the SAR images are divided into grids, and anchor boxes are predefined for the responsible grid cells to enable dense ship prediction. Then, Darknet-53 with residual units is adopted as the backbone to extract features, and a top-down pyramid structure is added for multiscale feature fusion with concatenation. In this way, abundant hierarchical features containing both spatial and semantic information are extracted. Meanwhile, strategies such as soft non-maximum suppression (Soft-NMS), mix-up and mosaic data augmentation, multiscale training, and hybrid optimization are used for performance enhancement. In addition, the model is trained from scratch to avoid the learning-objective bias introduced by pretraining. The proposed one-stage method performs end-to-end inference with a single network, so the detection speed is guaranteed by the concise paradigm. Extensive experiments are performed on the public SAR ship detection dataset (SSDD), and the results show that the method can detect both inshore and offshore ships with higher accuracy than other mainstream methods, yielding an average accuracy of 95.52% at a fast detection speed of about 72 frames per second (FPS). Actual Sentinel-1 and Gaofen-3 data are used for verification, and the detection results further demonstrate the effectiveness and robustness of the method.
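
The Soft-NMS step mentioned above can be sketched in NumPy (a hedged illustration following the Gaussian re-scoring variant of Bodla et al., not necessarily the paper's exact settings); the (x1, y1, x2, y2) box layout, sigma, and the example boxes are assumptions.

    import numpy as np

    def iou(box, boxes):
        # Intersection-over-union of one box against an array of boxes.
        x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
        return inter / (area(box) + area(boxes) - inter)

    def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
        # Gaussian re-scoring: overlapping boxes are down-weighted instead of discarded.
        boxes, scores = boxes.copy(), scores.astype(float).copy()
        keep = []
        while scores.max() > score_thresh:
            i = scores.argmax()
            keep.append(boxes[i])
            scores = scores * np.exp(-iou(boxes[i], boxes) ** 2 / sigma)
            scores[i] = 0.0                       # the selected box is not revisited
        return np.array(keep)

    b = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [60, 60, 70, 70]], dtype=float)
    s = np.array([0.9, 0.8, 0.7])
    print(soft_nms(b, s))                          # boxes kept in order of decayed score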


Author(s):  
Guoqiang Chen ◽  
Mengchao Liu ◽  
Hongpeng Zhou ◽  
Bingxin Bai

Background: Vehicle pose detection plays an important role in monitoring vehicle behavior and parking situations. Real-time, high-accuracy detection of vehicle pose is therefore of great importance. Objective: The goal of this work is to construct a new network to detect the vehicle angle based on regression Convolutional Neural Networks (CNNs). The main contribution is that several traditional regression CNNs are combined into a Multi-Collaborative Regression CNN (MCR-CNN), which greatly enhances the vehicle angle detection precision and eliminates abnormal detection errors. Methods: Two challenges of the traditional regression CNN in detecting the vehicle pose angle are identified. The first is detection failure resulting from the conversion of the periodic angle to a linear angle, and the second is the large detection error that arises when the training sample value is very small. An MCR-CNN is proposed to solve the first challenge, and a two-stage method is proposed to solve the second. The architecture of the MCR-CNN is designed in detail. After the training and testing data sets are constructed, the MCR-CNN is trained and tested for vehicle angle detection. Results: The experimental results show that, with the proposed MCR-CNN, testing samples with an error below 4° account for 95% of all testing samples. The MCR-CNN has significant advantages over traditional vehicle pose detection methods. Conclusion: The proposed MCR-CNN can not only detect the vehicle angle in real time but also achieves very high detection accuracy and robustness. The proposed approach can be used for autonomous vehicles and parking lot monitoring.
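
The MCR-CNN itself combines several collaborative regression CNNs; the short sketch below only illustrates the periodicity problem it tackles, using a plain (sin, cos) encoding as a stand-in technique: regressing the raw angle makes 359° and 1° look far apart, whereas a (sin, cos) target keeps them close and remains invertible.

    import numpy as np

    def angle_to_target(deg):
        rad = np.deg2rad(deg)
        return np.array([np.sin(rad), np.cos(rad)])   # continuous across the wrap-around

    def target_to_angle(t):
        return np.rad2deg(np.arctan2(t[0], t[1])) % 360.0

    print(abs(359 - 1))                                               # naive linear error: 358
    print(np.linalg.norm(angle_to_target(359) - angle_to_target(1)))  # small target distance
    print(target_to_angle(angle_to_target(359)))                      # recovers 359.0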


2020 ◽  
Vol 16 (3) ◽  
pp. 155014772091295 ◽  
Author(s):  
Zhijing Xu ◽  
Yuhao Huo ◽  
Kun Liu ◽  
Sidong Liu

Deep learning algorithms have been increasingly used in ship image detection and classification. To improve ship detection and classification in photoelectric images, an improved recurrent attention convolutional neural network is proposed. The proposed network has a multi-scale architecture and consists of three cascading sub-networks, each with a VGG19 network for image feature extraction and an attention proposal network for locating the feature region. A scale-dependent pooling algorithm is designed to select an appropriate convolution layer in the VGG19 network for classification, and a multi-feature mechanism is introduced in the attention proposal network to describe the feature regions. The VGG19 and the attention proposal network are cross-trained to accelerate convergence and to improve detection accuracy. The proposed method is trained and validated on a self-built ship database and effectively improves the detection accuracy to 86.7%, outperforming the baseline VGG19 and recurrent attention convolutional neural network methods.
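
A hedged two-scale sketch of the cascading attention idea described above (a small stand-in encoder instead of VGG19, and a toy attention proposal head): each stage classifies the current view and predicts a square attention region, which is cropped, upsampled, and passed to the next, finer stage. All sizes and the (cx, cy, half-size) parameterisation are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Stage(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.cls = nn.Linear(16, n_classes)           # ship class scores
            self.apn = nn.Linear(16, 3)                   # (cx, cy, half-size) in [0, 1]

        def forward(self, x):
            f = self.encoder(x)
            return self.cls(f), torch.sigmoid(self.apn(f))

    def crop(x, region, out_size):
        # Crop the proposed square region (first image in the batch) and upsample it.
        _, _, h, w = x.shape
        cx, cy, hl = region[0]
        half = (hl * 0.5 * min(h, w)).clamp(min=8).int().item()
        px, py = int(cx * w), int(cy * h)
        x0, y0 = max(px - half, 0), max(py - half, 0)
        patch = x[:, :, y0:y0 + 2 * half, x0:x0 + 2 * half]
        return F.interpolate(patch, size=out_size, mode="bilinear", align_corners=False)

    img = torch.randn(1, 3, 224, 224)
    s1, s2 = Stage(), Stage()
    logits1, region = s1(img)
    logits2, _ = s2(crop(img, region, (224, 224)))        # finer-scale prediction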


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Huaiguang Liu ◽  
Wancheng Ding ◽  
Qianwen Huang ◽  
Li Fang

Defects in solar cell components (SCCs) affect their service life and power generation efficiency. In this paper, defect images of SCCs were acquired by the photoluminescence (PL) method and processed by an advanced lightweight convolutional neural network (CNN). First, to handle the detection of high-resolution SCC images, each silicon wafer image was segmented based on the local difference extremum of edge projection (LDEEP). Second, to detect small defects or defects with weak edges in the silicon wafer, an improved lightweight CNN model was proposed, with a deeper backbone feature extraction network, an enhanced feature fusion layer, and a three-scale feature prediction layer that provide more feature detail. The experimental results showed that the improved model achieves a good balance between detection accuracy and detection speed, with the mean average precision (mAP) reaching 87.55%, which is 6.78% higher than that of the original algorithm. Moreover, the detection speed reached 40 frames per second (fps), which meets the precision and real-time requirements of the task. The method can thus accomplish the SCC defect detection task well, laying the foundation for automatic detection of SCC defects.
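
A hedged NumPy sketch of the edge-projection idea behind the wafer segmentation step (an illustration of the principle, not the authors' LDEEP implementation): edge strength is projected onto the horizontal axis, and pronounced local peaks of that projection are taken as cut positions between silicon wafers. The synthetic image, threshold, and minimum gap are assumptions.

    import numpy as np

    def column_cut_positions(img, min_gap=20, rel_thresh=0.5):
        grad_x = np.abs(np.diff(img.astype(float), axis=1))   # horizontal edge strength
        proj = grad_x.sum(axis=0)                              # projection onto columns
        thresh = rel_thresh * proj.max()
        peaks = [i for i in range(1, len(proj) - 1)
                 if proj[i] > thresh and proj[i] >= proj[i - 1] and proj[i] >= proj[i + 1]]
        cuts = []
        for p in peaks:                                        # suppress peaks that are too close
            if not cuts or p - cuts[-1] > min_gap:
                cuts.append(p)
        return cuts

    # Synthetic module: two bright wafer regions separated by a dark gap at columns 95-144
    module = np.ones((80, 200)); module[:, 95:145] = 0.0
    print(column_cut_positions(module))                        # [94, 144]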

