A Deep Detection Network Based on Interaction of Instance Segmentation and Object Detection for SAR Images

2021, Vol. 13 (13), pp. 2582
Author(s):  
Zitong Wu ◽  
Biao Hou ◽  
Bo Ren ◽  
Zhongle Ren ◽  
Shuang Wang ◽  
...  

Ship detection is a challenging task for synthetic aperture radar (SAR) images. Ships appear with arbitrary orientations and at multiple scales in SAR images, and there is often heavy clutter near the ships. Traditional detection algorithms are not robust to these situations and easily produce redundant detection areas. With the continuous improvement in resolution, traditional algorithms cannot achieve high-precision ship detection in SAR images, and an increasing number of deep learning algorithms have been applied to SAR ship detection. In this study, a new ship detection network, known as the instance segmentation assisted ship detection network (ISASDNet), is presented. ISASDNet is a two-stage detection network with two branches. One branch, called the object branch, extracts object-level information to obtain bounding boxes and classification results. The other branch, called the pixel branch, is utilized for instance segmentation. In the pixel branch, the designed global relational inference layer maps features to an interaction space to learn the relationship between ships and background. The global reasoning module (GRM), built from global relational inference layers, better extracts the instance segmentation results of ships. A mask-assisted ship detection module (MASDM) follows the two branches and improves detection results by interacting with their outputs. In addition, a strategy is designed to extract the masks of SAR ships, which enables ISASDNet to perform object detection training and instance segmentation training at the same time. Experiments carried out on two different datasets demonstrate the superiority of ISASDNet over other networks.
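
As a rough illustration of the global relational inference idea described above, the sketch below projects pixel features into a small interaction space, reasons over the resulting nodes, and projects back to the pixel grid. The layer sizes, node count, and single graph-convolution step are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GlobalReasoningUnit(nn.Module):
    """Minimal sketch of a global relational inference layer.

    Feature maps are projected into a small 'interaction space' of N nodes,
    a graph convolution reasons over node relations (e.g., ship vs. clutter),
    and the result is projected back to the pixel grid.
    """
    def __init__(self, in_channels=256, node_dim=128, num_nodes=32):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, node_dim, 1)    # pixel -> node features
        self.project = nn.Conv2d(in_channels, num_nodes, 1)  # soft assignment weights
        self.gcn = nn.Conv1d(node_dim, node_dim, 1)          # reasoning over nodes
        self.expand = nn.Conv2d(node_dim, in_channels, 1)    # back to pixel space

    def forward(self, x):
        b, _, h, w = x.shape
        feats = self.reduce(x).flatten(2)                # (b, node_dim, h*w)
        assign = self.project(x).flatten(2).softmax(-1)  # (b, num_nodes, h*w)
        nodes = feats @ assign.transpose(1, 2)           # (b, node_dim, num_nodes)
        nodes = torch.relu(self.gcn(nodes))              # graph-style reasoning
        out = nodes @ assign                             # redistribute to pixels
        return x + self.expand(out.view(b, -1, h, w))    # residual connection
```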

Author(s):  
Jiajia Liao ◽  
Yujun Liu ◽  
Yingchao Piao ◽  
Jinhe Su ◽  
Guorong Cai ◽  
...  

Recent advances in camera-equipped drone applications have increased the demand for visual object detection algorithms with deep learning for aerial images. A single deep learning model has several limitations in accuracy. Inspired by the fact that ensemble learning can significantly improve the generalization ability of a model, we introduce a novel integration strategy to combine the inference results of two different methods without non-maximum suppression. In this paper, a global and local ensemble network (GLE-Net) is proposed to increase the quality of predictions by considering global weights for different models and adjusting local weights for bounding boxes. Specifically, the global module assigns different weights to the models. In the local module, we group the bounding boxes corresponding to the same object into a cluster. Each cluster generates a final predicted box, which takes the highest score in the cluster as its score. Experiments on the VisDrone2019 benchmark show promising performance of GLE-Net compared with the baseline network.
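
A minimal sketch of this kind of NMS-free fusion is given below, assuming each model contributes (box, score) pairs and a global per-model weight: boxes are greedily clustered by IoU, each cluster is averaged into one box, and the cluster's highest weighted score becomes the final score. The threshold and weighting scheme are assumptions, not GLE-Net's exact modules.

```python
import numpy as np

def iou(a, b):
    """IoU between two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def ensemble_boxes(preds, model_weights, iou_thr=0.55):
    """preds: one list of (box, score) tuples per model; returns fused boxes."""
    # Apply the global (per-model) weight to every score.
    items = [(box, s * w) for preds_m, w in zip(preds, model_weights)
             for box, s in preds_m]
    items.sort(key=lambda t: -t[1])
    clusters = []
    for box, score in items:       # greedy clustering by IoU
        for c in clusters:
            if iou(box, c[0][0]) > iou_thr:
                c.append((box, score))
                break
        else:
            clusters.append([(box, score)])
    fused = []
    for c in clusters:
        boxes = np.array([b for b, _ in c], dtype=float)
        scores = np.array([s for _, s in c])
        # Score-weighted average box, scored with the cluster's best score.
        fused.append((boxes.T @ scores / scores.sum(), scores.max()))
    return fused
```

Averaging the member boxes rather than suppressing all but one is what distinguishes this style of fusion from standard non-maximum suppression.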


2018, Vol. 232, pp. 04036
Author(s):  
Jun Yin ◽  
Huadong Pan ◽  
Hui Su ◽  
Zhonggeng Liu ◽  
Zhirong Peng

We propose an object detection method that predicts oriented bounding boxes (OBB) to estimate object locations, scales, and orientations, based on YOLO (You Only Look Once), one of the top detection algorithms performing well in both accuracy and speed. Horizontal bounding boxes (HBB), which are not robust to orientation variances, are used by existing object detection methods to detect targets. The proposed orientation invariant YOLO (OIYOLO) detector can effectively deal with bird's-eye-view images where the orientation angles of the objects are arbitrary. To estimate the rotated angle of objects, we design a new angle loss function, so that training OIYOLO forces the network to learn the annotated orientation angle of objects, making OIYOLO orientation invariant. The proposed approach of predicting OBB can also be applied in other detection frameworks. In addition, to evaluate the proposed OIYOLO detector, we create a UAV-DAHUA dataset accurately annotated with object locations, scales, and orientation angles. Extensive experiments conducted on the UAV-DAHUA and DOTA datasets demonstrate that OIYOLO achieves state-of-the-art detection performance with high efficiency compared with the baseline YOLO algorithms.
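
As a hedged sketch of what such an angle loss term can look like, the function below penalizes the angular difference between predicted and annotated orientations while respecting the pi-periodicity of a rotated box (a box rotated by pi is the same box). The paper's exact loss may differ.

```python
import math
import torch

def angle_loss(pred_theta, gt_theta):
    """Illustrative periodic angle-regression loss (an assumption, not OIYOLO's exact term)."""
    diff = pred_theta - gt_theta
    # Map the difference into (-pi/2, pi/2] so equivalent orientations cost nothing.
    diff = torch.remainder(diff + math.pi / 2, math.pi) - math.pi / 2
    # Smooth penalty, zero when predicted and annotated angles agree.
    return torch.mean(1.0 - torch.cos(2.0 * diff))
```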


2020, Vol. 34 (07), pp. 12208-12215
Author(s):  
Shaoru Wang ◽  
Yongchao Gong ◽  
Junliang Xing ◽  
Lichao Huang ◽  
Chang Huang ◽  
...  

Object detection and instance segmentation are two fundamental computer vision tasks. They are closely correlated, but their relations have not yet been fully explored in most previous work. This paper presents RDSNet, a novel deep architecture for reciprocal object detection and instance segmentation. To reciprocate these two tasks, we design a two-stream structure to jointly learn features on both the object level (i.e., bounding boxes) and the pixel level (i.e., instance masks). Within this structure, information from the two streams is fused alternately: information on the object level introduces awareness of instances and translation variance to the pixel level, and information on the pixel level refines the localization accuracy of objects in return. Specifically, a correlation module and a cropping module are proposed to yield instance masks, as well as a mask-based boundary refinement module for more accurate bounding boxes. Extensive experimental analyses and comparisons on the COCO dataset demonstrate the effectiveness and efficiency of RDSNet. The source code is available at https://github.com/wangsr126/RDSNet.
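
One way to picture the correlation module described above is the sketch below, assuming each detected object carries an embedding vector that is correlated with a per-pixel embedding map to produce one mask logit map per instance. Dimensions and layer choices are illustrative assumptions, not RDSNet's exact design.

```python
import torch
import torch.nn as nn

class CorrelationMask(nn.Module):
    """Minimal sketch of a correlation-style mask head.

    Correlating an object-level embedding with a pixel-level embedding map
    yields an instance mask logit map for every detected object.
    """
    def __init__(self, in_channels=256, embed_dim=32):
        super().__init__()
        self.pixel_head = nn.Conv2d(in_channels, embed_dim, 1)  # pixel-stream embedding

    def forward(self, pixel_feats, obj_embeddings):
        # pixel_feats: (b, in_channels, h, w); obj_embeddings: (b, n_obj, embed_dim)
        pix = self.pixel_head(pixel_feats)        # (b, embed_dim, h, w)
        b, d, h, w = pix.shape
        logits = obj_embeddings @ pix.flatten(2)  # (b, n_obj, h*w) correlation
        return logits.view(b, -1, h, w)           # per-instance mask logits
```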


Electronics, 2021, Vol. 10 (3), pp. 279
Author(s):  
Rafael Padilla ◽  
Wesley L. Passos ◽  
Thadeu L. B. Dias ◽  
Sergio L. Netto ◽  
Eduardo A. B. da Silva

Recent outstanding results of supervised object detection in competitions and challenges are often associated with specific metrics and datasets. The evaluation of such methods applied in different contexts has increased the demand for annotated datasets. Annotation tools represent the location and size of objects in distinct formats, leading to a lack of consensus on the representation. Such a scenario often complicates the comparison of object detection methods. This work alleviates this problem along the following lines: (i) it provides an overview of the most relevant evaluation methods used in object detection competitions, highlighting their peculiarities, differences, and advantages; (ii) it examines the most widely used annotation formats, showing how different implementations may influence the assessment results; and (iii) it provides a novel open-source toolkit supporting different annotation formats and 15 performance metrics, making it easy for researchers to evaluate the performance of their detection algorithms on most known datasets. In addition, this work proposes a new metric, also included in the toolkit, for evaluating object detection in videos that is based on the spatio-temporal overlap between the ground-truth and detected bounding boxes.
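
As a concrete illustration, below is a minimal sketch of box IoU together with one way to extend it to a spatio-temporal overlap between tracks. The tube_iou function is an assumption for illustration only; it is not the toolkit's exact video metric.

```python
def box_iou(a, b):
    """IoU between two [x1, y1, x2, y2] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def tube_iou(gt_track, det_track):
    """Illustrative spatio-temporal overlap between two tracks.

    Each track maps frame index -> box. Per-frame IoU is averaged over the
    union of frames; frames covered by only one track contribute zero,
    penalizing temporal misalignment as well as spatial misalignment.
    """
    frames = set(gt_track) | set(det_track)
    per_frame = [box_iou(gt_track[f], det_track[f])
                 if f in gt_track and f in det_track else 0.0
                 for f in frames]
    return sum(per_frame) / max(len(frames), 1)
```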


2021, Vol. 13 (21), pp. 4384
Author(s):  
Danpei Zhao ◽  
Chunbo Zhu ◽  
Jing Qi ◽  
Xinhu Qi ◽  
Zhenhua Su ◽  
...  

This paper addresses the fact that existing instance segmentation methods, designed for optical images, do not take the imaging mechanism and target characteristics of synthetic aperture radar (SAR) images into account. Thus, we propose a method for SAR ship instance segmentation based on a synergistic attention mechanism, which not only improves the performance of ship detection with multi-task branches but also provides pixel-level contours for subsequent applications such as orientation or category determination. The proposed method, SA R-CNN, presents a synergistic attention strategy at the image, semantic, and target levels, with modules corresponding to the different stages of the instance segmentation framework. The global attention module (GAM), semantic attention module (SAM), and anchor attention module (AAM) were constructed for feature extraction, feature fusion, and target location, respectively, for multi-scale ship targets under complex background conditions. Compared with several state-of-the-art methods, our method reached 68.7 AP in detection and 56.5 AP in segmentation on the HRSID dataset, and 91.5 AP in the detection task on the SSDD dataset.
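
As a stand-in for the attention modules named above, here is a minimal squeeze-and-excitation style gate that reweights feature channels before they reach the detection heads. The paper's GAM, SAM, and AAM are more elaborate, so treat this purely as a sketch of the attention idea, with illustrative layer sizes.

```python
import torch.nn as nn

class ChannelAttentionGate(nn.Module):
    """Sketch of a channel-attention gate for feature fusion (an assumption,
    not the paper's GAM/SAM/AAM design)."""
    def __init__(self, channels=256, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global context per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)  # channel-wise reweighting of the fused features
```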


Sensors, 2021, Vol. 21 (19), pp. 6391
Author(s):  
Hongjie Deng ◽  
Daji Ergu ◽  
Fangyao Liu ◽  
Bo Ma ◽  
Ying Cai

With the continuous development of artificial intelligence, embedding object detection algorithms into autonomous underwater detectors for marine garbage cleanup has become an emerging application area. Considering the complexity of the marine environment and the low resolution of the images taken by underwater detectors, this paper proposes an improved algorithm based on Mask R-CNN, with the aim of achieving high-accuracy marine garbage detection and instance segmentation. First, dilated convolutions are introduced into the Feature Pyramid Network to enhance the feature extraction ability for small objects. Second, a spatial-channel attention mechanism is used to reweight features adaptively, effectively focusing attention on the detection targets. Third, a re-scoring branch is added to improve the accuracy of instance segmentation by scoring the predicted masks based on Generalized Intersection over Union. Finally, we train the proposed algorithm on the TrashCan dataset, evaluating its effectiveness with various metrics and comparing it with existing algorithms. The experimental results show that, compared to the baseline provided by the TrashCan dataset, the proposed algorithm improves mAP on the two tasks of garbage detection and instance segmentation by 9.6 and 5.0 points, respectively, a significant improvement in performance. Thus, it can be better applied in the marine environment to achieve high-precision object detection and instance segmentation.
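
A minimal sketch of the first idea, injecting dilated convolutions into an FPN lateral branch so the receptive field grows without losing resolution, is shown below. The dilation rates and channel sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DilatedLateral(nn.Module):
    """Sketch of a dilated-convolution lateral branch for an FPN level.

    Parallel 3x3 branches with different dilation rates enlarge the
    receptive field at full resolution, which helps small-object features.
    """
    def __init__(self, in_channels, out_channels=256, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, 3, padding=r, dilation=r)
            for r in rates
        )
        # Fuse the multi-rate responses back to the FPN channel width.
        self.merge = nn.Conv2d(out_channels * len(rates), out_channels, 1)

    def forward(self, x):
        return self.merge(torch.cat([b(x) for b in self.branches], dim=1))
```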


Author(s):  
Prof. Pradnya Kasture ◽  
Aishwarya Kumkar ◽  
Yash Jagtap ◽  
Akshay Tangade ◽  
Aditya Pole

Vision is one of the most important human senses, and it plays a vital role in human interaction with surrounding objects. Many papers have been published on these topics, presenting various computer vision products and services and new electronic devices for visually impaired people. The aim here is to study different object detection methods. Compared to other object detection methods, the YOLO method has multiple advantages. Algorithms such as R-CNN and Fast R-CNN do not look at the image as a whole; YOLO, in contrast, processes the entire image in a single pass, using a convolutional network to predict bounding boxes and their class probabilities, and therefore detects objects faster than the alternative algorithms.
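
To make the single-pass idea concrete, the toy sketch below maps a whole image to an S x S grid in one forward pass, with each cell predicting B boxes (x, y, w, h, confidence) and C class probabilities. All layer sizes here are toy assumptions; real YOLO backbones are far deeper.

```python
import torch
import torch.nn as nn

S, B, C = 7, 2, 20  # grid size, boxes per cell, classes (toy values)

# One network, one forward pass over the full image: every grid cell's
# predictions come out together, which is what makes YOLO fast.
head = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(S),          # force an S x S grid
    nn.Conv2d(32, B * 5 + C, 1),      # per-cell box + class predictions
)

image = torch.randn(1, 3, 448, 448)
pred = head(image)   # (1, B*5+C, S, S): all cells predicted at once
print(pred.shape)    # torch.Size([1, 30, 7, 7])
```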


2020, Vol. 34 (07), pp. 12677-12684
Author(s):  
Jingru Yi ◽  
Hui Tang ◽  
Pengxiang Wu ◽  
Bo Liu ◽  
Daniel J. Hoeppner ◽  
...  

Instance segmentation of biological images is essential for studying object behaviors and properties. Challenges such as clustering, occlusion, and adhesion of the objects make instance segmentation a non-trivial task. Current box-free instance segmentation methods typically rely on local pixel-level information; due to the lack of a global object view, these methods are prone to over- or under-segmentation. In contrast, box-based instance segmentation methods incorporate object detection into the segmentation and perform better at identifying individual instances. In this paper, we propose a new box-based instance segmentation method. Mainly, we locate object bounding boxes from their center points. The object features are subsequently reused in the segmentation branch as a guide to separate clustered instances within an RoI patch. Along with instance normalization, the model is able to recover the target object distribution and suppress the distribution of neighboring attached objects. Consequently, the proposed model excels at segmenting clustered objects while retaining target object details. The proposed method achieves state-of-the-art performance on three biological datasets: cell nuclei, plant phenotyping, and neural cells.
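
As a rough sketch of locating boxes from center points (the first stage described above), the snippet below keeps heatmap peaks that are 3x3 local maxima and turns the top-k peaks, together with predicted widths and heights, into boxes. This illustrates the general center-point decoding idea, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def decode_centers(heatmap, wh, k=100):
    """heatmap: (1, 1, H, W) center scores; wh: (1, 2, H, W) box width/height."""
    # Keep only pixels that are their own 3x3 local maximum (peak picking).
    peaks = (heatmap == F.max_pool2d(heatmap, 3, stride=1, padding=1)).float()
    scores = (heatmap * peaks).flatten()
    top_scores, idx = scores.topk(k)
    w = heatmap.shape[-1]
    ys = torch.div(idx, w, rounding_mode="floor").float()
    xs = (idx % w).float()
    bw = wh[0, 0].flatten()[idx]
    bh = wh[0, 1].flatten()[idx]
    # Center point plus predicted size -> [x1, y1, x2, y2] box.
    boxes = torch.stack([xs - bw / 2, ys - bh / 2,
                         xs + bw / 2, ys + bh / 2], dim=1)
    return boxes, top_scores
```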


2020
Author(s):  
Yang Liu

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] With the rapid development of deep learning in computer vision, especially deep convolutional neural networks (CNNs), significant advances have been made in recent years in object recognition and detection in images. Highly accurate detection results have been achieved for large objects, whereas detection accuracy on small objects remains low. This dissertation focuses on investigating deep learning methods for small object detection in images and proposing new methods with improved performance.

First, we conducted a comprehensive review of existing deep learning methods for small object detection, in which we summarized and categorized the major techniques and models, identified major challenges, and listed some future research directions. Existing techniques were categorized into using contextual information, combining multiple feature maps, creating sufficient positive examples, and balancing foreground and background examples. Methods developed in four related areas (generic object detection, face detection, object detection in aerial imagery, and segmentation) were summarized and compared. In addition, the performance of several leading deep learning methods for small object detection, including YOLOv3, Faster R-CNN, and SSD, was evaluated on three large benchmark image datasets of small objects. Experimental results showed that Faster R-CNN performed the best, while YOLOv3 was a close second. Furthermore, a new deep learning method, called Retina-context Net, was proposed and outperformed state-of-the-art one-stage deep learning models, including SSD, YOLOv3, and RetinaNet, on the COCO and SUN benchmark datasets.

Secondly, we created a new dataset for bird detection, called Little Birds in Aerial Imagery (LBAI), from real-life aerial imagery. LBAI contains birds with sizes ranging from 10 by 10 pixels to 40 by 40 pixels. We adapted and applied several state-of-the-art deep learning models to LBAI, including object detection models such as YOLOv2, SSH, and Tiny Face, and instance segmentation models such as U-Net and Mask R-CNN. Our empirical results illustrated the strengths and weaknesses of these methods, showing that SSH performed the best for easy cases, whereas Tiny Face performed the best for hard cases with cluttered backgrounds. Among the instance segmentation methods, U-Net achieved slightly better performance than Mask R-CNN.

Thirdly, we proposed a new graph neural network-based object detection algorithm, called GODM, to take the spatial information of candidate objects into consideration in small object detection. Instead of detecting small objects independently, as existing deep learning methods do, GODM treats the candidate bounding boxes generated by existing object detectors as nodes and creates edges based on the spatial or semantic relationships between the candidate bounding boxes. GODM contains four major components: node feature generation, graph generation, node class labelling, and a graph convolutional neural network model. Several graph generation methods were proposed. Experimental results on the LBAI dataset show that GODM outperformed the existing state-of-the-art object detector Faster R-CNN significantly, by up to 12% in accuracy.

Finally, we proposed a new computer vision-based grass analysis method using machine learning. To deal with varying lighting conditions, a two-stage segmentation strategy is proposed for grass coverage computation based on a blackboard background. On a real-world dataset we collected from natural environments, the proposed method was robust to varying environments, lighting, and colors. For grass detection and coverage computation, the error rate was just 3%.
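
To make the graph construction idea behind GODM concrete, here is a minimal sketch in which candidate boxes become nodes, edges come from pairwise IoU, and one mean-aggregation graph convolution mixes each node's features with its neighbors'. The feature size, the IoU edge rule, and the single-layer design are illustrative assumptions rather than GODM's actual components.

```python
import torch
import torch.nn as nn

class BoxGraphLayer(nn.Module):
    """One mean-aggregation graph convolution over candidate-box nodes."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.linear = nn.Linear(feat_dim, feat_dim)

    def forward(self, node_feats, adj):
        # node_feats: (n, feat_dim); adj: (n, n) 0/1 adjacency with self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = (adj @ node_feats) / deg   # mean over each node's neighborhood
        return torch.relu(self.linear(agg))

def box_adjacency(boxes, thr=0.3):
    """(n, 4) [x1, y1, x2, y2] boxes -> adjacency from pairwise IoU, with self-loops."""
    n = boxes.shape[0]
    x1 = torch.max(boxes[:, None, 0], boxes[None, :, 0])
    y1 = torch.max(boxes[:, None, 1], boxes[None, :, 1])
    x2 = torch.min(boxes[:, None, 2], boxes[None, :, 2])
    y2 = torch.min(boxes[:, None, 3], boxes[None, :, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    iou = inter / (area[:, None] + area[None, :] - inter + 1e-9)
    return ((iou > thr) | torch.eye(n, dtype=torch.bool)).float()
```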

