A Fast and Accurate Few-Shot Detector for Objects with Fewer Pixels in Drone Image

Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 783
Author(s):  
Yuxuan Gao ◽  
Runmin Hou ◽  
Qiang Gao ◽  
Yuanlong Hou

Unmanned aerial vehicles (UAVs) play an important role in modern warfare, and object detection performance shapes the development of intelligent drone applications. At present, the target categories of UAV detection tasks are diversified, but the lack of training samples for novel categories degrades performance on such tasks. At the same time, many state-of-the-art detectors are poorly suited to drone images because of the unusual perspective and the large number of small targets. In this paper, we design a fast few-shot detector for drone targets. It adopts the anchor-free design of fully convolutional one-stage object detection (FCOS), which yields a more reasonable definition of positive and negative samples and faster speed, and introduces a Siamese framework with a more discriminative target model and an attention mechanism to integrate similarity measures, enabling our model to match objects of the same category and to distinguish objects of different classes from the background. We propose a matching score map to exploit the similarity information in the attention feature map. Finally, soft-NMS generates the predicted detection bounding boxes for the support-category objects. We construct the DAN dataset as a combination of DOTA and NWPU VHR-10. Compared with many state-of-the-art methods on the DAN dataset, our model outperforms them on few-shot detection tasks for drone images.
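The final soft-NMS step mentioned above can be sketched as follows. This is a minimal pure-Python version of the Gaussian-decay variant of soft-NMS; the `(x1, y1, x2, y2)` box format and the `sigma` and `score_thresh` values are illustrative assumptions, not taken from the paper:

```python
import math

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: instead of discarding boxes that overlap a
    kept box, decay their scores by exp(-IoU^2 / sigma)."""
    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        # take the highest-scoring remaining box
        best = max(range(len(scores)), key=lambda i: scores[i])
        b, s = boxes.pop(best), scores.pop(best)
        if s < score_thresh:
            break
        keep.append((b, s))
        # decay the scores of remaining boxes by overlap with the kept box
        scores = [sc * math.exp(-iou(b, bb) ** 2 / sigma)
                  for bb, sc in zip(boxes, scores)]
    return keep
```

Unlike hard NMS, heavily overlapping boxes survive with reduced scores, which helps when same-category objects are densely packed, as is common in drone imagery.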

2021 ◽  
Vol 13 (18) ◽  
pp. 3608
Author(s):  
Huijie Zhang ◽  
Li An ◽  
Vena W. Chu ◽  
Douglas A. Stow ◽  
Xiaobai Liu ◽  
...  

Detecting small objects (e.g., manhole covers, license plates, and roadside milestones) in urban images is a long-standing challenge, mainly due to the small scale of the objects and background clutter. Although convolutional neural network (CNN)-based methods have made significant progress and achieved impressive results in generic object detection, the problem of small object detection remains unsolved. To address this challenge, in this study we developed an end-to-end network architecture with three significant characteristics compared to previous works. First, we designed a backbone network module, namely the Reduced Downsampling Network (RD-Net), to extract informative feature representations with high spatial resolution and preserve local information for small objects. Second, we introduced an Adjustable Sample Selection (ADSS) module, which frees the Intersection-over-Union (IoU) threshold hyperparameters and defines positive and negative training samples based on statistical characteristics between generated anchors and ground reference bounding boxes. Third, we incorporated the generalized Intersection-over-Union (GIoU) loss for bounding box regression, which efficiently bridges the gap between distance-based optimization losses and area-based evaluation metrics. We demonstrated the effectiveness of our method through extensive experiments on the public Urban Element Detection (UED) dataset acquired by Mobile Mapping Systems (MMS). The Average Precision (AP) of the proposed method was 81.71%, an improvement of 1.2% over the popular detection framework Faster R-CNN.
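The GIoU loss referred to above extends IoU with a penalty based on the smallest box enclosing both the prediction and the target, so the loss stays informative even when the boxes do not overlap. A minimal sketch for axis-aligned boxes (the `(x1, y1, x2, y2)` format is an assumption for illustration):

```python
def giou(a, b):
    """Generalized IoU for axis-aligned boxes (x1, y1, x2, y2).
    Equals IoU minus the fraction of the smallest enclosing box
    not covered by the union; ranges over (-1, 1]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    # smallest axis-aligned box enclosing both a and b
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    enclose = (cx2 - cx1) * (cy2 - cy1)
    return inter / union - (enclose - union) / enclose

def giou_loss(pred, target):
    """GIoU regression loss: 1 - GIoU, zero for a perfect match."""
    return 1.0 - giou(pred, target)
```

Because disjoint boxes all have IoU 0, plain IoU gives no gradient for them; the enclosing-box term makes GIoU negative for disjoint boxes, supplying a useful training signal.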


2019 ◽  
Vol 11 (24) ◽  
pp. 2930 ◽  
Author(s):  
Jinwang Wang ◽  
Jian Ding ◽  
Haowen Guo ◽  
Wensheng Cheng ◽  
Ting Pan ◽  
...  

Object detection in aerial images is a fundamental yet challenging task in the remote sensing field. As most objects in aerial images appear in arbitrary orientations, oriented bounding boxes (OBBs) offer clear advantages over traditional horizontal bounding boxes (HBBs). However, regression-based OBB detection methods often suffer from ambiguity in the definition of learning targets, which decreases detection accuracy. In this paper, we provide a comprehensive analysis of OBB representations and cast OBB regression as a pixel-level classification problem, which largely eliminates the ambiguity. The predicted masks are subsequently used to generate OBBs. To handle the large scale variations of objects in aerial images, an Inception Lateral Connection Network (ILCN) is utilized to enhance the Feature Pyramid Network (FPN). Furthermore, a Semantic Attention Network (SAN) is adopted to provide semantic features, which help distinguish the object of interest from the cluttered background effectively. Empirical studies show that the entire method is simple yet efficient. Experimental results on two widely used datasets, i.e., DOTA and HRSC2016, demonstrate that the proposed method outperforms state-of-the-art methods.
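The mask-to-OBB step can be illustrated by fitting an oriented box to the foreground pixel coordinates. In practice a library routine such as `cv2.minAreaRect` is the usual choice; the abstract does not specify the exact procedure, so the dependency-free PCA-based sketch below is only an illustrative approximation that works well for roughly rectangular objects:

```python
import math

def mask_to_obb(points):
    """Fit an oriented box to 2D foreground pixel coordinates via PCA.
    Returns ((cx, cy), (width, height), angle_in_radians). Approximates
    the minimum-area rectangle for roughly rectangular masks."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # 2x2 covariance of the centered coordinates
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # principal-axis angle (closed form for a 2x2 symmetric matrix)
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    c, s = math.cos(theta), math.sin(theta)
    # project onto the principal axes and take the extents
    us = [(p[0] - mx) * c + (p[1] - my) * s for p in points]
    vs = [-(p[0] - mx) * s + (p[1] - my) * c for p in points]
    return (mx, my), (max(us) - min(us), max(vs) - min(vs)), theta
```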


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing an RGB image and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representations of LiDAR’s point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that detection accuracy with integrated PCD BEV representations is superior to that obtained with an RGB camera alone. In addition, robustness is improved: detection accuracy is significantly enhanced even when the target objects are partially occluded when viewed from the front, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
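The fusion step described above, pooling the parallel RGB and BEV detections and resolving duplicates with non-maximum suppression, can be sketched as follows. This is a minimal standard-NMS version; the `(box, score)` input format and the IoU threshold are illustrative assumptions:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def fuse_detections(dets_rgb, dets_bev, iou_thresh=0.5):
    """Merge two detectors' (box, score) outputs with standard NMS:
    pool everything, then greedily keep the best-scoring boxes that
    do not overlap an already-kept box above the threshold."""
    pool = sorted(dets_rgb + dets_bev, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in pool:
        if all(iou(box, kb) < iou_thresh for kb, _ in kept):
            kept.append((box, score))
    return kept
```

When the two branches fire on the same object (both modalities see it), the duplicate is suppressed; when only one branch fires (e.g., the object is occluded in the RGB view but visible in the LiDAR height map), its detection survives, which is the source of the robustness gain.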


Author(s):  
Runze Liu ◽  
Guangwei Yan ◽  
Hui He ◽  
Yubin An ◽  
Ting Wang ◽  
...  

Background: Power line inspection is essential to ensure the safe and stable operation of the power system, and object detection for tower equipment can significantly improve inspection efficiency. However, because small targets have low resolution and limited features, their detection accuracy is hard to improve. Objective: This study aimed to improve the resolution of tiny targets while making their texture and detailed features prominent enough to be perceived by the detection model. Methods: In this paper, we propose an algorithm that employs generative adversarial networks to improve the detection accuracy of small objects. First, the original image is converted into a super-resolution one by a super-resolution reconstruction network (SRGAN). Then the object detection framework Faster R-CNN is utilized to detect objects in the super-resolution images. Results: Experimental results on two small object recognition datasets show that the proposed model is robust. In particular, it can detect targets missed by Faster R-CNN, indicating that SRGAN can effectively enhance the detailed information of small targets by improving the resolution. Conclusion: We found that higher-resolution data yield more detailed information about small targets, which helps the detection algorithm achieve higher accuracy. The small object detection model based on generative adversarial networks proposed in this paper is feasible and more efficient: compared with Faster R-CNN, it performs better on small object detection.
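One practical detail of the super-resolve-then-detect pipeline above is that boxes predicted on the super-resolved image live in upscaled coordinates and must be mapped back to the original frame. A minimal sketch, where `super_resolve` and `detect` are stand-ins for SRGAN and Faster R-CNN, and the 4x factor (SRGAN's usual upscale) is an assumption:

```python
def rescale_boxes(boxes, upscale=4):
    """Map (x1, y1, x2, y2) boxes detected on a super-resolved image
    back to the original image's coordinate frame."""
    return [tuple(v / upscale for v in box) for box in boxes]

def detect_with_sr(image, super_resolve, detect, upscale=4):
    """Super-resolve, detect, then rescale boxes to the input frame.
    `super_resolve` and `detect` are hypothetical callables standing
    in for the SRGAN generator and the Faster R-CNN detector."""
    sr_image = super_resolve(image)
    boxes = detect(sr_image)
    return rescale_boxes(boxes, upscale)
```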


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Shuangjiang Du ◽  
Baofu Zhang ◽  
Pin Zhang ◽  
Peng Xiang ◽  
Hong Xue

Infrared target detection is a popular yet challenging application of object detection. This paper proposes focus and attention mechanism-based YOLO (FA-YOLO), an improved method for detecting infrared occluded vehicles against the complex backgrounds of remote sensing images. First, we use a GAN to create infrared images from visible-light datasets, producing sufficient training data, and apply transfer learning. Then, to mitigate the impact of useless and complex background information, we propose a negative sample focusing mechanism that concentrates training on confusing negative samples, suppressing false positives and increasing detection precision. Finally, to enhance the features of small infrared targets, we add a dilated convolutional block attention module (dilated CBAM) to the CSPDarknet53 backbone of YOLOv4. To verify the superiority of our model, we carefully selected 318 infrared occluded vehicle images from the VIVID-infrared dataset for testing. Detection accuracy (mAP) improves from 79.24% to 92.95%, and the F1 score improves from 77.92% to 88.13%, demonstrating a significant improvement in detecting small occluded vehicles in infrared imagery.
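For orientation, the channel-attention half of CBAM reweights feature channels using average- and max-pooled descriptors passed through a shared two-layer MLP. The dependency-free sketch below shows only that channel branch (the dilated spatial branch is omitted); the tiny dimensions and the feature map represented as nested lists are illustrative assumptions:

```python
import math

def channel_attention(fmap, w1, w2):
    """CBAM-style channel attention. `fmap` is a list of C channels,
    each an HxW list of lists; `w1` (C x C/r) and `w2` (C/r x C) are
    the shared MLP weights. Returns the channel-reweighted map."""
    def mlp(vec):
        # ReLU hidden layer, then linear output layer
        hidden = [max(0.0, sum(v * w1[i][j] for i, v in enumerate(vec)))
                  for j in range(len(w1[0]))]
        return [sum(h * w2[i][j] for i, h in enumerate(hidden))
                for j in range(len(w2[0]))]

    avg = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in fmap]
    mx = [max(max(row) for row in ch) for ch in fmap]
    # sigmoid gate over the summed MLP outputs, one weight per channel
    gate = [1.0 / (1.0 + math.exp(-(a + m))) for a, m in zip(mlp(avg), mlp(mx))]
    return [[[v * g for v in row] for row in ch] for ch, g in zip(fmap, gate)]
```

Channels whose pooled responses excite the MLP are kept near full strength while uninformative channels are damped, which is how the module emphasizes weak small-target features over background clutter.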


2021 ◽  
Vol 13 (22) ◽  
pp. 4517
Author(s):  
Falin Wu ◽  
Jiaqi He ◽  
Guopeng Zhou ◽  
Haolun Li ◽  
Yushuang Liu ◽  
...  

Object detection in remote sensing images plays an important role in both military and civilian remote sensing applications. Objects in remote sensing images differ from those in natural images: they exhibit scale diversity, arbitrary orientation, and dense arrangement, which makes object detection difficult. For slender, oblique, and densely arranged objects with large aspect ratios, using an oriented bounding box helps avoid mistakenly discarding correct detection bounding boxes. The classic rotational region convolutional neural network (R2CNN) has advantages for text detection, but it performs poorly on slender, arbitrarily oriented objects in remote sensing images, and its fault tolerance is low. To solve this problem, this paper proposes an improved R2CNN based on a double detection head structure and a three-point regression method, namely TPR-R2CNN. The proposed network modifies the original R2CNN structure by applying a double fully connected (2-fc) detection head and classification fusion: one detection head handles classification and horizontal bounding box regression, and the other handles classification and oriented bounding box regression. The three-point regression (TPR) method is proposed for oriented bounding box regression; it determines the position of the oriented bounding box by regressing the coordinates of the center point and the first two vertices. The proposed network was validated on the DOTA-v1.5 and HRSC2016 datasets, achieving mean average precision (mAP) improvements of 3.90% and 15.27%, respectively, over feature pyramid network (FPN) baselines with a ResNet-50 backbone.
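The three points above fully determine the oriented box: since the center is the midpoint of both diagonals, the remaining two vertices are the point reflections of the regressed ones through the center. A minimal sketch of that decoding step (the tuple representation is an illustrative assumption):

```python
def obb_from_three_points(center, v1, v2):
    """Recover all four vertices of an oriented bounding box from its
    center and its first two vertices: the other two vertices are the
    point reflections of v1 and v2 through the center."""
    cx, cy = center
    v3 = (2 * cx - v1[0], 2 * cy - v1[1])  # opposite of v1
    v4 = (2 * cx - v2[0], 2 * cy - v2[1])  # opposite of v2
    return [v1, v2, v3, v4]
```

Regressing points rather than an angle sidesteps the angle-periodicity ambiguity that plagues angle-based OBB parameterizations.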


2021 ◽  
Vol 23 (06) ◽  
pp. 47-57
Author(s):  
Aditya Kulkarni ◽  
Manali Munot ◽  
Sai Salunkhe ◽  
Shubham Mhaske ◽  
...  

With developments in technology ranging from serial to parallel computing, GPUs, AI, and deep learning models, a series of tools to process complex images has been developed. The main focus of this research is to compare various algorithms (pre-trained models) and their contributions to processing complex images in terms of performance, accuracy, time, and limitations. The pre-trained models we use are CNN, R-CNN, R-FCN, and YOLO. These models are Python-based and use libraries such as TensorFlow and OpenCV, along with free image databases (Microsoft COCO and PASCAL VOC 2007/2012). They aim not only at object detection but also at drawing bounding boxes around appropriate locations. Thus, this review provides a better view of these models and their performance, and a good idea of which models are suited to various situations.


Author(s):  
J. T. Velikovsky

A universal problem in the disciplines of communication, creativity, philosophy, biology, psychology, sociology, anthropology, archaeology, history, linguistics, information science, cultural studies, literature, media, and other domains of knowledge in both the arts and sciences has been the definition of ‘culture’ (see Kroeber & Kluckhohn, 1952; Baldwin et al., 2006), including the specification of ‘the unit of culture’ and the mechanisms of culture. This chapter proposes a theory of the unit of culture, or the ‘meme’ (Dawkins, 1976; Dennett, 1995; Blackmore, 1999), a unit which is also the narreme (Barthes, 1966), or ‘unit of story’ or ‘unit of narrative’. The holon/parton theory of the unit of culture (Velikovsky, 2014) is a consilient (Wilson, 1998) synthesis of Koestler (1964, 1967, 1978) and Feynman (1975, 2005), and also of the Evolutionary Systems Theory model of creativity (Csikszentmihalyi, 1988-2014; Simonton, 1984-2014). This theory of the unit of culture potentially has applications across all creative cultural domains and disciplines in the sciences, arts, and communication media.


Author(s):  
P. Ravi Shankar

Medical Humanities (MH) provide a contrasting perspective, that of the arts, to the ‘science’ of medicine. A definition of MH agreed upon by all workers is lacking. There are a number of advantages to teaching MH to medical students. MH programs are common in medical schools in developed nations; in developing nations they are less common, and in this chapter the author describes programs in Brazil, Turkey, Argentina, and Nepal. The relationship between medical ethics and MH is a subject of debate, and medical ethics teaching appears to be more common than MH teaching in medical schools. MH programs are not common in Asia, and there are many challenges to MH teaching. Patient and illness narratives are becoming more common in medical education. The author has conducted MH programs in two Nepalese medical schools and shares his experiences.

