Improving Evolution-Constructed features using speciation for general object detection

Author(s):  
Kirt Lillywhite ◽  
Dah-Jye Lee ◽  
Beau Tippetts


2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Kaifeng Li ◽  
Bin Wang

With the rapid development of deep learning and the wide usage of Unmanned Aerial Vehicles (UAVs), CNN-based algorithms for vehicle detection in aerial images have been widely studied in the past several years. As a downstream task of general object detection, vehicle detection in aerial images differs from general object detection in ground-view images in several ways, e.g., larger image areas, smaller target sizes, and more complex backgrounds. In this paper, to improve the performance of this task, a Dense Attentional Residual Network (DAR-Net) is proposed. The proposed network employs a novel dense waterfall residual block (DW res-block) to effectively preserve spatial information and extract high-level semantic information at the same time. A multiscale receptive field attention (MRFA) module is also designed to select informative features from the feature maps and enhance the ability of multiscale perception. Based on the DW res-block and MRFA module, to protect the spatial information, the proposed framework adopts a new backbone that downsamples the feature map only 3 times; i.e., the total downsampling ratio of the proposed backbone is 8. These designs alleviate the degradation problem, improve the information flow, and strengthen feature reuse. In addition, deep-projection units are used to reduce the impact of information loss caused by downsampling operations, and identity mapping is applied to each stage of the proposed backbone to further improve the information flow. The proposed DAR-Net is evaluated on the VEDAI, UCAS-AOD, and DOTA datasets. The experimental results demonstrate that the proposed framework outperforms other state-of-the-art algorithms.
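The backbone's stride arithmetic from the abstract can be checked directly: three stride-2 downsampling stages give a total ratio of 2^3 = 8, so small vehicles retain far more feature-map resolution than under a conventional ratio-32 backbone. A minimal sketch (the image sizes below are illustrative, not taken from the paper):

```python
def feature_map_size(input_size: int, num_downsamples: int, stride: int = 2) -> int:
    """Spatial side length of the feature map after repeated stride-2 downsampling."""
    ratio = stride ** num_downsamples  # total downsampling ratio, e.g. 2**3 = 8
    return input_size // ratio

# A 512x512 aerial image keeps a 64x64 feature map under a ratio-8 backbone,
# versus 16x16 for a backbone that downsamples 5 times (ratio 32).
print(feature_map_size(512, num_downsamples=3))  # -> 64
print(feature_map_size(512, num_downsamples=5))  # -> 16
```

At ratio 8 a 10-pixel vehicle still spans more than one feature-map cell, which is the motivation the abstract gives for limiting downsampling.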


2021 ◽  
Author(s):  
Junying Huang ◽  
Fan Chen ◽  
Liang Lin ◽  
Dongyu Zhang

Few-shot object detection, which aims to recognize and localize objects of novel categories from only a few reference samples, is a quite challenging task. Previous works often depend on a fine-tuning process to transfer their model to the novel categories and rarely consider the defects of fine-tuning, resulting in many drawbacks. For example, these methods are far from satisfactory in low-shot or episode-based scenarios, since the fine-tuning process in object detection requires much time and high-shot support data. To this end, this paper proposes a plug-and-play few-shot object detection (PnP-FSOD) framework that can accurately and directly detect objects of novel categories without fine-tuning. To accomplish this objective, the PnP-FSOD framework contains two parallel techniques that address the core challenges in few-shot learning, i.e., the across-category task and few-annotation support. Concretely, we first propose two simple but effective meta strategies for the box classifier and the RPN module to enable across-category object detection without fine-tuning. Then, we introduce two explicit inferences into the localization process to reduce its dependence on annotated data: an explicit localization score and semi-explicit box regression. In addition to the PnP-FSOD framework, we propose a novel one-step tuning method that avoids the defects of fine-tuning. It is noteworthy that the proposed techniques and tuning method are based on a general object detector without other prior methods, so they are easily compatible with existing FSOD methods. Extensive experiments show that the PnP-FSOD framework achieves state-of-the-art few-shot object detection performance without any tuning. After applying the one-step tuning method, it further shows a significant lead in efficiency, precision, and recall under varied few-shot evaluation protocols.
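The abstract does not spell out its meta strategies, but a common fine-tuning-free route in few-shot detection is prototype-based classification: embed the few support boxes of each novel class, average them into a class prototype, and score query boxes by cosine similarity. A minimal pure-Python sketch of that general idea (the features and class names below are invented for illustration, not from the paper):

```python
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def prototype(support_feats):
    """Mean embedding of the few support boxes for one novel class."""
    d = len(support_feats[0])
    return [sum(f[i] for f in support_feats) / len(support_feats) for i in range(d)]

def classify(query_feat, prototypes):
    """Score a query box against every class prototype by cosine similarity."""
    q = l2_normalize(query_feat)
    scores = {c: sum(a * b for a, b in zip(q, l2_normalize(p)))
              for c, p in prototypes.items()}
    return max(scores, key=scores.get), scores

# Two invented novel classes, each with two 2-D support embeddings.
protos = {
    "cat": prototype([[1.0, 0.1], [0.9, 0.0]]),
    "dog": prototype([[0.1, 1.0], [0.0, 0.9]]),
}
label, _ = classify([0.95, 0.05], protos)
print(label)  # -> cat
```

Because the prototypes are computed, not trained, a new category can be added at inference time, which is the "plug-and-play" property the abstract targets.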


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Di Tian ◽  
Yi Han ◽  
Biyao Wang ◽  
Tian Guan ◽  
Wei Wei

Pedestrian detection is a specific application of object detection. Compared with general object detection, it shows both similarities and unique characteristics, and it has important application value in the fields of intelligent driving and security monitoring. In recent years, with the rapid development of deep learning, pedestrian detection technology has made great progress. However, a huge gap still exists between it and human perception, many problems remain open, and there is considerable room for further research. For applications in intelligent driving, real-time performance must be ensured, and the model must be made lightweight without sacrificing detection accuracy. This paper first briefly describes the development of pedestrian detection and then summarizes the research results of pedestrian detection technology in the deep learning stage. Subsequently, by surveying pedestrian detection datasets and evaluation criteria, the core issues in the current development of pedestrian detection are analyzed. Finally, possible future directions of pedestrian detection technology are discussed at the end of the paper.


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5284 ◽  
Author(s):  
Heng Zhang ◽  
Jiayu Wu ◽  
Yanli Liu ◽  
Jia Yu

In recent years, research on optical remote sensing images has received more and more attention. Object detection, one of the most challenging tasks in remote sensing, has been remarkably advanced by convolutional neural network (CNN)-based methods such as You Only Look Once (YOLO) and Faster R-CNN. However, due to the complexity of the backgrounds and the distinctive object distribution, directly applying these general object detection methods to remote sensing object detection usually yields poor performance. To tackle this problem, a highly efficient and robust framework based on YOLO is proposed. We devise and integrate VaryBlock into the architecture, which effectively offsets some of the information loss caused by downsampling. In addition, several techniques are utilized to improve performance and avoid overfitting. Experimental results show that our proposed method improves the mean average precision by a large margin on the NWPU VHR-10 dataset.


2012 ◽  
Vol 2012 ◽  
pp. 1-19 ◽  
Author(s):  
G. Casalino ◽  
N. Del Buono ◽  
M. Minervini

We study the problem of detecting and localizing objects in still, gray-scale images using the part-based representation provided by nonnegative matrix factorizations. Nonnegative matrix factorization is an emerging subspace method that is able to extract interpretable parts from a set of template image objects and then use them additively to describe individual objects. In this paper, we present a prototype system based on several nonnegative factorization algorithms, which differ in the additional properties imposed on the nonnegative representation of the data, in order to investigate whether any additional constraint produces better results in general object detection via nonnegative matrix factorizations.
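The part-based representation referred to here factors a nonnegative data matrix V into nonnegative factors W and H so that V ≈ WH, with the factors acting as additive parts. A small pure-Python sketch of the standard Lee-Seung multiplicative updates for the Frobenius objective (the baseline algorithm, not the specific constrained variants the paper compares):

```python
import random

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=300, eps=1e-9, seed=0):
    """Factor a nonnegative V (m x n) into nonnegative W (m x r) and H (r x n)
    using Lee-Seung multiplicative updates for the Frobenius loss."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)] for i in range(r)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)] for i in range(m)]
    return W, H

def frob_err(V, W, H):
    """Squared Frobenius reconstruction error of V ~ W H."""
    R = matmul(W, H)
    return sum((V[i][j] - R[i][j]) ** 2 for i in range(len(V)) for j in range(len(V[0])))

# Toy "template images": an exactly rank-2 nonnegative matrix, recovered additively.
V = matmul([[1, 0], [0, 1], [1, 1], [2, 1]], [[1, 2, 0], [0, 1, 3]])
W, H = nmf(V, r=2)
```

The multiplicative form keeps every entry of W and H nonnegative by construction, which is what makes the learned parts interpretable as additive components.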


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Zeqing Zhang ◽  
Weiwei Lin ◽  
Yuqiang Zheng

Focusing on DOTA, the multidirectional object dataset of vehicles in aerial view, CMDTD is proposed. This paper first analyzes why general object detection algorithms are difficult to apply to multidirectional object detection. On this basis, the detection principle of CMDTD, including its backbone network and its multidirectional multi-information detection end module, is studied. In addition, in view of the complexity of the scenes faced in aerial views of vehicles, a unique data expansion method is proposed. Finally, the CMDTD algorithm is evaluated on three datasets, demonstrating that the cascaded multidirectional object detection algorithm is highly effective and superior to other methods.
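Multidirectional (oriented) detection on DOTA typically parameterizes a box as center, size, and rotation angle rather than an axis-aligned rectangle. A minimal sketch of decoding that parameterization into corner points (a generic oriented-box formulation, not necessarily CMDTD's exact encoding):

```python
import math

def rotated_box_corners(cx, cy, w, h, angle_deg):
    """Corner points of an oriented box given as (cx, cy, w, h, theta),
    the parameterization commonly used for multidirectional detection."""
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    # Corner offsets of the unrotated box, rotated and shifted to the center.
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [(cx + dx * c - dy * s, cy + dx * s + dy * c) for dx, dy in half]

# An axis-aligned 4x2 box at the origin, then the same box rotated by 90 degrees.
print(rotated_box_corners(0, 0, 4, 2, 0))   # -> [(-2.0, -1.0), (2.0, -1.0), (2.0, 1.0), (-2.0, 1.0)]
print(rotated_box_corners(0, 0, 4, 2, 90))
```

The extra angle parameter is what separates "multidirectional" heads from general detectors, and is one reason axis-aligned methods transfer poorly to this task.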


Electronics ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 927
Author(s):  
Zhiyu Wang ◽  
Li Wang ◽  
Liang Xiao ◽  
Bin Dai

Three-dimensional object detection based on the LiDAR point cloud plays an important role in autonomous driving. The point cloud distribution of an object varies greatly at different distances, observation angles, and occlusion levels. Besides, different types of LiDARs have different settings of projection angles, thus producing entirely different point cloud distributions. Models pre-trained on one annotated dataset may therefore degrade on other datasets. In this paper, we propose a method for object detection using an unsupervised adaptive network, which does not require additional annotation data for the target domain. Our object detection adaptive network consists of a general object detection network, a global feature adaptation network, and a special subcategory instance adaptation network. We divide the source domain data into different subcategories and use a multi-label discriminator to assign labels dynamically to the target domain data. We evaluated our approach on the KITTI object benchmark and show that the proposed unsupervised adaptive method achieves a remarkable improvement in adaptation capability.


2020 ◽  
Vol 17 (1) ◽  
pp. 439-444 ◽  
Author(s):  
Katamneni Vinaya Sree ◽  
G. Jeyakumar

Object detection is concerned with identifying the existence of a required object in a given image. This is quite natural and effortless for humans; however, making a machine detect an object in an image is tedious. To make machines recognize objects, feature descriptor algorithms have to be implemented. General object detection approaches use a collection of local and global descriptors to represent an image. Difficulties arise in this process when the same object varies across image scenes in lighting, position, rotation, mirroring, occlusion, scaling, etc. To overcome these difficulties, we need a combination of features that detects the object in the image scene. However, a large number of descriptors could be used, so finding the required set of feature descriptors for object detection is a crucial task. The question that arises is how to select the optimal set of descriptors to achieve optimal accuracy. The answer is an optimization algorithm, which can be employed to select the combination of descriptors with maximum detection accuracy. This paper proposes an Evolutionary Computation (EC) based approach using the Differential Evolution (DE) algorithm to find the combination of feature descriptors that achieves optimal object detection accuracy. The proposed approach is implemented, and its superiority is verified on four different images; the results obtained are presented in this paper.
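The descriptor-selection idea can be sketched with a standard DE/rand/1/bin loop over continuous vectors that are thresholded into include/exclude masks. Everything below is illustrative: the descriptor names, the "gain" numbers, and the surrogate accuracy function are invented stand-ins for a real detection pipeline:

```python
import random

# Hypothetical descriptor pool; names and gains are toy numbers, not measurements.
DESCRIPTORS = ["SIFT", "SURF", "HOG", "LBP", "ORB", "GLCM"]
GAIN = [0.30, 0.28, 0.25, 0.15, 0.22, 0.10]

def detection_accuracy(mask):
    """Toy surrogate fitness: diminishing returns per extra descriptor,
    minus a fixed cost per descriptor, so a mid-sized subset is optimal."""
    chosen = sorted((g for g, m in zip(GAIN, mask) if m), reverse=True)
    return sum(g * (0.5 ** i) for i, g in enumerate(chosen)) - 0.05 * len(chosen)

def decode(vec):
    """Threshold a continuous DE vector into an include/exclude mask."""
    return [v > 0.5 for v in vec]

def de_select(pop_size=20, gens=60, F=0.7, CR=0.9, seed=1):
    """DE/rand/1/bin over [0, 1]^d vectors decoded into descriptor subsets."""
    rng = random.Random(seed)
    d = len(GAIN)
    pop = [[rng.random() for _ in range(d)] for _ in range(pop_size)]
    fit = [detection_accuracy(decode(x)) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = [pop[i][k] if rng.random() > CR else
                     min(1.0, max(0.0, pop[a][k] + F * (pop[b][k] - pop[c][k])))
                     for k in range(d)]
            f = detection_accuracy(decode(trial))
            if f >= fit[i]:  # greedy selection keeps the better vector
                pop[i], fit[i] = trial, f
    best = max(range(pop_size), key=fit.__getitem__)
    return [n for n, m in zip(DESCRIPTORS, decode(pop[best])) if m], fit[best]

subset, acc = de_select()
print(subset, round(acc, 4))
```

In a real system the surrogate would be replaced by training and evaluating a detector on the decoded descriptor subset; the DE loop itself is unchanged.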

