Revisiting knowledge distillation for light-weight visual object detection

2021 ◽  
Vol 43 (13) ◽  
pp. 2888-2898
Author(s):  
Tianze Gao ◽  
Yunfeng Gao ◽  
Yu Li ◽  
Peiyuan Qin

An essential element for intelligent perception in mechatronic and robotic systems (M&RS) is the visual object detection algorithm. With the ever-increasing advance of artificial neural networks (ANN), researchers have proposed numerous ANN-based visual object detection methods that have proven to be effective. However, networks with cumbersome structures do not befit the real-time scenarios in M&RS, necessitating model compression techniques. In this paper, a novel approach to training light-weight visual object detection networks is developed by revisiting knowledge distillation. Traditional knowledge distillation methods are oriented towards image classification and are not directly compatible with object detection. Therefore, a variant of knowledge distillation is developed and adapted to a state-of-the-art keypoint-based visual detection method. Two strategies, named positive sample retaining and early distribution softening, are employed to yield a natural adaptation. The mutual consistency between the teacher model and the student model is further promoted through hint-based distillation. Extensive controlled experiments show that the proposed method enhances the light-weight network’s performance by a large margin.
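The abstract does not spell out the exact form of the distillation losses, so the following is only a minimal sketch of the two standard ingredients it alludes to: a temperature-softened soft-target loss restricted to positive samples (a rough stand-in for "positive sample retaining") and a hint-based feature imitation loss. The function names, the `positive_mask`, and the `adapter` module are illustrative assumptions, not the paper's published implementation.

```python
import torch
import torch.nn.functional as F

def detection_kd_loss(student_logits, teacher_logits, positive_mask, T=4.0):
    """Temperature-softened distillation loss on per-location classification logits.

    student_logits, teacher_logits: (N, num_classes) tensors.
    positive_mask: (N,) bool tensor keeping only positive samples -- an
    illustrative stand-in for the paper's 'positive sample retaining' strategy.
    """
    s = student_logits[positive_mask]
    t = teacher_logits[positive_mask]
    # KL divergence between softened distributions, scaled by T^2 as in
    # Hinton et al.'s classic formulation.
    log_p_s = F.log_softmax(s / T, dim=-1)
    p_t = F.softmax(t / T, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def hint_loss(student_feat, teacher_feat, adapter):
    """Hint-based distillation: match adapted student features to the teacher's."""
    return F.mse_loss(adapter(student_feat), teacher_feat)
```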

Electronics ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 1235
Author(s):  
Yang Yang ◽  
Hongmin Deng

In order to make the classification and regression of single-stage detectors more accurate, an object detection algorithm named Global Context You-Only-Look-Once v3 (GC-YOLOv3) is proposed in this paper, based on You-Only-Look-Once (YOLO). Firstly, a better cascading model with learnable semantic fusion between the feature extraction network and the feature pyramid network is designed to improve detection accuracy using a global context block. Secondly, the information to be retained is screened by combining three feature maps of different scales. Finally, a global self-attention mechanism is used to highlight the useful information in the feature maps while suppressing irrelevant information. Experiments show that our GC-YOLOv3 reaches a maximum object detection mean Average Precision (mAP)@0.5 of 55.5 on the Common Objects in Context (COCO) 2017 test-dev set and that its mAP is 5.1% higher than that of the YOLOv3 algorithm on the Pascal Visual Object Classes (PASCAL VOC) 2007 test set. These experiments indicate that the proposed GC-YOLOv3 model performs strongly on the PASCAL VOC and COCO datasets.
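The abstract does not detail the internals of its global context block; as a point of reference, the sketch below follows the commonly used GCNet-style block (spatial softmax pooling into a single context vector, a bottleneck transform, then a broadcast residual). Channel sizes and the class name are assumptions.

```python
import torch
import torch.nn as nn

class GlobalContextBlock(nn.Module):
    """Minimal GCNet-style global context block (illustrative, not GC-YOLOv3's exact block)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.context_mask = nn.Conv2d(channels, 1, kernel_size=1)
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.LayerNorm([channels // reduction, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Softmax attention over all spatial positions.
        weights = torch.softmax(self.context_mask(x).view(b, 1, h * w), dim=-1)
        # Attention-weighted sum of features -> one (b, c, 1, 1) context vector.
        context = torch.bmm(x.view(b, c, h * w), weights.transpose(1, 2)).view(b, c, 1, 1)
        # Transform the context and add it back to every position.
        return x + self.transform(context)
```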


2019 ◽  
Vol 9 (9) ◽  
pp. 1829 ◽  
Author(s):  
Jie Jiang ◽  
Hui Xu ◽  
Shichang Zhang ◽  
Yujie Fang

This study proposes a multiheaded object detection algorithm referred to as MANet. The main purpose of the study is to integrate feature layers of different scales based on the attention mechanism and to enhance contextual connections. To achieve this, we first replaced the feed-forward base network of the single-shot detector with ResNet–101 (inspired by the Deconvolutional Single-Shot Detector) and then applied linear interpolation and the attention mechanism. The information of the feature layers at different scales was fused to improve the accuracy of target detection. The primary contributions of this study are (a) a fusion attention mechanism and (b) a multiheaded attention fusion method. Our final MANet detector effectively unifies the feature information among the feature layers at different scales, enabling it to detect objects of different sizes with higher precision. Using a 512 × 512 input MANet with a ResNet–101 backbone, we obtained a mean accuracy of 82.7% on the PASCAL Visual Object Classes 2007 test set. These results demonstrate that our proposed method yields better accuracy than the conventional Single-Shot Detector (SSD) and other advanced detectors.
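As a rough illustration of the kind of interpolation-plus-attention fusion the abstract describes, the sketch below bilinearly upsamples a coarse feature map to a fine map's resolution and blends the two with learned per-pixel attention weights. The module name, layer choices, and channel handling are assumptions, not MANet's published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Illustrative attention-weighted fusion of two feature layers at different scales."""

    def __init__(self, channels):
        super().__init__()
        # 1x1 conv predicts two per-pixel weights: one for each input map.
        self.attn = nn.Conv2d(2 * channels, 2, kernel_size=1)

    def forward(self, fine, coarse):
        # Linear (bilinear) interpolation brings the coarse map to the fine resolution.
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:],
                                  mode="bilinear", align_corners=False)
        weights = torch.softmax(self.attn(torch.cat([fine, coarse_up], dim=1)), dim=1)
        return weights[:, 0:1] * fine + weights[:, 1:2] * coarse_up
```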


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5284 ◽  
Author(s):  
Heng Zhang ◽  
Jiayu Wu ◽  
Yanli Liu ◽  
Jia Yu

In recent years, research on optical remote sensing images has received increasing attention. Object detection, one of the most challenging tasks in remote sensing, has been remarkably advanced by convolutional neural network (CNN)-based methods such as You Only Look Once (YOLO) and Faster R-CNN. However, due to the complexity of backgrounds and the distinctive object distribution, directly applying these general object detection methods to remote sensing object detection usually yields poor performance. To tackle this problem, a highly efficient and robust framework based on YOLO is proposed. We devise VaryBlock and integrate it into the architecture, which effectively offsets some of the information loss caused by downsampling. In addition, several techniques are utilized to improve performance and avoid overfitting. Experimental results show that our proposed method improves the mean average precision by a large margin on the NWPU VHR-10 dataset.


2015 ◽  
Vol 03 (04) ◽  
pp. 253-266 ◽  
Author(s):  
Maxime Derome ◽  
Aurelien Plyer ◽  
Martial Sanfourche ◽  
Guy Le Besnerais

This paper presents a moving object detection algorithm which operates on two consecutive pairs of stereo images. Like most motion detection methods, the proposed one is based on dense stereo matching and optical flow (OF) estimation. Noting that the main computational cost of existing methods is related to the estimation of OF, we propose to use a fast algorithm based on the Lucas–Kanade paradigm. We then derive a comprehensive uncertainty model by taking into account all the estimation errors occurring during the process. In contrast with most previous works, we rigorously model the error related to vision-based ego-motion estimation. Finally, we present a comparative study of performance on the challenging KITTI dataset which demonstrates the effectiveness of the proposed approach.
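The paper's own flow estimator is a fast dense method; purely as an illustration of the Lucas–Kanade paradigm the abstract refers to, the snippet below runs OpenCV's pyramidal sparse Lucas–Kanade tracker between two consecutive left-camera frames. File paths and parameter values are hypothetical.

```python
import cv2

# Two consecutive grayscale frames from the left camera (hypothetical file names).
prev_gray = cv2.imread("frame_t0_left.png", cv2.IMREAD_GRAYSCALE)
next_gray = cv2.imread("frame_t1_left.png", cv2.IMREAD_GRAYSCALE)

# Points to track, e.g. Shi-Tomasi corners.
pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=2000,
                               qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade optical flow estimation.
pts1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts0, None,
                                             winSize=(21, 21), maxLevel=3)

# Per-point displacement vectors for successfully tracked points.
flow = (pts1 - pts0)[status.ravel() == 1]
print("tracked points:", flow.shape[0])
```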


2014 ◽  
Vol 596 ◽  
pp. 361-364
Author(s):  
Jun Zhou

Moving object detection is an important task in many modern applications, and detecting moving objects against a complicated background is particularly difficult. After reviewing conventional detection methods such as temporal difference detection (TDD) and background subtraction detection, this paper proposes a new mathematical-morphology-based moving object detection algorithm for complicated backgrounds. The proposed algorithm applies opening and closing operations to the temporal differenced image and then compares the processed image with the background differenced image. The experimental results show that the proposed algorithm can detect moving objects effectively in complicated external environments and achieves higher detection precision than the conventional temporal difference method.
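A minimal sketch of the described pipeline is shown below: temporal differencing, morphological opening and closing to clean the mask, then combination with a background-differenced mask. File names, the threshold value, and the kernel size are illustrative assumptions.

```python
import cv2

# Hypothetical input frames and background model (grayscale).
frame_prev = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)
frame_curr = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)
background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)

# Temporal difference and background difference masks.
_, temporal_mask = cv2.threshold(cv2.absdiff(frame_curr, frame_prev), 25, 255, cv2.THRESH_BINARY)
_, background_mask = cv2.threshold(cv2.absdiff(frame_curr, background), 25, 255, cv2.THRESH_BINARY)

# Opening removes isolated noise; closing fills small holes in the moving region.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cleaned = cv2.morphologyEx(temporal_mask, cv2.MORPH_OPEN, kernel)
cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)

# Compare with the background-differenced image to confirm moving objects.
moving_objects = cv2.bitwise_and(cleaned, background_mask)
```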


2021 ◽  
Vol 2083 (4) ◽  
pp. 042028
Author(s):  
Zhihao Liang

As a common method of model compression, knowledge distillation transfers knowledge from a complex large model with strong learning ability to a small student model with weak learning ability during training, so as to improve the accuracy and performance of the small model. At present, there are many knowledge distillation methods specially designed for object detection that have achieved good results. However, almost all of them fail to address the performance degradation caused by the high noise in current detection frameworks. In this study, we propose a feature automatic weight learning method based on the Earth Mover's Distance (EMD) to address the problems of negative transfer and noise. That is, the EMD method is used to process the feature vectors to reduce the impact of negative transfer and noise as much as possible, and at the same time the weights are allocated adaptively, so that the student model learns less from poorly performing teacher models and is more inclined to learn from good teachers. The loss (EMD Loss) is redesigned, and the detection head is improved to fit our approach. We have carried out comprehensive performance tests on multiple datasets, including PASCAL, KITTI, ILSVRC, and MS-COCO, and obtained encouraging results; the method can not only be applied to one-stage and two-stage detectors but can also be combined with some other methods.
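The abstract does not define the EMD Loss itself; the following is only a loose sketch of the general idea of EMD-guided adaptive weighting, assuming per-channel 1-D Wasserstein (EMD) distances between teacher and student activations are turned into soft weights so that channels where the distributions disagree strongly (potentially noisy transfer) contribute less to feature imitation. The function name and every design choice here are assumptions, not the paper's published method.

```python
import torch
from scipy.stats import wasserstein_distance

def emd_weighted_imitation(student_feat, teacher_feat):
    """Rough sketch: EMD-based adaptive weighting of a per-channel imitation loss.

    student_feat, teacher_feat: (b, c, h, w) feature maps of the same shape.
    """
    c = student_feat.shape[1]
    s = student_feat.detach().transpose(0, 1).reshape(c, -1).cpu().numpy()
    t = teacher_feat.detach().transpose(0, 1).reshape(c, -1).cpu().numpy()
    # 1-D EMD between student and teacher activation distributions, per channel.
    emd = torch.tensor([wasserstein_distance(s[i], t[i]) for i in range(c)],
                       dtype=student_feat.dtype, device=student_feat.device)
    # Low EMD -> higher weight; channels with large disagreement are down-weighted.
    weights = torch.softmax(-emd, dim=0).view(1, c, 1, 1)
    return (weights * (student_feat - teacher_feat) ** 2).mean()
```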


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 537 ◽  
Author(s):  
Liquan Zhao ◽  
Shuaiyang Li

The ‘You Only Look Once’ v3 (YOLOv3) method is among the most widely used deep learning-based object detection methods. It uses the k-means clustering method to estimate the initial widths and heights of the predicted bounding boxes. With this method, the estimated widths and heights are sensitive to the initial cluster centers, and processing large-scale datasets is time-consuming. To address these problems, a new clustering method for estimating the initial widths and heights of the predicted bounding boxes has been developed. Firstly, it randomly selects a pair of width and height values from the ground truth boxes as one initial cluster center. Secondly, it constructs Markov chains based on the selected initial cluster center and uses the final point of every Markov chain as one of the other initial centers. In the construction of the Markov chains, the intersection-over-union method is used to compute the distance between the selected initial clusters and each candidate point, instead of the square root method. Finally, this method can continually update the cluster centers with each new set of width and height values, which are only a part of the data selected from the datasets. Our simulation results show that the new method converges faster when initializing the widths and heights of the predicted bounding boxes and that it selects more representative initial widths and heights. Our proposed method achieves better performance than the YOLOv3 method in terms of recall, mean average precision, and F1-score.
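The Markov-chain seeding itself is not reproduced here; the sketch below only illustrates the intersection-over-union distance mentioned in the abstract, i.e. 1 − IoU between two width–height pairs aligned at a common corner, which is the usual alternative to a plain Euclidean (square-root) distance when clustering anchor box shapes. The function name and example values are illustrative.

```python
import numpy as np

def iou_distance(box_wh, center_wh):
    """1 - IoU between a ground-truth (w, h) pair and a cluster center (w, h),
    with both boxes anchored at the same corner (shape similarity only)."""
    inter = np.minimum(box_wh[0], center_wh[0]) * np.minimum(box_wh[1], center_wh[1])
    union = box_wh[0] * box_wh[1] + center_wh[0] * center_wh[1] - inter
    return 1.0 - inter / union

# Example: a 50x80 ground-truth box compared with two candidate centers.
print(iou_distance((50, 80), (48, 75)))   # 0.10 -- similar shape, small distance
print(iou_distance((50, 80), (200, 40)))  # 0.80 -- dissimilar shape, large distance
```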


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3994 ◽  
Author(s):  
Ahmad Delforouzi ◽  
Bhargav Pamarthi ◽  
Marcin Grzegorzek

Object tracking in challenging videos is a hot topic in machine vision. Recently, novel training-based detectors, especially those using powerful deep learning schemes, have been proposed to detect objects in still images. However, there is still a semantic gap between object detectors and higher-level applications like object tracking in videos. This paper presents a comparative study of outstanding learning-based object detectors, namely ACF, the Region-Based Convolutional Neural Network (RCNN), Fast RCNN, Faster RCNN and You Only Look Once (YOLO), for object tracking. We use both an online and an offline training method for tracking. The online tracker trains the detectors with a synthetic set of images generated from the object of interest in the first frame; the detectors then detect the objects of interest in subsequent frames, and the detector is updated online using the objects detected in the most recent frames of the video. The offline tracker uses the detector for object detection in still images, and a Kalman-filter-based tracker then associates the objects across video frames. Our research is performed on the TLD dataset, which contains challenging situations for tracking. Source code and implementation details for the trackers are published to enable both reproduction of the results reported in this paper and re-use and further development of the trackers by other researchers. The results demonstrate that the ACF and YOLO trackers are more stable than the other trackers.
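As an illustration of the offline tracker's Kalman-filter association step, the sketch below uses a constant-velocity Kalman filter over the box centre and associates the nearest detection to the prediction. Noise covariances, the association rule, and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import cv2

# Constant-velocity Kalman filter: state (x, y, vx, vy), measurement (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
kf.errorCovPost = np.eye(4, dtype=np.float32)

def associate(detections_xy):
    """Pick the detection centre nearest to the Kalman prediction and update the filter."""
    predicted = kf.predict()[:2].ravel()
    dists = np.linalg.norm(np.asarray(detections_xy, np.float32) - predicted, axis=1)
    best = int(np.argmin(dists))
    kf.correct(np.asarray(detections_xy[best], np.float32).reshape(2, 1))
    return best
```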


2021 ◽  
Vol 13 (18) ◽  
pp. 3776
Author(s):  
Linlin Zhu ◽  
Xun Geng ◽  
Zheng Li ◽  
Chun Liu

It is of great significance to apply object detection methods to automatically detect boulders from planetary images and analyze their distribution, as this contributes to the selection of candidate landing sites and the understanding of geological processes. This paper improves the state-of-the-art YOLOv5 object detection method with an attention mechanism and designs a pyramid-based approach to detect boulders from planetary images. A new feature fusion layer is designed to capture more shallow features of small boulders. Attention modules, implemented by combining the convolutional block attention module (CBAM) and the efficient channel attention network (ECA-Net), are also added into YOLOv5 to highlight the information that contributes to boulder detection. Based on the Pascal Visual Object Classes 2007 (VOC2007) dataset, which is widely used for object detection evaluations, and the boulder dataset that we constructed from images of the Bennu asteroid, the evaluation results show that the improvements increase the precision of YOLOv5 by 3.4%. With the improved YOLOv5 detection method, the pyramid-based approach extracts several layers of images at different resolutions from the large planetary images and detects boulders of different scales from the different layers. We have also applied the proposed approach to detect boulders on the Bennu asteroid, and the distribution of these boulders has been analyzed and presented.
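As a reference for one of the attention modules named in the abstract, the sketch below is a minimal efficient channel attention (ECA-Net-style) block: global average pooling, a 1-D convolution across channels, and a sigmoid gate. The kernel size is fixed here for simplicity (the original ECA-Net derives it from the channel count), and how the paper wires this into YOLOv5 alongside CBAM is not shown.

```python
import torch.nn as nn

class ECAAttention(nn.Module):
    """Minimal ECA-Net-style channel attention (illustrative sketch)."""

    def __init__(self, kernel_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # (b, c, h, w) -> (b, c, 1, 1): per-channel global average.
        y = self.pool(x)
        # Treat channels as a 1-D sequence and mix neighbouring channels.
        y = self.conv(y.squeeze(-1).transpose(1, 2))        # (b, 1, c)
        # Back to (b, c, 1, 1) gating weights.
        y = self.sigmoid(y.transpose(1, 2).unsqueeze(-1))
        return x * y
```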

