Spatial likelihood voting with self-knowledge distillation for weakly supervised object detection

2021 ◽  
pp. 104314
Author(s):  
Ze Chen ◽  
Zhihang Fu ◽  
Jianqiang Huang ◽  
Mingyuan Tao ◽  
Rongxin Jiang ◽  
...  
2021 ◽  
Author(s):  
Danpei Zhao ◽  
Zhichao Yuan ◽  
Zhenwei Shi ◽  
Fengying Xie

Author(s):  
Na Dong ◽  
Yongqiang Zhang ◽  
Mingli Ding ◽  
Shibiao Xu ◽  
Yancheng Bai

2021 ◽  
Vol 43 (13) ◽  
pp. 2888-2898
Author(s):  
Tianze Gao ◽  
Yunfeng Gao ◽  
Yu Li ◽  
Peiyuan Qin

An essential element of intelligent perception in mechatronic and robotic systems (M&RS) is the visual object detection algorithm. With the ever-increasing advances in artificial neural networks (ANN), researchers have proposed numerous ANN-based visual object detection methods that have proven effective. However, networks with cumbersome structures do not suit the real-time scenarios in M&RS, necessitating model compression techniques. In this paper, a novel approach to training light-weight visual object detection networks is developed by revisiting knowledge distillation. Traditional knowledge distillation methods are oriented towards image classification and are not directly compatible with object detection. Therefore, a variant of knowledge distillation is developed and adapted to a state-of-the-art keypoint-based visual detection method. Two strategies, positive sample retaining and early distribution softening, are employed to yield a natural adaptation. The mutual consistency between the teacher model and the student model is further promoted through hint-based distillation. Extensive controlled experiments show that the proposed method enhances the light-weight network’s performance by a large margin.
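The core mechanism the abstract builds on, distilling a teacher's softened class distribution into a student, can be sketched in a few lines. This is a generic temperature-softened distillation loss, not the paper's keypoint-based variant; the function names and the temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # A temperature > 1 flattens the distribution; "early distribution
    # softening" in the paper presumably anneals this during training
    # (the fixed value here is a placeholder, not the paper's schedule).
    z = logits / temperature
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # KL divergence between the softened teacher and student distributions:
    # the student is pulled toward the teacher's "dark knowledge".
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))
```

The loss is zero when the student matches the teacher exactly and grows as the two distributions diverge, which is what makes it usable as a training signal alongside the ordinary detection loss.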


Author(s):  
Jeany Son ◽  
Daniel Kim ◽  
Solae Lee ◽  
Suha Kwak ◽  
Minsu Cho ◽  
...  

2019 ◽  
Vol 41 (10) ◽  
pp. 2395-2409 ◽  
Author(s):  
Fang Wan ◽  
Pengxu Wei ◽  
Zhenjun Han ◽  
Jianbin Jiao ◽  
Qixiang Ye

2020 ◽  
Vol 34 (07) ◽  
pp. 10778-10785
Author(s):  
Linpu Fang ◽  
Hang Xu ◽  
Zhili Liu ◽  
Sarah Parisot ◽  
Zhenguo Li

Object detectors trained on fully-annotated data currently yield state-of-the-art performance but require expensive manual annotations. On the other hand, weakly-supervised detectors achieve much lower performance and cannot be relied upon in realistic settings. In this paper, we study the hybrid-supervised object detection problem, aiming to train a high-quality detector with only a limited amount of fully-annotated data while fully exploiting cheap data with image-level labels. State-of-the-art methods typically adopt an iterative approach, alternating between generating pseudo-labels and updating the detector. This paradigm requires careful manual hyper-parameter tuning to mine good pseudo-labels at each round and is quite time-consuming. To address these issues, we present EHSOD, an end-to-end hybrid-supervised object detection system that can be trained in one shot on both fully- and weakly-annotated data. Specifically, based on a two-stage detector, we propose two modules to fully utilize the information from both kinds of labels: 1) a CAM-RPN module that finds foreground proposals guided by a class-activation heat-map; 2) a hybrid-supervised cascade module that further refines bounding-box position and classification with the help of an auxiliary head compatible with image-level data. Extensive experiments demonstrate the effectiveness of the proposed method: it achieves comparable results on multiple object detection benchmarks with only 30% fully-annotated data, e.g. 37.5% mAP on COCO. We will release the code and the trained models.
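The CAM-RPN idea of using a class-activation heat-map to guide proposal selection can be illustrated with a minimal sketch: score each candidate box by the mean activation it covers, so boxes over hot (likely-foreground) regions rank first. The function name, box format, and scoring rule are assumptions for illustration, not the paper's actual module.

```python
import numpy as np

def score_proposals(cam, proposals):
    """Rank proposals by class-activation evidence (illustrative only).

    cam:        H x W heat-map with values in [0, 1], high = likely foreground.
    proposals:  list of (x1, y1, x2, y2) boxes in pixel coordinates.
    Returns one score per box: the mean activation inside the box.
    """
    scores = []
    for (x1, y1, x2, y2) in proposals:
        region = cam[y1:y2, x1:x2]
        scores.append(float(region.mean()) if region.size else 0.0)
    return scores
```

In the actual system the heat-map is produced by a classification branch trained only on image-level labels, which is what lets weakly-annotated images contribute to proposal generation.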


2020 ◽  
Vol 27 ◽  
pp. 1864-1868
Author(s):  
Ruibing Jin ◽  
Guosheng Lin ◽  
Changyun Wen ◽  
Jianliang Wang
