hard samples
Recently Published Documents


TOTAL DOCUMENTS: 27 (FIVE YEARS: 20)
H-INDEX: 3 (FIVE YEARS: 3)

2021
Author(s): Chih-Ting Liu, Man-Yu Lee, Tsai-Shien Chen, Shao-Yi Chien

2021
Author(s): Ke Wang, Lianhua Zhang, Qin Xia, Liang Pu, Junlan Chen

Abstract: Convolutional neural network (CNN) based object detection usually assumes that training and test data follow the same distribution, an assumption that does not always hold in real-world applications. For autonomous vehicles, the driving scene (target domain) consists of unconstrained road environments that cannot all be observed in the training data (source domain), which leads to a sharp drop in detector accuracy. In this paper, we propose a domain adaptation framework based on pseudo-labels to address this domain shift. First, pseudo-labels for the target-domain images are generated by the baseline detector (BD) and refined by our data optimization module to correct errors. Then, the hard samples in each image are labeled based on the pseudo-label optimization results. An adaptive sampling module then samples target-domain data according to the number of hard samples per image, so that more informative data are selected. Finally, a modified knowledge distillation loss is applied in the retraining module, and we investigate two ways of assigning soft labels to the target-domain training examples used to retrain the detector. We evaluate the average precision of our approach on various source/target domain pairs and demonstrate that the framework improves the average precision of the BD by more than 10% in multiple domain adaptation scenarios on the Cityscapes, KITTI, and Apollo datasets.
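
For readers who want to see the retraining idea concretely, below is a minimal sketch of a soft-label knowledge distillation loss of the kind the abstract describes. It is not the paper's exact formulation; the temperature T and mixing weight alpha are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact loss) of retraining a detector's
# classification head with soft labels produced by a baseline detector (BD).
# T and alpha are assumed hyperparameters, not values from the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, pseudo_labels,
                      T=2.0, alpha=0.5):
    """Combine hard pseudo-label cross-entropy with softened teacher targets."""
    # Hard term: cross-entropy against the (optimized) pseudo-labels.
    hard = F.cross_entropy(student_logits, pseudo_labels)
    # Soft term: KL divergence between temperature-softened distributions.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    return alpha * hard + (1.0 - alpha) * soft
```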


Sensors, 2020, Vol 20 (24), pp. 7036
Author(s): Chao Han, Xiaoyang Li, Zhen Yang, Deyun Zhou, Yiyang Zhao, ...

Domain adaptation aims to handle the distribution mismatch between training and testing data and has achieved dramatic progress in multi-sensor systems. Previous methods align cross-domain distributions with statistics such as means and variances. Despite their appeal, such methods often fail to model the discriminative structure present within the testing samples. In this paper, we present a sample-guided adaptive class prototype method that requires no distribution matching. Specifically, two adaptive measures are proposed. First, a modified nearest class prototype is introduced, which allows more diversity within the same class while keeping most of the class-wise discriminative information. Second, we put forward an easy-to-hard testing scheme that takes into account the varying difficulty of recognizing target samples: easy samples are classified first and then selected to assist the prediction of hard samples. Extensive experiments verify the effectiveness of the proposed method.
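
As a rough illustration of nearest-class-prototype classification with an easy-to-hard pass, the sketch below classifies confident (easy) target samples first and folds them into the prototypes before labeling the hard ones. It reflects the general idea only; the margin-based confidence measure and the easy_ratio threshold are our own assumptions, not the paper's adaptive measures.

```python
# Sketch, under our own assumptions, of easy-to-hard prediction with
# class prototypes (mean feature vectors per class).
import numpy as np

def class_prototypes(features, labels, num_classes):
    # One prototype per class: the mean of that class's feature vectors.
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def easy_to_hard_predict(src_feat, src_lab, tgt_feat, num_classes, easy_ratio=0.5):
    protos = class_prototypes(src_feat, src_lab, num_classes)
    dists = np.linalg.norm(tgt_feat[:, None, :] - protos[None, :, :], axis=2)
    preds = dists.argmin(axis=1)
    # Confidence proxy: gap between the two nearest prototypes.
    margin = np.sort(dists, axis=1)[:, 1] - dists.min(axis=1)
    easy = np.argsort(-margin)[: int(easy_ratio * len(tgt_feat))]
    # Refine prototypes with the confidently labeled (easy) target samples,
    # then re-label everything, which mainly helps the remaining hard samples.
    protos = class_prototypes(
        np.concatenate([src_feat, tgt_feat[easy]]),
        np.concatenate([src_lab, preds[easy]]), num_classes)
    dists = np.linalg.norm(tgt_feat[:, None, :] - protos[None, :, :], axis=2)
    return dists.argmin(axis=1)
```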


Sensors, 2020, Vol 20 (20), pp. 5786
Author(s): Lei Guo, Gang Xie, Xinying Xu, Jinchang Ren

Melanoma recognition is challenging due to data imbalance, high intra-class variation, and large inter-class similarity. To address these issues, we propose a melanoma recognition method for dermoscopy images that uses a deep convolutional neural network with a covariance discriminant loss. The network is trained under the joint supervision of a cross-entropy loss and the covariance discriminant loss, rectifying the model outputs and the extracted features simultaneously. Specifically, we design an embedding loss, namely the covariance discriminant loss, which takes the first- and second-order distances into account simultaneously to provide more constraints. By constraining the distance between hard samples and the minority-class center, the deep features of melanoma and non-melanoma can be separated effectively. We also design a corresponding algorithm to mine the hard samples and analyze the relationship between the proposed loss and other losses. On the International Symposium on Biomedical Imaging (ISBI) 2018 Skin Lesion Analysis dataset, the two schemes in the proposed method yield sensitivities of 0.942 and 0.917, respectively. These comprehensive results demonstrate the efficacy of the designed embedding loss and the proposed methodology.
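
A loose sketch of the hard-sample mining and center-constraint idea follows. The actual covariance discriminant loss involves more than this simple first-order center pull; the margin-based mining rule and the helper names here are hypothetical, chosen only to illustrate the general mechanism.

```python
# Illustrative sketch only: mine hard minority-class samples (those far from
# their class center) and penalize their distance to that center. Not the
# paper's covariance discriminant loss; margin is an assumed hyperparameter.
import torch

def mine_hard_samples(features, labels, center, minority_class=1, margin=1.0):
    """Hard = minority-class samples lying farther than `margin` from the center."""
    minority = labels == minority_class
    dist = (features - center).norm(dim=1)
    return minority & (dist > margin)

def center_pull_loss(features, labels, center, minority_class=1, margin=1.0):
    hard = mine_hard_samples(features, labels, center, minority_class, margin)
    if hard.sum() == 0:
        return features.new_zeros(())
    dist = (features[hard] - center).norm(dim=1)
    # Penalize only the part of the distance that exceeds the margin.
    return ((dist - margin).clamp(min=0) ** 2).mean()
```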


2020, Vol 7 (10), pp. 9611-9622
Author(s): Hao Sheng, Yanwei Zheng, Wei Ke, Dongxiao Yu, Xiuzhen Cheng, ...

Sensors, 2020, Vol 20 (17), pp. 4709
Author(s): Bin Wang, Yinjuan Gu

With the development of artificial intelligence and big data analytics, an increasing number of researchers have used deep-learning technology to train neural networks, achieving great success in the field of vehicle detection. However, as a special domain of object detection, vehicle detection in aerial images has made only limited progress because of low resolution, complex backgrounds, and rotated objects. In this paper, an improved feature-balanced pyramid network (FBPN) is proposed to enhance the network's ability to detect small objects. Combining FBPN with a modified Faster R-CNN (faster region-based convolutional neural network) yields a vehicle detection framework for aerial images. The focal loss function is adopted in the proposed framework to reduce the imbalance between easy and hard samples. Experimental results on the VEDAI, UCAS-AOD, and DOTA datasets show that the proposed framework outperforms other state-of-the-art vehicle detection algorithms for aerial images.
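
The focal loss mentioned above is a standard technique (Lin et al., 2017) for down-weighting easy samples so that hard samples dominate training; a minimal binary version is sketched below with the commonly used default alpha and gamma values, not the settings used in this paper.

```python
# Binary focal loss on raw logits; alpha and gamma are the usual defaults,
# assumed here rather than taken from the paper.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """targets are 0/1 floats with the same shape as logits."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # (1 - p_t)^gamma shrinks the loss of well-classified (easy) samples.
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```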

