Deep Feature-Based Classifiers for Fruit Fly Identification (Diptera: Tephritidae)

Author(s): Matheus Macedo Leonardo, Tiago J. Carvalho, Edmar Rezende, Roberto Zucchi, Fabio Augusto Faria
2019 · Vol 1229 · pp. 012032
Author(s): Jun Wang, Jian Zhou, Liangding Li, Jiapeng Chi, Feiling Yang, ...
2021 · Vol 70 · pp. 1-14
Author(s): Mingxi Ai, Yongfang Xie, Zhaohui Tang, Jin Zhang, Weihua Gui

2019 · Vol 11 (23) · pp. 2870
Author(s): Chu He, Qingyi Zhang, Tao Qu, Dingwen Wang, Mingsheng Liao

In the past two decades, traditional hand-crafted-feature methods and deep-feature methods have successively played the leading role in image classification, and in some cases hand-crafted features still outperform deep features. This paper proposes DBSNet, a deep network that integrates binary coding and the Sinkhorn distance, for remote sensing and texture image classification. Statistical texture features extracted by the uniform local binary pattern (ULBP) are introduced as a supplement to the deep features extracted by ResNet-50, enhancing the discriminability of the fused representation. Because feature fusion increases both the diversity and the redundancy of the features, we propose a Sinkhorn loss in which an entropy regularization term plays a key role in removing redundant information and in training the model quickly and efficiently. Image classification experiments are performed on two texture datasets and five remote sensing datasets. The results show that the ULBP texture features complement the deep features, and that the new Sinkhorn loss performs better than the commonly used softmax loss. Compared with other state-of-the-art algorithms, the proposed DBSNet ranks in the top three on the remote sensing datasets.
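As a rough illustration of the two ingredients this abstract describes, the sketch below pairs a uniform-LBP texture histogram with ResNet-50 deep features and implements an entropy-regularized Sinkhorn distance. It is a minimal reconstruction from the abstract, not the authors' code: the function names, the LBP parameters (P, R), and the Sinkhorn settings (eps, n_iters) are illustrative assumptions.

```python
import numpy as np
import torch
from skimage.feature import local_binary_pattern
from torchvision import models

# --- Statistical texture branch: uniform LBP (ULBP) histogram ---
def ulbp_histogram(gray_image, P=8, R=1.0):
    """(P+2)-bin histogram of uniform LBP codes; P and R are illustrative."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return torch.tensor(hist, dtype=torch.float32)

# --- Deep branch: ResNet-50 trunk with the classification head removed ---
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()  # expose the 2048-d pooled feature
resnet.eval()

def fused_features(rgb_tensor, gray_image):
    """Concatenate deep and ULBP features (ImageNet preprocessing omitted)."""
    with torch.no_grad():
        deep = resnet(rgb_tensor.unsqueeze(0)).squeeze(0)  # (2048,)
    return torch.cat([deep, ulbp_histogram(gray_image)])   # (2048 + P + 2,)

# --- Entropy-regularized Sinkhorn distance between two distributions ---
def sinkhorn_distance(a, b, C, eps=0.1, n_iters=100):
    """a: (n,) and b: (m,) probability vectors; C: (n, m) cost matrix."""
    K = torch.exp(-C / eps)                  # Gibbs kernel
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(n_iters):                 # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.t() @ u)
    P = u.unsqueeze(1) * K * v.unsqueeze(0)  # transport plan
    return torch.sum(P * C)
```

In a classification setting, such a Sinkhorn distance can serve as the loss between softmax outputs and one-hot labels, with C encoding the cost of confusing one class for another; the entropy weight eps trades transport accuracy for faster, smoother optimization.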


2018 · Vol 275 · pp. 1035-1042
Author(s): Wenqing Chu, Deng Cai

2021 · Vol 11 (19) · pp. 9202
Author(s): Daxue Liu, Kai Zang, Jifeng Shen

In this paper, a shallow–deep feature fusion (SDFF) method is developed for pedestrian detection. First, we propose a shallow-feature-based method under the ACF framework for pedestrian detection. More precisely, improved Haar-like templates with local FDA learning are used to filter the channel maps of ACF, so that these Haar-like features improve the discriminative power and thereby enhance detection performance. The proposed shallow feature, referred to as the weighted subset-Haar-like feature, is efficient for pedestrian detection, offering a high recall rate and precise localization. Second, the shallow-feature-based detector serves as a region-proposal stage, and a ResNet-based classifier then refines the proposals, judging whether each region contains a pedestrian. Extensive experiments on the INRIA, Caltech, and TUD-Brussels datasets show that SDFF is an effective and efficient method for pedestrian detection.
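The second stage of this pipeline is straightforward to sketch: the shallow ACF/Haar-like detector emits candidate boxes, and a ResNet classifier re-scores each crop. The snippet below covers that refinement step only (the shallow proposal stage is omitted); the function name refine_proposals, the 224x224 input size, and the 0.5 threshold are assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# ResNet-50 repurposed as a pedestrian/background classifier (stage 2).
# In practice this head would be fine-tuned on pedestrian data first.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 2)
backbone.eval()

def refine_proposals(image, proposals, score_thresh=0.5):
    """image: (3, H, W) float tensor; proposals: iterable of (x1, y1, x2, y2).

    Each proposal crop is resized to the network input size and re-scored;
    only crops the classifier judges to contain a pedestrian are kept.
    """
    kept = []
    with torch.no_grad():
        for (x1, y1, x2, y2) in proposals:
            crop = image[:, y1:y2, x1:x2].unsqueeze(0)
            crop = F.interpolate(crop, size=(224, 224),
                                 mode="bilinear", align_corners=False)
            pedestrian_prob = backbone(crop).softmax(dim=1)[0, 1].item()
            if pedestrian_prob >= score_thresh:
                kept.append(((x1, y1, x2, y2), pedestrian_prob))
    return kept
```

The design point this illustrates is the division of labor: the cheap shallow detector keeps recall high over the whole image, while the expensive deep classifier only runs on the few surviving regions.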


2021 · Vol 11
Author(s): Mehdi Astaraki, Guang Yang, Yousuf Zakko, Iuliana Toma-Dasu, Örjan Smedby, ...

Objectives: Both radiomics and deep learning methods have shown great promise in predicting lesion malignancy in various image-based oncology studies. However, it is still unclear which method to choose for a specific clinical problem given access to the same amount of training data. In this study, we compare the performance of a series of carefully selected conventional radiomics methods, end-to-end deep learning models, and deep-feature-based radiomics pipelines for pulmonary nodule malignancy prediction on an open database of 1297 manually delineated lung nodules.

Methods: Conventional radiomics analysis was conducted by extracting standard handcrafted features from the target nodule images. Several end-to-end deep classifier networks, including VGG, ResNet, DenseNet, and EfficientNet, were also employed to identify lung nodule malignancy. In addition to the baseline implementations, we investigated the importance of feature selection and class balancing, as well as separating the features learned in the nodule target region from those learned in the background/context region. By pooling the radiomics and deep features together into a hybrid feature set, we investigated the compatibility of the two sets with respect to malignancy prediction.

Results: The best baseline conventional radiomics model, deep learning model, and deep-feature-based radiomics model achieved AUROC values (mean ± standard deviation) of 0.792 ± 0.025, 0.801 ± 0.018, and 0.817 ± 0.032, respectively, in 5-fold cross-validation analyses. After applying several optimization techniques, such as feature selection and data balancing, and adding context features, the corresponding best radiomics, end-to-end deep learning, and deep-feature-based models achieved AUROC values of 0.921 ± 0.010, 0.824 ± 0.021, and 0.936 ± 0.011, respectively. The best prediction accuracy was achieved by the hybrid feature set (AUROC: 0.938 ± 0.010).

Conclusion: The end-to-end deep learning model outperforms conventional radiomics out of the box without much fine-tuning. On the other hand, fine-tuning the models leads to significant improvements in prediction performance, where the conventional and deep-feature-based radiomics models achieve comparable results. The hybrid radiomics method appears to be the most promising model for lung nodule malignancy prediction in this comparative study.
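As a minimal sketch of the hybrid-feature idea and the 5-fold evaluation protocol, the snippet below pools two placeholder feature matrices and reports cross-validated AUROC with scikit-learn. The feature dimensionalities, the random data, and the logistic-regression classifier are illustrative assumptions; the study's actual features and models differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder arrays standing in for the study's extracted features;
# the dimensionalities here are arbitrary assumptions.
rng = np.random.default_rng(0)
radiomics_feats = rng.normal(size=(1297, 100))  # handcrafted radiomics
deep_feats = rng.normal(size=(1297, 512))       # CNN-derived deep features
labels = rng.integers(0, 2, size=1297)          # nodule malignancy labels

# Pool the two feature sets into the hybrid representation.
hybrid = np.concatenate([radiomics_feats, deep_feats], axis=1)

# 5-fold cross-validated AUROC, mirroring the evaluation protocol above.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, hybrid, labels, cv=cv, scoring="roc_auc")
print(f"AUROC: {scores.mean():.3f} ± {scores.std():.3f}")
```

Feature selection and class balancing, the two optimizations the abstract highlights, would slot in as extra pipeline steps before the classifier.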

