A Transfer Learning Approach for the Tattoo Detection Problem

2021
Author(s): Rodrigo Tchalski Silva, Heitor Silvério Lopes

Tattoos are still poorly explored as a biometric factor for human identification, especially in public security, where tattoos can play an important role in identifying criminals and victims. Tattoos are considered a soft biometric, since they are not permanent and can change over time, unlike hard biometric traits (fingerprint, iris, DNA, etc.). The identification of tattoos is not simple, since they do not have a definite pattern or location. This fact increases the complexity of developing models to address this problem. In addition, the tattoo identification roadmap is very complex, comprising several steps, and for each step specific methods need to be developed. Among the several problems identified in this roadmap, we tackled the detection problem, which is defined as: given an image of a person, determine whether or not there is a tattoo. We present a deep learning model based on transfer learning for the tattoo detection problem. We also used data augmentation to improve the diversity of the training sets and thus achieve better classification accuracy. In the course of this work, two new datasets for tattoo detection were created. Several comparative experiments were done to evaluate the diversity of images in the datasets and the accuracy of the proposed model. Results were very promising, achieving an accuracy of 95.1% on the test set and an F1-score of 0.79 on an external dataset. Overall, results were satisfactory, given the complexity of the problem. Future work will focus on expanding the datasets created and addressing the other problems of the tattoo roadmap.
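A minimal sketch of how such a transfer-learning tattoo/no-tattoo classifier with data augmentation could be wired in Keras. The backbone (MobileNetV2), input size, and augmentation settings are illustrative assumptions; the abstract does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Light data augmentation to diversify the training set (settings are assumed).
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Backbone choice (MobileNetV2) is an assumption, not necessarily the paper's model.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse ImageNet features, train only the new head

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # tattoo vs. no tattoo

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```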

2019
Vol 2019 (1)
Author(s): Aleksei Grigorev, Zhihong Tian, Seungmin Rho, Jianxin Xiong, Shaohui Liu, ...

Person re-identification is one of the most significant problems in computer vision and surveillance systems. The recent success of deep convolutional neural networks in image classification has inspired researchers to investigate the application of deep learning to person re-identification. However, most research on this problem considers classical settings, where pedestrians are captured by static surveillance cameras, although there is a growing demand for analyzing images and videos taken by drones. In this paper, we aim at filling this gap and provide insights on person re-identification from drones. To our knowledge, this is the first attempt to tackle the problem under such constraints. We present a person re-identification dataset, named DRone HIT (DRHIT01), collected using a drone. It contains 101 unique pedestrians, annotated with their identities, each with about 500 images. We propose to use a combination of triplet and large-margin Gaussian mixture (L-GM) loss to tackle the drone-based person re-identification problem. The proposed network, equipped with a multi-branch design, channel group learning, and the combination of loss functions, is evaluated on the DRHIT01 dataset. In addition, transfer learning from the most popular person re-identification datasets is evaluated. Experimental results demonstrate the importance of transfer learning and show that the proposed model outperforms the classic deep learning approach.
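A rough sketch of combining a metric-learning triplet loss with an identity-classification loss on the embeddings, in the spirit of the triplet plus L-GM combination described above. The L-GM term is replaced here by plain softmax cross-entropy for brevity, which is a deliberate simplification; the margin and weighting factor are also assumptions.

```python
import tensorflow as tf

def batch_hard_triplet_loss(labels, embeddings, margin=0.3):
    # Pairwise squared Euclidean distances between L2-normalised embeddings.
    emb = tf.math.l2_normalize(embeddings, axis=1)
    dists = tf.reduce_sum(tf.square(emb[:, None, :] - emb[None, :, :]), axis=-1)
    same = tf.cast(tf.equal(labels[:, None], labels[None, :]), tf.float32)
    # Hardest positive: farthest sample sharing the same identity (self-distance is 0).
    hardest_pos = tf.reduce_max(dists * same, axis=1)
    # Hardest negative: closest sample with a different identity.
    max_d = tf.reduce_max(dists)
    hardest_neg = tf.reduce_min(dists + same * max_d, axis=1)
    return tf.reduce_mean(tf.maximum(hardest_pos - hardest_neg + margin, 0.0))

def combined_loss(labels, embeddings, logits, alpha=1.0):
    # Identity classification term (stand-in for the L-GM loss) plus triplet term.
    id_loss = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(labels, logits,
                                                        from_logits=True))
    return id_loss + alpha * batch_hard_triplet_loss(labels, embeddings)
```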


Cancers
2021
Vol 13 (7), pp. 1590
Author(s): Laith Alzubaidi, Muthana Al-Amidie, Ahmed Al-Asadi, Amjad J. Humaidi, Omran Al-Shamma, ...

Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds. More importantly, the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by transferring deep learning models, along with knowledge from a previous task, and then fine-tuning them on a relatively small dataset for the current task. Most methods for medical image classification employ transfer learning from models pretrained on natural image datasets such as ImageNet, which has been shown to be ineffective. This is due to the mismatch in learned features between natural images (e.g., ImageNet) and medical images. Additionally, it results in the use of unnecessarily deep and elaborate models. In this paper, we propose a novel transfer learning approach to overcome these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring the knowledge to train the model on a small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios: skin and breast cancer classification. According to the reported results, the proposed approach significantly improves the performance of both classification scenarios. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach. For the breast cancer scenario, it achieved accuracies of 85.29% when trained from scratch and 97.51% with the proposed approach. Finally, we conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and the labeled image data is limited. Moreover, it can be utilized to improve the performance of medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model to train on foot skin images and classify them into two classes: normal or abnormal (diabetic foot ulcer, DFU). It achieved an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double transfer learning.
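A sketch of the two-stage idea: pretrain on unlabeled medical images, then transfer the encoder and fine-tune on the small labeled set. The abstract does not specify how the unlabeled images are exploited, so the convolutional autoencoder pretext task, layer sizes, input resolution, and class count below are all assumptions for illustration; the paper's own DCNN and pretraining scheme may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_encoder():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(128, 128, 3)),
        layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
        layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
        layers.Conv2D(128, 3, strides=2, padding="same", activation="relu"),
    ], name="encoder")

# Stage 1: reconstruct unlabeled medical images (no annotations needed).
encoder = build_encoder()
decoder = tf.keras.Sequential([
    layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="sigmoid"),
], name="decoder")
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(unlabeled_images, unlabeled_images, epochs=...)

# Stage 2: transfer the pretrained encoder to the small labeled dataset.
classifier = tf.keras.Sequential([
    encoder,                                  # weights carried over from stage 1
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),    # e.g. benign vs. malignant (assumed)
])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
# classifier.fit(labeled_images, labels, epochs=...)
```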


2021
Vol 71 (2), pp. 200-208
Author(s): Narendra Kumar Mishra, Ashok Kumar, Kishor Choudhury

Ships are an integral part of maritime traffic, where they play both military and non-combatant roles. This vast maritime traffic needs to be managed and monitored by identifying and recognising vessels to ensure maritime safety and security. To provide an automated and efficient solution, this paper proposes a deep learning model that uses a convolutional neural network (CNN) as its basic building block. CNNs have been used predominantly in image recognition due to their automatic high-level feature extraction capabilities and exceptional performance. We use a transfer learning approach with a pre-trained CNN based on the VGG16 architecture to develop an algorithm that classifies different ship types. This paper adopts data augmentation and fine-tuning to further improve and optimize the baseline VGG16 model. The proposed model attains an average classification accuracy of 97.08%, compared to 88.54% for the baseline model.
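A minimal sketch of the VGG16 transfer-learning setup described above: first train a new classification head on frozen ImageNet features, then fine-tune. The number of ship classes, the dense-head sizes, and the choice to unfreeze only the last convolutional block are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 6  # assumed number of ship types

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # stage 1: train only the classification head

model = tf.keras.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)

# Stage 2 (fine-tuning): unfreeze only the last convolutional block and retrain
# with a lower learning rate.
base.trainable = True
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
```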


2021
Vol 2021, pp. 1-10
Author(s): Omar Faruk, Eshan Ahmed, Sakil Ahmed, Anika Tabassum, Tahia Tazin, ...

Deep learning has emerged as a promising technique for various aspects of infectious disease monitoring and detection, including tuberculosis. We built a deep convolutional neural network (CNN) model to assess the generalizability of the deep learning model using a publicly accessible tuberculosis dataset. This study was able to reliably detect tuberculosis (TB) from chest X-ray images by utilizing image preprocessing, data augmentation, and deep learning classification techniques. Four distinct deep CNNs (Xception, InceptionV3, InceptionResNetV2, and MobileNetV2) were trained, validated, and evaluated for the classification of tuberculosis and non-tuberculosis cases using transfer learning from their pretrained starting weights. InceptionResNetV2 achieved the highest accuracy, with an F1-score of 99 percent. This approach is more accurate than earlier published work and outperforms all other models in terms of reliability. The suggested approach, with its state-of-the-art performance, may be helpful for computer-assisted rapid TB detection.
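One way such a comparison can be organised is to wrap each of the four pretrained backbones with an identical binary (TB vs. non-TB) head and train them under the same settings. The input size, head design, and choice of metrics below are assumptions for illustration, not the study's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

BACKBONES = {
    "Xception": tf.keras.applications.Xception,
    "InceptionV3": tf.keras.applications.InceptionV3,
    "InceptionResNetV2": tf.keras.applications.InceptionResNetV2,
    "MobileNetV2": tf.keras.applications.MobileNetV2,
}

def build_tb_classifier(name):
    # Start from ImageNet-pretrained weights, as in the transfer-learning setup above.
    base = BACKBONES[name](weights="imagenet", include_top=False,
                           input_shape=(299, 299, 3))
    base.trainable = False
    model = tf.keras.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # TB vs. non-TB
    ], name=name)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC()])
    return model

# Build all four candidates so they can be trained and compared on the same splits.
models = {name: build_tb_classifier(name) for name in BACKBONES}
```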


2020
Vol 9 (2), pp. 100-110
Author(s): Ahmad Mustafid, Muhammad Murah Pamuji, Siti Helmiyah

Deep learning is an essential technique for classification problems in machine learning, based on artificial neural networks. A general issue in deep learning is that it is data-hungry, requiring a plethora of data to train a model. Wayang is a shadow puppet theatre art from Indonesia, especially in Javanese culture, and it has several characters that are difficult to distinguish. In this paper, we propose steps and techniques for classifying the characters and handling the small-dataset issue by using model selection, transfer learning, and fine-tuning to obtain efficient and precise classification. The research used 50 images for each of 24 wayang character classes. We collected and implemented various architectures, from early deep learning models to the latest state-of-the-art proposals. Transfer learning and fine-tuning showed a significant increase in both training and validation accuracy. Using transfer learning, it was possible to design a deep learning model with good classifiers in a short time on a small dataset. Both EfficientNetB0 and MobileNetV3-Small reached 100% training accuracy, with validation accuracies of 98.33% and 98.75%, respectively.
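A minimal sketch of the small-dataset recipe described above: a frozen pretrained backbone (EfficientNetB0 here; MobileNetV3Small is analogous), a light classification head for the 24 classes, then optional full fine-tuning at a small learning rate. The input size, dropout, and learning rates are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 24  # wayang character classes

base = tf.keras.applications.EfficientNetB0(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: keep pretrained features fixed at first

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)

# Fine-tuning: unfreeze the backbone and continue training with a small learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```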


2020
Vol 1, pp. 6
Author(s): Thomas Haugland Johansen, Steffen Aagaard Sørensen

Foraminifera are single-celled marine organisms, which may have a planktic or benthic lifestyle. During their life cycle they construct shells consisting of one or more chambers, and these shells remain as fossils in marine sediments. Classifying and counting these fossils has become an important tool in, e.g., oceanography and climatology. Currently, the process of identifying and counting microfossils is performed manually using a microscope and is very time-consuming. Developing methods to automate this process is therefore considered important across a range of research fields. Here, the first steps towards developing a deep learning model that can detect and classify microscopic foraminifera are proposed. The proposed model is based on a VGG16 model that has been pretrained on the ImageNet dataset and adapted to the foraminifera task using transfer learning. Additionally, a novel image dataset consisting of microscopic foraminifera and sediments from the Barents Sea region is introduced.
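One common way to adapt a pretrained VGG16 to a new microscopy task with limited data is to precompute VGG16 features once and train a small classifier on top of them. This is a sketch of that variant, not necessarily the paper's exact adaptation; the class count, image size, and head layout are assumptions.

```python
import tensorflow as tf

# Frozen VGG16 used purely as a feature extractor (global-average-pooled, 512-d).
extractor = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                        pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    # images: float array in [0, 255], shape (N, 224, 224, 3)
    x = tf.keras.applications.vgg16.preprocess_input(images)
    return extractor.predict(x, verbose=0)  # shape (N, 512)

# Small trainable head on top of the precomputed features.
head = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(512,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(4, activation="softmax"),  # assumed foraminifera/sediment classes
])
head.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
# features = extract_features(train_images); head.fit(features, train_labels, ...)
```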


Author(s): Yidnekachew Kibru Afework, Taye Girma Debelee

Bacterial wilt disease is the most important limiting factor, as it results in a serious reduction in the quality and quantity of food produced by the enset crop. Therefore, early detection of bacterial wilt disease is important for diagnosing and fighting the disease. To this end, a deep learning approach that detects the disease from healthy and infected leaf images of the crop is proposed. In particular, a convolutional neural network architecture is designed to classify images collected from different farms as diseased or healthy. A total of 4896 images, captured directly from farms with the help of agricultural experts, was used to train the proposed model, and data augmentation techniques were applied to generate more images. Besides the proposed model, a pre-trained model, namely VGG16, was also trained on our dataset. The proposed model achieved a mean accuracy of 98.5% and the VGG16 pre-trained model achieved a mean accuracy of 96.6%, using a mini-batch size of 32 and a learning rate of 0.001. The preliminary results demonstrate the effectiveness of the proposed approach under challenging conditions such as illumination changes, complex backgrounds, different resolutions, variable scale, rotation, and orientation of real-scene images.
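A sketch of a small custom CNN for the healthy/diseased leaf classification task, using the mini-batch size (32) and learning rate (0.001) reported above. The layer configuration itself is an assumption; the paper's architecture may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # healthy vs. bacterial wilt
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, batch_size=32, validation_split=0.2, epochs=...)
```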


Sensors
2021
Vol 21 (22), pp. 7545
Author(s): Md Mahibul Hasan, Zhijie Wang, Muhammad Ather Iqbal Hussain, Kaniz Fatima

Vehicle type classification plays an essential role in developing an intelligent transportation system (ITS). Building on the modern accomplishments of deep learning (DL) in image classification, we propose a transfer-learning-based model, incorporating data augmentation, for the recognition and classification of Bangladeshi native vehicle types. An extensive dataset of Bangladeshi native vehicles, encompassing 10,440 images, was developed. The images are categorized into 13 common vehicle classes in Bangladesh. The method is a residual network (ResNet-50)-based model with extra classification blocks added to improve performance; vehicle type features are extracted and categorized automatically. A variety of metrics was used for the evaluation, including accuracy, precision, recall, and F1-score. In spite of the varying physical properties of the vehicles, the proposed model achieved high accuracy. Our proposed method surpasses the existing baseline method as well as two pre-trained DL approaches, AlexNet and VGG-16. In the classification of Bangladeshi native vehicle types, our suggested ResNet-50-based pre-trained model achieves an accuracy of 98.00%.
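A minimal sketch of a ResNet-50 backbone with an added classification block for the 13 vehicle classes. The paper's exact "extra classification blocks" are not described here, so the dense/batch-norm head below is an illustrative assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 13  # common Bangladeshi vehicle classes

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False  # transfer learning from ImageNet features

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    # "Extra classification block": assumed dense + batch-norm + dropout stack.
    layers.Dense(512, activation="relu"),
    layers.BatchNormalization(),
    layers.Dropout(0.4),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Precision, recall, and F1-score per class can then be computed from the
# predictions on the held-out test split.
```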


2021
Author(s): Japman Singh Monga, Yuvraj Singh Champawat, Seema Kharb

In 2020, the world came to a halt due to the spread of COVID-19, caused by SARS-CoV-2, which was first identified in Wuhan, China. Since then, it has caused a plethora of problems around the globe, such as the loss of millions of lives and economic instability. The limited effectiveness of detection through reverse transcription polymerase chain reaction (RT-PCR), together with the prolonged time it requires, calls for an alternative approach to COVID-19 detection. Hence, in this study, we aim to develop a transfer-learning-based multi-class classifier using chest X-ray images, which classifies the X-ray images into three classes (COVID-19, pneumonia, normal). The proposed model has been trained with several deep learning classifiers, namely DenseNet201, Xception, ResNet50V2, VGG16, VGG19, and InceptionResNetV2, which are evaluated using accuracy, precision, and recall as performance parameters. It has been observed that DenseNet201 is the best deep learning model, with 82.2% accuracy.
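A sketch of one of the compared backbones (DenseNet201) wired as a three-class chest X-ray classifier, together with a per-class precision/recall evaluation of the kind named above. The input size, head layout, and use of sklearn's classification_report for evaluation are illustrative assumptions; the other backbones can be swapped in the same way.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.metrics import classification_report

CLASS_NAMES = ["covid19", "pneumonia", "normal"]

base = tf.keras.applications.DenseNet201(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False  # transfer learning from ImageNet weights

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(len(CLASS_NAMES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)

# After training, accuracy, precision, and recall per class:
# y_pred = np.argmax(model.predict(test_images), axis=1)
# print(classification_report(test_labels, y_pred, target_names=CLASS_NAMES))
```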


Author(s): L. Bergamasco, S. Saha, F. Bovolo, L. Bruzzone

Transfer learning methods reuse a deep learning model developed for one task on another task. Such methods have been remarkably successful in a wide range of image processing applications. Following this trend, a few transfer-learning-based methods have been proposed for unsupervised multi-temporal image analysis and change detection (CD). In spite of their success, transfer-learning-based CD methods suffer from limited explainability. In this paper, we propose an explainable convolutional autoencoder model for CD. The model is trained: 1) in an unsupervised way, using as bi-temporal inputs patches extracted from the same geographic location; and 2) in a greedy fashion, one encoder and decoder layer pair at a time. A number of features relevant for CD are chosen from the encoder layer. To build an explainable model, only the selected features from the encoder layer are retained and the rest are discarded. Following this, another encoder and decoder layer pair is added to the model in a similar fashion, until convergence. We further visualize the features to better interpret what has been learned. We validated the proposed method on a Landsat-8 dataset acquired over Spain. Through a set of experiments, we demonstrate the explainability and effectiveness of the proposed model.
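A rough sketch of the greedy, layer-pair-wise training loop described above: train one convolutional encoder/decoder pair to reconstruct its input patches, keep only a subset of its feature maps, then stack the next pair on top of the retained features. Feature selection by activation variance is an illustrative stand-in for the paper's own relevance criterion, and the patch size and filter counts are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def train_layer_pair(inputs_np, filters, epochs=10):
    """Train a single conv encoder/decoder pair to reconstruct its input."""
    channels = inputs_np.shape[-1]
    enc = layers.Conv2D(filters, 3, strides=2, padding="same", activation="relu")
    dec = layers.Conv2DTranspose(channels, 3, strides=2, padding="same")
    ae = tf.keras.Sequential([tf.keras.Input(shape=inputs_np.shape[1:]), enc, dec])
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(inputs_np, inputs_np, epochs=epochs, verbose=0)
    return enc  # keep only the trained encoder layer

def select_relevant_features(feature_maps, keep):
    """Keep the `keep` feature maps with the highest activation variance
    (a stand-in for the CD-relevance criterion used in the paper)."""
    variances = feature_maps.reshape(-1, feature_maps.shape[-1]).var(axis=0)
    idx = np.argsort(variances)[-keep:]
    return feature_maps[..., idx]

# Greedy stacking: patches -> layer pair 1 -> selected features -> layer pair 2 -> ...
# patches = np.float32 array of bi-temporal image patches, shape (N, 64, 64, bands)
# enc1 = train_layer_pair(patches, filters=32)
# feats1 = select_relevant_features(enc1(patches).numpy(), keep=16)
# enc2 = train_layer_pair(feats1, filters=64)
```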

