Automatic Identification of Harmful Algae Based on Multiple Convolutional Neural Networks and Transfer Learning

Author(s):  
Mengyu Yang ◽  
Wensi Wang ◽  
Qiang Gao ◽  
Chen Zhao ◽  
Caole Li ◽  
...  

Abstract The monitoring of harmful algae is very important for maintaining the aquatic ecological environment. Traditional algae monitoring methods require professionals with substantial experience in algae species, and are time-consuming, expensive, and limited in practice. In this work, the automatic classification of algae cell images and the identification of harmful algae images were realized by combining multiple Convolutional Neural Networks (CNNs) with deep learning techniques based on transfer learning. Images of 11 common harmful and 31 harmless algae genera were collected as input samples; five CNN classification models (AlexNet, VGG16, GoogLeNet, ResNet50, and MobileNetV2) were fine-tuned to automatically classify algae images, and the average accuracy improved by 11.9% compared to models without fine-tuning. To monitor harmful algae that can cause red tides or produce toxins that severely pollute drinking water, a new identification method for harmful algae was proposed that combines the recognition results of the five CNN models, and the recall rate reached 98.0%. The experimental results validate that transfer learning can significantly improve the recognition performance for harmful algae, and that the proposed identification method is effective in the preliminary screening of harmful algae and greatly reduces the workload of professional personnel.
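The abstract does not specify how the five models' recognition results are combined, but since the goal is high recall in preliminary screening, one plausible rule is to flag an image as harmful whenever any of the models assigns it a harmful genus. A minimal sketch of that recall-oriented combination (genus names and the three-model example are purely illustrative):

```python
from typing import List, Set

def identify_harmful(predictions: List[List[str]], harmful_genera: Set[str]) -> List[bool]:
    """Flag a sample as harmful if ANY model assigns it a harmful genus.

    predictions: one list of predicted genus labels per model, all the same length.
    """
    n_models = len(predictions)
    n_samples = len(predictions[0])
    flags = []
    for i in range(n_samples):
        flags.append(any(predictions[m][i] in harmful_genera for m in range(n_models)))
    return flags

harmful = {"Microcystis", "Anabaena"}
# Three models' per-image predictions, for illustration.
preds = [
    ["Microcystis", "Chlorella", "Chlorella"],
    ["Chlorella", "Chlorella", "Anabaena"],
    ["Microcystis", "Chlorella", "Chlorella"],
]
print(identify_harmful(preds, harmful))  # [True, False, True]
```

Union-style voting trades precision for recall, which matches the screening use case: a false alarm only costs a professional a second look, while a missed toxic bloom is far more expensive.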

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Emre Kiyak ◽  
Gulay Unal

Purpose The paper aims to address a tracking algorithm based on deep learning; four deep learning tracking models were developed and compared with each other to prevent collision and to achieve target tracking in autonomous aircraft. Design/methodology/approach First, detection methods were used to follow the visual target, and then the tracking methods were examined. Four models were developed: deep convolutional neural networks (DCNN), deep convolutional neural networks with fine-tuning (DCNNFN), transfer learning with a deep convolutional neural network (TLDCNN), and fine-tuning a deep convolutional neural network with transfer learning (FNDCNNTL). Findings Training DCNN took 9 min 33 s, and its accuracy was 84%. For DCNNFN, training took 4 min 26 s and the accuracy was 91%. Training TLDCNN took 34 min 49 s and the accuracy was 95%. With FNDCNNTL, training took 34 min 33 s and the accuracy was nearly 100%. Originality/value Compared to results in the literature ranging from 89.4% to 95.6%, FNDCNNTL achieved better results in this paper.


2020 ◽  
Vol 9 (2) ◽  
pp. 392 ◽  
Author(s):  
Ki-Sun Lee ◽  
Seok-Ki Jung ◽  
Jae-Jun Ryu ◽  
Sang-Wan Shin ◽  
Jinwook Choi

Dental panoramic radiographs (DPRs) provide information that can potentially be used to evaluate bone density changes through a textural and morphological feature analysis of the mandible. This study aims to evaluate the discriminating performance of deep convolutional neural networks (CNNs), employed with various transfer learning strategies, on the classification of specific features of osteoporosis in DPRs. For objective labeling, we collected a dataset containing 680 images from different patients who underwent both skeletal bone mineral density and digital panoramic radiographic examinations at the Korea University Ansan Hospital between 2009 and 2018. Four study groups were used to evaluate the impact of various transfer learning strategies on deep CNN models as follows: a basic CNN model with three convolutional layers (CNN3), the Visual Geometry Group deep CNN model (VGG-16), a transfer learning model from VGG-16 (VGG-16_TF), and fine-tuning with the transfer learning model (VGG-16_TF_FT). The best performing model achieved an overall area under the receiver operating characteristic curve of 0.858. In this study, transfer learning and fine-tuning improved the performance of a deep CNN for screening osteoporosis in DPR images. In addition, using the gradient-weighted class activation mapping technique, a visual interpretation of the best performing deep CNN model indicated that the model relied on image features in the lower left and right borders of the mandible. This result suggests that deep learning-based assessment of DPR images could be useful and reliable in the automated screening of osteoporosis patients.
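The visual interpretation above relies on gradient-weighted class activation mapping (Grad-CAM). Its core computation is simple once the feature maps of the last convolutional layer and the gradients of the target class score with respect to them have been extracted from the network; a minimal NumPy sketch of just that step (the extraction from a real model is assumed done elsewhere):

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Core Grad-CAM step: weight each feature map by its spatially pooled
    gradient, sum over channels, and apply ReLU.

    feature_maps, gradients: arrays of shape (channels, height, width), taken
    from the last convolutional layer for the target class score.
    """
    weights = gradients.mean(axis=(1, 2))              # alpha_k: global-average-pooled gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positively contributing regions
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1] for overlay
    return cam

# Toy example: two uniform 3x3 feature maps with uniform positive gradients.
heatmap = grad_cam(np.ones((2, 3, 3)), np.full((2, 3, 3), 0.5))
```

Upsampled to the input resolution and overlaid on the radiograph, this heatmap is what localizes the decision to the mandibular border regions.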


Author(s):  
Jun Gao ◽  
Qian Jiang ◽  
Bo Zhou ◽  
Daozheng Chen

Aim and Objective: Lung nodule detection is critical for improving the five-year survival rate and reducing mortality in patients with lung cancer. Numerous methods based on convolutional neural networks (CNNs) have been proposed for lung nodule detection in computed tomography (CT) images, and with the continuing development of computer hardware, detection accuracy and efficiency can still be improved. Materials and Methods: In this study, an automatic lung nodule detection method using CNNs with transfer learning is presented. We first compare three state-of-the-art convolutional neural network (CNN) models, namely VGG16, VGG19, and ResNet50, to determine the most suitable model for lung nodule detection. We then apply two different training strategies, freezing layers and fine-tuning, to illustrate the effectiveness of transfer learning. Furthermore, hyper-parameters of the CNN model such as the optimizer, batch size, and number of epochs are optimized. Results: Evaluated on the Lung Nodule Analysis 2016 (LUNA16) challenge, promising results are achieved, with an accuracy of 96.86%, a precision of 91.10%, a sensitivity of 90.78%, a specificity of 98.13%, and an AUC of 99.37%. Conclusion: Compared with other works, state-of-the-art specificity is obtained, which demonstrates that the proposed method is effective and applicable to lung nodule detection.
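The difference between the two training strategies named above is which parameters receive gradient updates: with frozen layers, the pre-trained backbone is left untouched and only the new head is trained; with fine-tuning, the backbone is updated as well. A framework-free NumPy sketch of this distinction on a toy two-layer linear model (the shapes and data are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a "pre-trained" linear feature extractor and a fresh head.
W_base = rng.normal(size=(4, 8))
W_head = rng.normal(size=(8, 2))

def sgd_step(x, grad_out, lr=0.1, freeze_base=True):
    """One SGD step on the two-layer linear model x @ W_base @ W_head.

    freeze_base=True  -> only the head is updated (frozen-layers strategy).
    freeze_base=False -> both layers are updated (fine-tuning strategy).
    """
    global W_base, W_head
    h = x @ W_base                        # features produced by the base
    g_head = h.T @ grad_out               # dL/dW_head
    g_base = x.T @ (grad_out @ W_head.T)  # dL/dW_base via the chain rule
    W_head -= lr * g_head
    if not freeze_base:                   # frozen layers receive no update
        W_base -= lr * g_base

x = rng.normal(size=(1, 4))
g = rng.normal(size=(1, 2))
base_before = W_base.copy()
sgd_step(x, g, freeze_base=True)
frozen_unchanged = np.allclose(W_base, base_before)  # base weights untouched
```

In a real framework the same switch is a per-layer trainable flag; freezing trains faster and resists overfitting on small datasets, while fine-tuning can adapt the features themselves when enough data is available.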


Author(s):  
Mazhar Basyouni Tayel ◽  
Azza Mahmoud Elbagoury

Background: Accurate segmentation of breast infrared thermography is an important step in the early detection of breast pathological changes. Automatic segmentation of breast infrared thermography is a very challenging task, as it is difficult to find an accurate breast contour and extract regions of interest from it. Although several semi-automatic methods have been proposed for segmentation, their performance often depends on hand-crafted image features as well as preprocessing operations. Objective: In this work, an approach to automatic semantic segmentation of breast infrared thermography is proposed, based on end-to-end fully convolutional neural networks and without any pre- or post-processing. Methods: The lack of labeled breast infrared thermography data limits the complete utilization of fully convolutional neural networks. The proposed model overcomes this challenge by applying data augmentation and two-tier transfer learning from bigger datasets, combined with adaptive multi-tier fine-tuning, before training the fully convolutional neural network model. Results: Experimental results show that the proposed approach achieves better segmentation results than hand-crafted segmentation methods: 97.986% accuracy, 98.36% sensitivity, and 97.61% specificity. Conclusion: This work provided end-to-end automatic semantic segmentation of breast infrared thermography by combining fully convolutional networks, adaptive multi-tier fine-tuning, and transfer learning. This work was also able to deal with the challenges of applying convolutional neural networks to such data while achieving state-of-the-art accuracy.
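Data augmentation is the first of the countermeasures against the small labeled dataset named above. The abstract does not list the transforms used; a minimal sketch with generic label-preserving geometric variants (flips and 90-degree rotations, chosen here only for illustration):

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate simple label-preserving variants of a 2-D image array:
    the original, horizontal/vertical flips, and 90-degree rotations."""
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants

img = np.arange(4.0).reshape(2, 2)
batch = augment(img)  # 6 variants from a single labeled image
```

For segmentation, the identical transform must also be applied to each ground-truth mask so image and label stay aligned; each labeled thermogram then yields several training pairs.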


2019 ◽  
Vol 8 (2S11) ◽  
pp. 3677-3680

Dog breed identification is a specific application of Convolutional Neural Networks. Although image classification with a Convolutional Neural Network is an efficient method, it still has a few drawbacks: CNNs require a large number of images as training data and substantial time to train before achieving high classification accuracy. To overcome this substantial training cost, we use transfer learning. In computer vision, transfer learning refers to the use of pre-trained models to train a CNN. Through transfer learning, a pre-trained model is adapted to a classification problem similar to the one it was originally trained on. In this project, we use various pre-trained models (VGG16, Xception, and InceptionV3) on over 1400 images covering 120 breeds, of which 16 dog breeds were used as classes for training, and obtain bottleneck features from these pre-trained models. Finally, logistic regression, a multiclass classifier, is used to identify the breed of the dog from the images, obtaining 91%, 94%, and 95% validation accuracy for the pre-trained models VGG16, Xception, and InceptionV3, respectively.


2021 ◽  
Vol 2 (3) ◽  
Author(s):  
Gustaf Halvardsson ◽  
Johanna Peterson ◽  
César Soto-Valero ◽  
Benoit Baudry

Abstract The automatic interpretation of sign languages is a challenging task, as it requires high-level vision and high-level motion processing systems to provide accurate image perception. In this paper, we use Convolutional Neural Networks (CNNs) and transfer learning to enable computers to interpret signs of the Swedish Sign Language (SSL) hand alphabet. Our model consists of a pre-trained InceptionV3 network trained with the mini-batch gradient descent optimization algorithm, and we rely on transfer learning during the pre-training of the model. The final accuracy of the model, based on 8 study subjects and 9400 images, is 85%. Our results indicate that CNNs are a promising approach to interpreting sign languages, and that transfer learning can be used to achieve high testing accuracy despite a small training dataset. Furthermore, we describe the implementation details of our model as a user-friendly web application for interpreting signs.
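The mini-batch gradient descent algorithm mentioned above updates the weights on small random subsets of the training set rather than on the whole set at once. A minimal sketch of the scheme on a toy least-squares objective standing in for the network loss (the data, learning rate, and batch size are illustrative, not the paper's settings):

```python
import numpy as np

def minibatch_sgd(X, y, batch_size=32, lr=0.1, epochs=20, seed=0):
    """Mini-batch gradient descent on a least-squares objective:
    each epoch reshuffles the data and steps once per batch."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        order = rng.permutation(n)  # fresh shuffle every epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad
    return w

# Toy check: recover a known weight vector from noiseless linear data.
X = np.random.default_rng(1).normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = minibatch_sgd(X, y)
```

Compared with full-batch gradient descent, the mini-batch variant gives cheaper, noisier updates, which is why it scales to the thousands of training images used here.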

