Automated color detection in orchids using color labels and deep learning

PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0259036
Author(s):  
Diah Harnoni Apriyanti ◽  
Luuk J. Spreeuwers ◽  
Peter J. F. Lucas ◽  
Raymond N. J. Veldhuis

The color of particular parts of a flower is often employed as one of the features to differentiate between flower types. Thus, color is also used in flower-image classification. Color labels, such as ‘green’, ‘red’, and ‘yellow’, are used by taxonomists and lay people alike to describe the color of plants. Flower-image datasets usually consist only of images and do not contain flower descriptions. In this research, we have built a flower-image dataset, specifically of orchid species, which consists of human-friendly textual descriptions of the features of specific flowers on the one hand, and digital photographs showing what a flower looks like on the other. Using this dataset, a new automated color detection model was developed. It is the first research of its kind to use color labels and deep learning for color detection in flower recognition. As deep learning often excels at pattern recognition in digital images, we applied transfer learning with various amounts of layer unfreezing to five different neural network architectures (VGG16, Inception, ResNet50, Xception, NASNet) to determine which architecture and which transfer-learning scheme performs best. In addition, various color-scheme scenarios were tested, including the use of primary and secondary colors together, and the effectiveness of handling multi-class classification with multi-class, combined binary, and ensemble classifiers was studied. The best overall performance was achieved by the ensemble classifier. The results show that the proposed method can detect the color of the flower and the labellum very well without having to perform image segmentation. The results of this study can act as a foundation for the development of an image-based plant recognition system that is able to offer an explanation of a provided classification.
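
Below is a minimal sketch of the kind of transfer-learning setup with partial layer unfreezing that the abstract describes, using Keras and VGG16; the unfreeze depth, input size, and number of color classes are illustrative assumptions, not values from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_COLOR_CLASSES = 10   # assumption: number of color labels in the dataset
UNFREEZE_FROM = -4       # assumption: unfreeze only the last few convolutional layers

# Load an ImageNet-pretrained backbone without its classification head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze everything, then unfreeze the last few layers ("partial unfreezing").
base.trainable = True
for layer in base.layers[:UNFREEZE_FROM]:
    layer.trainable = False

# Attach a small classification head that predicts a color label.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_COLOR_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```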

Author(s):  
Fouzia Altaf ◽  
Syed M. S. Islam ◽  
Naeem Khalid Janjua

Abstract: Deep learning has provided numerous breakthroughs in natural imaging tasks. However, its successful application to medical images is severely handicapped by the limited amount of annotated training data. Transfer learning is commonly adopted for medical imaging tasks. However, a large covariate shift between the source domain of natural images and the target domain of medical images results in poor transfer learning. Moreover, the scarcity of annotated data for medical imaging tasks causes further problems for effective transfer learning. To address these problems, we develop an augmented ensemble transfer learning technique that leads to a significant performance gain over conventional transfer learning. Our technique uses an ensemble of deep learning models, where the architecture of each network is modified with extra layers to account for the dimensionality change between the images of the source and target data domains. Moreover, the model is hierarchically tuned to the target domain with augmented training data. Along with the network ensemble, we also utilize an ensemble of dictionaries based on features extracted from the augmented models. The dictionary ensemble provides an additional performance boost to our method. We first establish the effectiveness of our technique on the challenging ChestX-ray14 radiography data set. Our experimental results show more than a 50% reduction in the error rate with our method as compared to the baseline transfer learning technique. We then apply our technique to a recent COVID-19 data set for binary and multi-class classification tasks. Our technique achieves 99.49% accuracy for binary classification and 99.24% for multi-class classification.
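
A rough Keras sketch of the general idea of an ensemble of pre-trained networks extended with extra adaptation layers and averaged at prediction time; the backbones, layer sizes, and simple probability averaging are illustrative assumptions rather than the authors' exact architecture or tuning schedule.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121, ResNet50

NUM_CLASSES = 2          # assumption: binary task (e.g. disease vs. normal)
INPUT_SHAPE = (224, 224, 3)

def adapted_model(backbone_fn):
    """Wrap a pretrained backbone with extra layers that adapt it to the target domain."""
    backbone = backbone_fn(weights="imagenet", include_top=False, input_shape=INPUT_SHAPE)
    backbone.trainable = False  # later stages would progressively fine-tune this
    inputs = layers.Input(shape=INPUT_SHAPE)
    x = backbone(inputs, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(512, activation="relu")(x)   # extra adaptation layer
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs, outputs)

# Build an ensemble from several backbones and average their predictions.
ensemble = [adapted_model(DenseNet121), adapted_model(ResNet50)]

def ensemble_predict(images):
    probs = [m.predict(images) for m in ensemble]
    return tf.reduce_mean(tf.stack(probs, axis=0), axis=0)
```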


2021 ◽  
Vol 56 (5) ◽  
pp. 241-252
Author(s):  
Shereen A. El-Aal ◽  
Neveen I. Ghali

Alzheimer's disease (AD) is an advanced and incurable neurodegenerative disease that causes progressive impairment of memory and cognitive functions due to the deterioration of brain cells. Early diagnosis is essential to avoid permanent memory loss and to support the development of future treatments. Deep learning (DL) is a vital technique for medical imaging systems for AD diagnostics. The problem is a multi-class classification task seeking high accuracy. DL models have shown strong accuracy for multi-class prediction. In this paper, a DL architecture is proposed to classify magnetic resonance imaging (MRI) scans and predict different stages of AD based on pre-trained Convolutional Neural Network (CNN) models and optimization algorithms. The proposed model architecture attempts to find the optimal subset of features to improve classification accuracy and reduce classification time. The pre-trained DL models ResNet-101 and DenseNet-201 are utilized to extract features from the last layer, and the Rival Genetic Algorithm (RGA) and Pbest-Guide Binary Particle Swarm Optimization (PBPSO) are applied to select the optimal features. Then, the DL features and the selected features are passed separately through the created classifier for classification. The results are compared and analyzed in terms of accuracy, performance metrics, and execution time. Experimental results showed that the most efficient accuracies were obtained with the PBPSO-selected features, which reached 87.3% and 94.8% accuracy in less time (46.7 s and 32.7 s) for features based on ResNet-101 and DenseNet-201, respectively.
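
An illustrative sketch of the feature-extraction step described above, pulling last-layer features from a pretrained DenseNet-201 in PyTorch and training a simple classifier on a selected subset; the boolean feature mask and the logistic-regression classifier are stand-ins for RGA/PBPSO and the authors' classifier, which are not reproduced here.

```python
import torch
import torchvision.models as tvm
from sklearn.linear_model import LogisticRegression

# Pretrained DenseNet-201 used purely as a feature extractor (classifier head removed).
backbone = tvm.densenet201(weights=tvm.DenseNet201_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(batch):          # batch: (N, 3, 224, 224) tensor of MRI slices
    return backbone(batch).numpy()    # (N, 1920) last-layer features

# Hypothetical stand-in for PBPSO/RGA output: a boolean mask over feature columns.
def select_features(features, mask):
    return features[:, mask]

# Train a simple classifier on the selected features.
# X_train, y_train, and mask would come from the dataset and the feature-selection step:
# clf = LogisticRegression(max_iter=1000).fit(select_features(X_train, mask), y_train)
```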


Author(s):  
Pritam Ghosh ◽  
Subhranil Mustafi ◽  
Satyendra Nath Mandal

In this paper, an attempt has been made to identify six different goat breeds from images of pure-breed goats. The images were captured at different organized, registered goat farms in India; almost two thousand digital images of individual goats were captured in restricted (to obtain a similar image background) and unrestricted (natural) environments without imposing stress on the animals. A pre-trained deep learning-based object detection model called Faster R-CNN has been fine-tuned using transfer learning on the acquired images for automatic classification and localization of goat breeds. This fine-tuned model is able to locate the goat in the image (localization) and classify its breed (identification). The Pascal VOC object detection evaluation metrics have been used to evaluate this model. Finally, a comparison has been made with the prediction accuracies of different technologies used for animal breed identification.
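
A condensed sketch of fine-tuning a pretrained Faster R-CNN detection head for a small number of classes with torchvision; the class count (six breeds plus background) matches the abstract, while the optimizer settings and training-loop details are assumptions for illustration.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 7  # 6 goat breeds + background

# Load a COCO-pretrained Faster R-CNN and swap its box predictor for our classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=0.005, momentum=0.9, weight_decay=0.0005)

# One transfer-learning step: images is a list of image tensors, targets a list of
# dicts with "boxes" and "labels" from the annotated goat dataset.
def train_step(images, targets):
    model.train()
    loss_dict = model(images, targets)     # detection losses (classifier, box reg, RPN)
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```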


Author(s):  
N. Durga Indira ◽  
M. Venu Gopala Rao

In automotive vehicles, radar is one of the key components for autonomous driving, used for target detection and long-range sensing. When interference is present in the received signals, the noise level increases and target detection is severely affected. For these reasons, various interference mitigation techniques are implemented in this paper. Using these mitigation techniques, interference and noise are reduced and the original signals are reconstructed. In this paper, we propose a method to mitigate interference in the signal using deep learning. The proposed method provides the best and most accurate performance under various interference conditions and gives better accuracy compared with other existing methods.
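
The abstract does not detail the network, but interference mitigation of this kind is often framed as learning to reconstruct a clean signal from a corrupted one; below is a minimal, hypothetical 1D convolutional model in Keras illustrating that framing, not the authors' model, with the signal length chosen arbitrarily.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SIGNAL_LEN = 1024  # assumption: number of samples per radar chirp/segment

# Hypothetical denoising model: maps an interference-corrupted signal to a clean one.
model = models.Sequential([
    layers.Input(shape=(SIGNAL_LEN, 1)),
    layers.Conv1D(32, 9, padding="same", activation="relu"),
    layers.Conv1D(32, 9, padding="same", activation="relu"),
    layers.Conv1D(1, 9, padding="same"),   # reconstructed clean signal
])

model.compile(optimizer="adam", loss="mse")
# model.fit(corrupted_signals, clean_signals, epochs=50, batch_size=64)
```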


2021 ◽  
Vol 11 (16) ◽  
pp. 7188
Author(s):  
Tieming Chen ◽  
Yunpeng Chen ◽  
Mingqi Lv ◽  
Gongxun He ◽  
Tiantian Zhu ◽  
...  

Malicious HTTP traffic detection plays an important role in web application security. Most existing work applies machine learning and deep learning techniques to build the malicious HTTP traffic detection model. However, these approaches still suffer from the problems of high training-data collection cost and low cross-dataset generalization ability. To address these problems, this paper proposes DeepPTSD, a deep learning method for payload-based malicious HTTP traffic detection. First, it treats malicious HTTP traffic detection as a text classification problem and trains the initial detection model using TextCNN on a public dataset, and then adapts the initial detection model to the target dataset based on a transfer learning algorithm. Second, in the transfer learning procedure, it uses a semi-supervised learning algorithm to accomplish the model adaptation task. The semi-supervised learning algorithm enhances the target dataset based on an HTTP payload data augmentation mechanism to exploit both the labeled and unlabeled data. We evaluate DeepPTSD on two real HTTP traffic datasets. The results show that DeepPTSD has competitive performance under small-data conditions.
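
A compact Keras sketch of a TextCNN classifier over tokenized HTTP payloads, along the lines of the first stage described above; the vocabulary size, sequence length, and filter sizes are illustrative assumptions, not DeepPTSD's actual configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 5000   # assumption: size of the payload token/character vocabulary
SEQ_LEN = 200       # assumption: truncated/padded payload length

inputs = layers.Input(shape=(SEQ_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 128)(inputs)

# TextCNN: parallel convolutions with several kernel sizes, max-pooled and concatenated.
pooled = []
for k in (3, 4, 5):
    c = layers.Conv1D(100, k, activation="relu")(x)
    pooled.append(layers.GlobalMaxPooling1D()(c))
x = layers.Concatenate()(pooled)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # malicious vs. benign

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Pre-train on the public dataset, then fine-tune on the (augmented) target dataset.
```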


2021 ◽  
Author(s):  
Lidia Cleetus ◽  
Raji Sukumar ◽  
Hemalatha N

In this paper, a detection tool has been built for the detection and identification of diseases and pests found in crops at their earliest stage. For this, various deep learning architectures were experimented with to see which would help in building a more accurate and efficient detection model. The deep learning architectures used in this study were a Convolutional Neural Network, VGG16, InceptionV3, and Xception. VGG16, InceptionV3, and Xception are categorized as pre-trained models based on the CNN architecture and follow the concept of transfer learning. Transfer learning is a technique that reuses what models have previously learned on a base dataset and applies it to the dataset at hand. It is an efficient technique that gives rapid results and improved performance. Two plant datasets have been used here, for diseases and for pests. The results of the algorithms were then compared. The most successful was the Xception model, which obtained 82.89 for disease and 77.9 for pests.
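
A brief sketch of the pre-trained-model comparison described above, shown for one of the backbones (Xception) with a frozen base and a new classification head in Keras; the class count and image size are assumptions, and the same head can be attached to VGG16 or InceptionV3 to compare architectures.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

NUM_CLASSES = 15  # assumption: number of disease/pest categories in the dataset

base = Xception(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # reuse ImageNet features as-is (transfer learning)

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```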


2021 ◽  
Vol 5 (4) ◽  
pp. 37-53
Author(s):  
Zurana Mehrin Ruhi ◽  
Sigma Jahan ◽  
Jia Uddin

In the fourth industrial revolution, data-driven intelligent fault diagnosis for industrial purposes plays a crucial role. Although deep learning is currently a popular approach for fault diagnosis, it requires massive amounts of labelled samples for training, which are arduous to come by in the real world. Our contribution, a novel comprehensive intelligent fault detection model evaluated on the Case Western Reserve University dataset, is divided into two steps. First, a new hybrid signal decomposition methodology is developed, comprising Empirical Mode Decomposition and Variational Mode Decomposition, to leverage signal information from both processes for effective feature extraction. Second, transfer learning with DenseNet121 is employed to alleviate the constraints of deep learning models. Finally, our proposed technique not only surpassed previous outcomes but also achieved state-of-the-art results as measured by the F1 score.
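
A rough sketch of the hybrid decomposition idea, computing both EMD and VMD components of a vibration signal before handing features to a DenseNet121 fine-tuned via transfer learning; it assumes the PyEMD (EMD-signal) and vmdpy packages and uses illustrative parameter values, not the authors' configuration.

```python
import numpy as np
from PyEMD import EMD    # assumption: EMD-signal (PyEMD) package
from vmdpy import VMD    # assumption: vmdpy package

def hybrid_decompose(signal, n_modes=4):
    """Decompose a 1D vibration signal with both EMD and VMD."""
    imfs = EMD()(signal)                                      # intrinsic mode functions
    modes, _, _ = VMD(signal, alpha=2000, tau=0.0, K=n_modes,
                      DC=0, init=1, tol=1e-7)                 # variational modes
    return imfs, modes

# Features extracted from both sets of components can be combined and fed to a
# pretrained DenseNet121 fine-tuned on the fault classes (transfer learning).
```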


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Dandi Yang ◽  
Cristhian Martinez ◽  
Lara Visuña ◽  
Hardev Khandhar ◽  
Chintan Bhatt ◽  
...  

Abstract: The main purpose of this work is to investigate and compare several deep learning enhanced techniques applied to X-ray and CT-scan medical images for the detection of COVID-19. In this paper, we used four powerful pre-trained CNN models, VGG16, DenseNet121, ResNet50, and ResNet152, for the COVID-19 CT-scan binary classification task. The proposed Fast.AI ResNet framework was designed to find the best architecture, pre-processing, and training parameters for the models largely automatically. The accuracy and F1-score were both above 96% in the diagnosis of COVID-19 using CT-scan images. In addition, we applied transfer learning techniques to overcome the insufficient data and to improve the training time. The binary and multi-class classification of X-ray images was performed by utilizing an enhanced VGG16 deep transfer learning architecture. A high accuracy of 99% was achieved by the enhanced VGG16 in detecting COVID-19 and pneumonia from X-ray images. The accuracy and validity of the algorithms were assessed on well-known public X-ray and CT-scan datasets. The proposed methods have better results for COVID-19 diagnosis than other related methods in the literature. In our opinion, our work can help virologists and radiologists to make a better and faster diagnosis in the struggle against the outbreak of COVID-19.
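
A small sketch of the kind of fastai-based fine-tuning workflow the abstract refers to, using a ResNet backbone on a folder-organized CT-scan dataset; the dataset path, resize, batch size, and epoch count are placeholders, not the authors' settings.

```python
from fastai.vision.all import (ImageDataLoaders, Resize, accuracy, F1Score,
                               cnn_learner, resnet50)

# Assumption: images organized as data/ct_scans/<class>/<image>.png with two
# classes (COVID vs. non-COVID); the path is a placeholder.
dls = ImageDataLoaders.from_folder("data/ct_scans", valid_pct=0.2,
                                   item_tfms=Resize(224), bs=32)

learn = cnn_learner(dls, resnet50, metrics=[accuracy, F1Score()])
learn.fine_tune(5)   # transfer learning: frozen epochs first, then unfreeze and train
```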


2020 ◽  
Vol 12 (2) ◽  
pp. 245 ◽  
Author(s):  
J. Senthilnath ◽  
Neelanshi Varia ◽  
Akanksha Dokania ◽  
Gaotham Anand ◽  
Jón Atli Benediktsson

Unmanned aerial vehicle (UAV) remote sensing has a wide range of applications, and in this paper we attempt to address one such problem: road extraction from UAV-captured RGB images. The key challenge here is to solve the road extraction problem using multiple UAV remote sensing scene datasets that are acquired with different sensors over different locations. We aim to extract the knowledge from a dataset available in the literature and apply this extracted knowledge to our dataset. The paper focuses on a novel method, deep TEC (deep transfer learning with ensemble classifier), for road extraction using UAV imagery. The proposed deep TEC performs road extraction on UAV imagery in two stages, namely, deep transfer learning and ensemble classification. In the first stage, with the help of deep learning methods, namely the conditional generative adversarial network, the cycle generative adversarial network and the fully convolutional network, the model is pre-trained on the benchmark UAV road extraction dataset available in the literature. With this extracted knowledge (based on the pre-trained model), the road regions are then extracted from our UAV-acquired images. Finally, for the road-classified images, ensemble classification is carried out. In particular, the deep TEC method has an average quality of 71%, which is 10% higher than the next best standard deep learning method. Deep TEC also scores higher on performance measures such as completeness, correctness and F1 score. Therefore, the obtained results show that deep TEC is efficient at extracting road networks in an urban region.
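
A toy sketch of the ensemble stage described above, majority-voting the binary road masks produced by several pre-trained segmentation models; the three dummy masks are placeholders standing in for the cGAN, CycleGAN, and FCN predictions mentioned in the abstract.

```python
import numpy as np

def ensemble_road_mask(masks):
    """Majority vote over binary road masks of shape (H, W) from several models."""
    stacked = np.stack(masks, axis=0).astype(np.uint8)
    votes = stacked.sum(axis=0)
    return (votes > len(masks) / 2).astype(np.uint8)

# Example with dummy masks standing in for cGAN / CycleGAN / FCN outputs.
h, w = 256, 256
mask_cgan = np.random.randint(0, 2, (h, w))
mask_cyclegan = np.random.randint(0, 2, (h, w))
mask_fcn = np.random.randint(0, 2, (h, w))
road = ensemble_road_mask([mask_cgan, mask_cyclegan, mask_fcn])
```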

