Classification of Watermelon Seeds Using Morphological Patterns of X-ray Imaging: A Comparison of Conventional Machine Learning and Deep Learning

Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6753
Author(s):  
Mohammed Raju Ahmed ◽  
Jannat Yasmin ◽  
Eunsung Park ◽  
Geonwoo Kim ◽  
Moon S. Kim ◽  
...  

In this study, conventional machine learning and deep learning approaches were evaluated using X-ray imaging techniques for investigating the internal parameters (endosperm and air space) of three cultivars of watermelon seed. For the conventional machine learning approach, six types of image features were extracted after applying different types of image preprocessing, such as image intensity and contrast enhancement and noise reduction. The sequential forward selection (SFS) method and the Fisher objective function were used as the search strategy and criterion for feature optimization. Three classifiers were tested (linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and the k-nearest neighbors algorithm (KNN)) to find the best performer. In the transfer learning (deep learning) approaches, a simple ConvNet, AlexNet, VGG-19, ResNet-50, and ResNet-101 were used for training on the dataset and class prediction of the seeds. For the supervised model development (both conventional machine learning and deep learning), the germination test results of the samples were used, with the seeds divided into two classes: (1) normal viable seeds and (2) nonviable and abnormal viable seeds. In the conventional classification, 83.6% accuracy was obtained by LDA using 48 features. ResNet-50 performed better than the other transfer learning architectures, with 87.3% accuracy, the highest among all classification models. The findings of this study demonstrate that transfer learning is a constructive strategy for classifying seeds by analyzing their morphology, and that X-ray imaging can be adopted as a potential imaging technique.
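
A minimal sketch of the transfer learning side of such a workflow is shown below: fine-tuning an ImageNet-pretrained ResNet-50 for the two germination classes with PyTorch/torchvision. The directory layout, image size, and hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: fine-tuning a pretrained ResNet-50 for the two seed classes
# (normal viable vs. nonviable/abnormal). Requires a recent torchvision (>= 0.13).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# X-ray seed images arranged as seed_xray/train/<class_name>/<image>.png (assumed layout)
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # replicate the X-ray channel for RGB nets
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("seed_xray/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # two germination classes
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):  # epoch count is an assumption
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```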

2021 ◽  
pp. 1-12
Author(s):  
Mukul Kumar ◽  
Nipun Katyal ◽  
Nersisson Ruban ◽  
Elena Lyakso ◽  
A. Mary Mekala ◽  
...  

Over the years, the need to differentiate various emotions from oral communication has played an important role in emotion-based studies. Different algorithms have been proposed to classify the kinds of emotion, but there is no measure of the fidelity of the emotion under consideration, primarily because most of the readily available annotated datasets are produced by actors rather than generated in real-world scenarios. Therefore, the predicted emotion lacks an important aspect called authenticity, that is, whether an emotion is actual or stimulated. In this research work, we have developed a transfer learning and style transfer based hybrid convolutional neural network algorithm to classify the emotion as well as the fidelity of the emotion. The model is trained on features extracted from a dataset that contains stimulated as well as actual utterances. We compared the developed algorithm with conventional machine learning and deep learning techniques on metrics such as accuracy, precision, recall, and F1 score. The developed model performs much better than the conventional machine learning and deep learning models, achieving precision, recall, and F1 score values of 0.994, 0.996, and 0.995 for speech authenticity and 0.992, 0.989, and 0.99 for speech emotion classification, respectively. The research aims to dive deeper into human emotion and build a model that understands it as humans do.
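
The comparison metrics named above can be computed per task with scikit-learn; the sketch below is a hedged illustration with placeholder label arrays rather than the study's data.

```python
# Hedged sketch: accuracy, precision, recall, and F1 for the two prediction
# tasks (speech authenticity and speech emotion). Labels are placeholders.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def report(y_true, y_pred, task_name):
    """Print the four evaluation metrics for one classification task."""
    print(f"{task_name}:")
    print("  accuracy :", accuracy_score(y_true, y_pred))
    print("  precision:", precision_score(y_true, y_pred, average="weighted"))
    print("  recall   :", recall_score(y_true, y_pred, average="weighted"))
    print("  F1 score :", f1_score(y_true, y_pred, average="weighted"))

# Dummy usage (authenticity: 0 = acted, 1 = spontaneous; emotion: 0/1/2 = classes)
report([0, 1, 1, 0, 1], [0, 1, 0, 0, 1], "speech authenticity")
report([2, 0, 1, 2, 1], [2, 0, 1, 1, 1], "speech emotion")
```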


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Mundher Mohammed Taresh ◽  
Ningbo Zhu ◽  
Talal Ahmed Ali Ali ◽  
Asaad Shakir Hameed ◽  
Modhi Lafta Mutar

The novel coronavirus disease 2019 (COVID-19) is a contagious disease that has caused thousands of deaths and infected millions worldwide. Thus, technologies that allow for the fast detection of COVID-19 infections with high accuracy can offer healthcare professionals much-needed help. This study is aimed at evaluating the effectiveness of state-of-the-art pretrained Convolutional Neural Networks (CNNs) for the automatic diagnosis of COVID-19 from chest X-rays (CXRs). The dataset used in the experiments consists of 1200 CXR images from individuals with COVID-19, 1345 CXR images from individuals with viral pneumonia, and 1341 CXR images from healthy individuals. In this paper, the effectiveness of artificial intelligence (AI) in the rapid and precise identification of COVID-19 from CXR images is explored using different pretrained deep learning algorithms, fine-tuned to maximise detection accuracy, in order to identify the best-performing algorithms. The results showed that deep learning with X-ray imaging is useful in collecting critical biological markers associated with COVID-19 infections. VGG16 and MobileNet obtained the highest accuracy of 98.28%. However, VGG16 outperformed all other models in COVID-19 detection, with an accuracy, F1 score, precision, specificity, and sensitivity of 98.72%, 97.59%, 96.43%, 98.70%, and 98.78%, respectively. The outstanding performance of these pretrained models can significantly improve the speed and accuracy of COVID-19 diagnosis. However, a larger dataset of COVID-19 X-ray images is required for more accurate and reliable identification of COVID-19 infections when using deep transfer learning. This would be extremely beneficial in this pandemic, when the disease burden and the need for preventive measures are in conflict with the currently available resources.
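
As a hedged illustration of this kind of fine-tuning, the sketch below adapts an ImageNet-pretrained VGG16 to the three CXR classes (COVID-19, viral pneumonia, normal) with tf.keras; the directory names, image size, and training settings are assumptions, and a recent TensorFlow is assumed.

```python
# Hedged sketch: fine-tuning a pretrained VGG16 on three CXR classes.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base; unfreeze later to fine-tune

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),  # COVID-19 / viral pneumonia / normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Assumed layout: cxr/train/<class_name>/*.png and cxr/val/<class_name>/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr/train", image_size=(224, 224), batch_size=32, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr/val", image_size=(224, 224), batch_size=32, label_mode="categorical")

model.fit(train_ds, validation_data=val_ds, epochs=10)
```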


2020 ◽  
Author(s):  
Sarath Pathari ◽  
Rahul U

In this study, a dataset of X-ray images from patients with common viral pneumonia, bacterial pneumonia, and confirmed Covid-19 disease was utilized for the automatic detection of the coronavirus disease. The aim of the investigation is to assess the performance of state-of-the-art convolutional neural network architectures proposed in recent years for medical image classification. Specifically, the approach known as transfer learning was adopted. With transfer learning, the detection of various abnormalities in small medical image datasets is an achievable objective, often yielding remarkable results. The datasets used in this trial are as follows: firstly, a collection of 24000 X-ray images, including 6000 images of confirmed Covid-19 disease, 6000 of confirmed common bacterial pneumonia, and 6000 images of normal conditions. The data were gathered and augmented from the X-ray images available in public medical repositories. The outcomes suggest that deep learning with X-ray imaging may extract significant biomarkers related to the Covid-19 disease, with the best accuracy, sensitivity, and specificity obtained being 97.83%, 96.81%, and 98.56%, respectively.
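
As a small, hedged illustration of how accuracy, sensitivity, and specificity such as those reported above are typically derived, the sketch below computes them from a binary confusion matrix; the label arrays are placeholders, not the study's data.

```python
# Hedged sketch: accuracy, sensitivity, and specificity for the Covid-19 class
# from a binary confusion matrix. Labels are dummy values for illustration.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = Covid-19, 0 = other classes
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true positive rate (recall for Covid-19)
specificity = tn / (tn + fp)   # true negative rate
print(accuracy, sensitivity, specificity)
```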


Plant Methods ◽  
2019 ◽  
Vol 15 (1) ◽  
Author(s):  
Niels J. F. De Baerdemaeker ◽  
Michiel Stock ◽  
Jan Van den Bulcke ◽  
Bernard De Baets ◽  
Luc Van Hoorebeke ◽  
...  

Abstract Background Acoustic emission (AE) sensing has been in use since the late 1960s in drought-induced embolism research as a non-invasive and continuous method. It is very well suited to assessing a plant's vulnerability to dehydration. Over the last couple of years, AE sensing has further improved due to progress in AE sensors, data acquisition methods, and analysis systems. Despite these recent advances, it is still challenging to detect drought-induced embolism events among the AE sources registered by the sensors during dehydration, which sometimes calls the quantitative potential of AE sensing into question. Results In search of a method to separate embolism-related AE signals from other dehydration-related signals, a 2-year-old potted Fraxinus excelsior L. tree was subjected to a drought experiment. Embolism formation was acoustically measured with two broadband point-contact AE sensors while simultaneously being visualized by X-ray computed microtomography (µCT). A machine learning method was used to link embolism formation visually detected by µCT with the corresponding AE signals. Specifically, applying linear discriminant analysis (LDA) to the six AE waveform parameters amplitude, counts, duration, signal strength, absolute energy, and partial power in the range 100–200 kHz resulted in an embolism-related acoustic vulnerability curve (VCAE-E) better resembling the standard µCT VC (VCCT), both in time and in the absolute number of embolized vessels. Interestingly, the unfiltered acoustic vulnerability curve (VCAE) also closely resembled VCCT, indicating that VCs constructed from all registered AE signals did not compromise the quantitative interpretation of the species' vulnerability to drought-induced embolism formation. Conclusion Although machine learning could detect a similar number of embolism-related AE signals to µCT, there is still insufficient model-based evidence to conclusively attribute these signals to embolism events. Future research should therefore focus on similar experiments with more in-depth analysis of acoustic waveforms, as well as explore the possibility of using the Fast Fourier transform (FFT) to remove non-embolism-related AE signals.
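
A hedged sketch of the LDA step described above, applied to the six AE waveform parameters, is given below using scikit-learn; the CSV file, its column names, and the labeling convention are assumptions, not the study's data format.

```python
# Hedged sketch: LDA on the six AE waveform parameters to separate
# embolism-related signals from other dehydration-related AE.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FEATURES = ["amplitude", "counts", "duration",
            "signal_strength", "absolute_energy", "partial_power_100_200kHz"]

ae = pd.read_csv("ae_signals.csv")        # one row per registered AE signal (assumed file)
X = ae[FEATURES].values
y = ae["embolism_related"].values         # 1 if matched to a µCT-detected embolism event

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5, scoring="balanced_accuracy")
print("cross-validated balanced accuracy:", scores.mean())

# Fit on all data and keep only signals classified as embolism-related,
# from which an acoustic vulnerability curve (VCAE-E) could then be built.
lda.fit(X, y)
embolism_ae = ae[lda.predict(X) == 1]
```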


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4319 ◽  
Author(s):  
André Dantas de Medeiros ◽  
Laércio Junio da Silva ◽  
João Paulo Oliveira Ribeiro ◽  
Kamylla Calzolari Ferreira ◽  
Jorge Tadeu Fim Rosas ◽  
...  

Optical sensors combined with machine learning algorithms have led to significant advances in seed science. These advances have facilitated the development of robust approaches that provide decision-making support in the seed industry related to the marketing of seed lots. In this study, a novel approach for seed quality classification is presented. We developed classifier models using Fourier transform near-infrared (FT-NIR) spectroscopy and X-ray imaging techniques to predict seed germination and vigor. A forage grass (Urochloa brizantha) was used as a model species. FT-NIR spectroscopy data and radiographic images were obtained from individual seeds, and the models were created based on the following algorithms: linear discriminant analysis (LDA), partial least squares discriminant analysis (PLS-DA), random forest (RF), naive Bayes (NB), and support vector machine with radial basis function kernel (SVM-r). In the germination prediction, the models individually reached an accuracy of 82% using FT-NIR data and 90% using X-ray data. For seed vigor, the models achieved 61% and 68% accuracy using FT-NIR and X-ray data, respectively. Combining the FT-NIR and X-ray data, the classification models reached an accuracy of 85% for predicting germination and 62% for seed vigor. Overall, the models developed using both NIR spectra and X-ray imaging data in machine learning algorithms are efficient in quickly, non-destructively, and accurately identifying the capacity of seeds to germinate. The use of X-ray data and the LDA algorithm showed great potential as a viable alternative to assist in the quality classification of U. brizantha seeds.
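
A hedged sketch of the data-fusion step follows: FT-NIR and X-ray feature vectors are concatenated per seed and several of the classifiers listed above are compared with scikit-learn. The array files and preprocessing are placeholders, and PLS-DA is omitted because scikit-learn has no direct PLS-DA classifier.

```python
# Hedged sketch: feature-level fusion of FT-NIR spectra and X-ray features
# for germination prediction, comparing several classifiers by cross-validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X_nir = np.load("ftnir_spectra.npy")     # shape (n_seeds, n_wavenumbers), assumed file
X_xray = np.load("xray_features.npy")    # shape (n_seeds, n_image_features), assumed file
y = np.load("germinated.npy")            # 1 = germinated, 0 = not germinated

X_fused = np.hstack([X_nir, X_xray])     # simple concatenation of the two domains

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "NB": GaussianNB(),
    "SVM-r": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X_fused, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.2f}")
```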


2020 ◽  
Author(s):  
Pathikkumar Patel ◽  
Bhargav Lad ◽  
Jinan Fiaidhi

During the last few years, RNN models have been used extensively, and they have proven to be well suited to sequence and text data. RNNs have achieved state-of-the-art performance levels in several applications, such as text classification, sequence-to-sequence modelling, and time series forecasting. In this article we review different machine learning and deep learning based approaches for text data and look at the results obtained from these methods. This work also explores the use of transfer learning in NLP and how it affects the performance of models on a specific application: sentiment analysis.
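
As a hedged, minimal example of the kind of RNN-based sentiment classifier discussed here, the sketch below trains a small LSTM on the built-in IMDB reviews dataset with tf.keras; the vocabulary size, sequence length, and layer sizes are illustrative choices.

```python
# Hedged sketch: a small LSTM sentiment classifier on the IMDB dataset.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 20000, 200
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=VOCAB)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAXLEN)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=MAXLEN)

model = models.Sequential([
    layers.Embedding(VOCAB, 128, input_length=MAXLEN),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),   # positive vs. negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_test, y_test),
          batch_size=64, epochs=3)
```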


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4736
Author(s):  
Sk. Tanzir Mehedi ◽  
Adnan Anwar ◽  
Ziaur Rahman ◽  
Kawsar Ahmed

The Controller Area Network (CAN) bus serves as an important protocol in real-time In-Vehicle Network (IVN) systems owing to its simple, suitable, and robust architecture. IVN devices, however, remain insecure and vulnerable because of complex, data-intensive architectures that greatly increase accessibility to unauthorized networks and the possibility of various types of cyberattacks. Therefore, the detection of cyberattacks in IVN devices has become a topic of growing interest. With the rapid development of IVNs and evolving threat types, traditional machine learning-based IDSs must be updated to cope with the security requirements of the current environment. The progress of deep learning and deep transfer learning, and their impactful outcomes in several areas, point to them as an effective solution for network intrusion detection. This manuscript proposes a deep transfer learning-based IDS model for IVNs with improved performance in comparison to several other existing models. The unique contributions include effective attribute selection best suited to identifying malicious CAN messages and accurately detecting normal and abnormal activities, the design of a deep transfer learning-based LeNet model, and an evaluation on real-world data. To this end, an extensive experimental performance evaluation has been conducted. The architecture, along with the empirical analyses, shows that the proposed IDS greatly improves detection accuracy over mainstream machine learning, deep learning, and benchmark deep transfer learning models and demonstrates better performance for real-time IVN security.
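
A hedged sketch of a LeNet-style classifier of the kind named above is shown below, assuming CAN message windows have already been preprocessed into small 2D feature grids; the 29x29x1 input shape and training settings are illustrative choices, not the paper's.

```python
# Hedged sketch: a LeNet-style CNN for classifying CAN traffic as normal or attack.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lenet(input_shape=(29, 29, 1), n_classes=2):
    """Classic LeNet layout: two conv/pool stages followed by dense layers."""
    return models.Sequential([
        layers.Conv2D(6, 5, activation="tanh", padding="same", input_shape=input_shape),
        layers.AveragePooling2D(2),
        layers.Conv2D(16, 5, activation="tanh"),
        layers.AveragePooling2D(2),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(n_classes, activation="softmax"),  # normal vs. attack
    ])

model = build_lenet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, ...)  # x_train: preprocessed CAN message grids (placeholder)
```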


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4595
Author(s):  
Parisa Asadi ◽  
Lauren E. Beckingham

X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation in these images is highly necessary but, like any other digital rock imaging technique, is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful method for reliable phase segmentation. Machine learning-based phase segmentation of X-ray CT images enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques with three machine learning methods and a deep learning method to assess the potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and Feed Forward Artificial Neural Network methods, as well as the modified U-Net model, were applied to the extracted input features. The models' performances were then compared and contrasted to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering more feature dimensions is promising, with all classification algorithms achieving high accuracy, ranging from 0.87 to 0.94. Feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with the linear combination of focal and dice loss also performed well, with accuracies of 0.91 and 0.93 for Mancos and Marcellus, respectively. In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples, such as shales, which are significant unconventional reservoirs in oil recovery.
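
The sketch below is a hedged illustration of that feature-extraction-plus-classifier workflow: classical filter responses are combined with activations from an early VGG16 convolutional layer and fed to a Random Forest for pixel-level segmentation. The file names, the chosen layer, and the filter set are assumptions.

```python
# Hedged sketch: pixel-level phase segmentation from filter-bank and VGG16 features.
import numpy as np
import tensorflow as tf
from skimage import io, filters
from sklearn.ensemble import RandomForestClassifier

img = io.imread("ct_slice.png", as_gray=True).astype("float32")  # one CT slice (assumed file)
h, w = img.shape

# Classical per-pixel filter features
feat_classical = np.stack([
    img,
    filters.gaussian(img, sigma=1),
    filters.gaussian(img, sigma=4),
    filters.sobel(img),
], axis=-1)

# Deep features from an early convolutional layer of pretrained VGG16
vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
layer = tf.keras.Model(vgg.input, vgg.get_layer("block1_conv2").output)
rgb = np.repeat(img[..., None], 3, axis=-1)[None] * 255.0
deep = layer.predict(tf.keras.applications.vgg16.preprocess_input(rgb))[0]  # (h, w, 64)

X = np.concatenate([feat_classical, deep], axis=-1).reshape(h * w, -1)
y = np.load("phase_labels.npy").reshape(-1)   # manually labeled phase per pixel (assumed file)

rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X, y)
segmentation = rf.predict(X).reshape(h, w)
```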


2021 ◽  
Vol 11 (9) ◽  
pp. 4233
Author(s):  
Biprodip Pal ◽  
Debashis Gupta ◽  
Md. Rashed-Al-Mahfuz ◽  
Salem A. Alyami ◽  
Mohammad Ali Moni

The COVID-19 pandemic requires the rapid isolation of infected patients. Thus, high-sensitivity radiology images could be a key technique for diagnosing patients, alongside the polymerase chain reaction approach. Deep learning algorithms have been proposed in several studies to detect COVID-19 symptoms owing to their success in chest radiography image classification, cost efficiency, the shortage of expert radiologists, and the need for faster processing during the pandemic. Most of the promising algorithms proposed in different studies are based on pre-trained deep learning models. Such open-source models and the lack of variation in the radiology image-capturing environment make the diagnosis system vulnerable to adversarial attacks such as the fast gradient sign method (FGSM) attack. This study therefore explored the potential vulnerability of pre-trained convolutional neural network algorithms to the FGSM attack in terms of two frequently used models, VGG16 and Inception-v3. Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification and analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, our study illustrates that misclassification can occur with a very minor perturbation magnitude, such as 0.009 and 0.003 for the FGSM attack in these models for X-ray and CT images, respectively, without any effect on the visual perceptibility of the perturbation. In addition, we demonstrated that a successful FGSM attack can decrease the classification performance to 16.67% and 55.56% for X-ray images, and to 36% and 40% for CT images, for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation effects in the adversarial images. Finally, we showed that the correct-class probability of a test image, which should ideally be 1, drops for both models as the perturbation increases; it can fall to 0.24 and 0.17 for the VGG16 model on X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, practical deployment of such programs requires more robustness.
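
A hedged sketch of the FGSM perturbation itself, applied to a VGG16 classifier with a replaced two-class head in PyTorch, is given below; the model weights, input tensor, and labels are placeholders, and the epsilon of 0.009 simply echoes the magnitude reported above.

```python
# Hedged sketch: fast gradient sign method (FGSM) attack on a fine-tuned VGG16.
# Requires a recent torchvision (>= 0.13) for the weights enum.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)   # e.g., COVID-19 vs. non-COVID head (untrained here)
model.eval()

def fgsm_attack(model, x, y, epsilon):
    """Return an adversarial copy of x perturbed along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)          # placeholder chest X-ray tensor
y = torch.tensor([0])                   # placeholder true label
x_adv = fgsm_attack(model, x, y, epsilon=0.009)
print("clean:", model(x).argmax(1).item(), "adversarial:", model(x_adv).argmax(1).item())
```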


2021 ◽  
Vol 29 (1) ◽  
pp. 19-36
Author(s):  
Çağín Polat ◽  
Onur Karaman ◽  
Ceren Karaman ◽  
Güney Korkmaz ◽  
Mehmet Can Balcı ◽  
...  

BACKGROUND: Chest X-ray imaging has been proven to be a powerful diagnostic method for detecting and diagnosing COVID-19 cases due to its easy accessibility, lower cost, and rapid imaging time. OBJECTIVE: This study aims to improve the efficacy of screening COVID-19-infected patients using chest X-ray images with the help of a developed deep convolutional neural network (CNN) model entitled nCoV-NET. METHODS: To train and evaluate the performance of the developed model, three datasets were collected from the resources "ChestX-ray14", "COVID-19 image data collection", and "Chest X-ray collection from Indiana University," respectively. Overall, 299 COVID-19 pneumonia cases and 1,522 non-COVID-19 cases were involved in this study. To overcome the probable bias due to the unbalanced case counts in the two classes of the datasets, ResNet, DenseNet, and VGG architectures were re-trained in the fine-tuning stage of the process to distinguish the COVID-19 classes using a transfer learning method. Lastly, the optimized final nCoV-NET model was applied to the testing dataset to verify the performance of the proposed model. RESULTS: Although the performance parameters of all re-trained architectures were close to each other, the final nCoV-NET model, optimized by using the DenseNet-161 architecture in the transfer learning stage, exhibits the highest performance for classification of COVID-19 cases, with an accuracy of 97.1%. The Activation Mapping method was used to create activation maps that highlight the crucial areas of the radiograph in order to improve causality and intelligibility. CONCLUSION: This study demonstrated that the proposed CNN model called nCoV-NET can be utilized to reliably detect COVID-19 cases using chest X-ray images, accelerating triaging and saving critical time for disease control, as well as assisting radiologists in validating their initial diagnosis.
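
A hedged sketch of the class-imbalance-aware fine-tuning step follows, using a DenseNet-161 backbone with a class-weighted loss in PyTorch/torchvision; the data layout, weights, and hyperparameters are illustrative assumptions rather than the study's settings.

```python
# Hedged sketch: fine-tuning DenseNet-161 for COVID vs. non-COVID chest X-rays
# with a weighted loss to counter the 299 vs. 1,522 class imbalance noted above.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("cxr/train", transform=tfm)  # assumed folders: covid/, non_covid/
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)
model = model.to(device)

# Weight the minority COVID class more heavily (inverse-frequency style weighting);
# the order follows ImageFolder's alphabetical class order (covid, non_covid).
class_weights = torch.tensor([1522 / 299, 1.0], device=device)
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

for epoch in range(5):  # epoch count is an assumption
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()
```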

