Predicting Invasive Ductal Carcinoma Tissues in Whole Slide Images of Breast Cancer by Using Convolutional Neural Network Model and Multiple Classifiers in Google Colab

Author(s):  
Deepa B G ◽  
S. Senthil

Breast Cancer (BC) is a common type of cancer in women, caused by abnormal growth of cells in the breast. Early BC detection helps to increase the survival rate of the patient, and about 80% of BC cases are Invasive Ductal Carcinoma (IDC). In this work, a deep learning-based IDC prediction model is proposed using multiple classifiers and a CNN (Convolutional Neural Network). The developed deep learning method uses a sequential Keras model with Conv2D, MaxPooling2D, Dropout, Flatten and Dense layers. The multiple classifiers are LR (Logistic Regression), RF (Random Forest), K-NN (K-Nearest Neighbors), SVM (Support Vector Machine), Linear SVC, GNB (Gaussian Naive Bayes) and DT (Decision Tree). The CNN model was built using the scikit-learn, Keras and TensorFlow libraries, and the results are plotted with Matplotlib. At the classification stage, a helper function was defined, and the Google Colab browser-based platform was used to develop the proposed model. The performance is analysed in terms of Accuracy, Precision, Recall, F1-score and Support.
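A minimal sketch of the kind of sequential Keras CNN the abstract describes (Conv2D, MaxPooling2D, Dropout, Flatten and Dense layers); the patch size (50x50 RGB, as commonly used for IDC histopathology tiles) and layer widths are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a sequential Keras CNN for IDC patch classification.
# Input shape and layer sizes are assumptions for illustration only.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(50, 50, 3)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(1, activation="sigmoid"),  # IDC vs. non-IDC
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```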

2019 ◽  
Vol 11 (2) ◽  
pp. 43
Author(s):  
Samuel Aji Sena ◽  
Panca Mudjirahardjo ◽  
Sholeh Hadi Pramono

This research presents a breast cancer detection system using a deep learning method. Breast cancer detection in a large biopsy slide image is a hard task because it requires manual observation by a pathologist to find the malignant regions. The deep learning model used in this research is made up of multiple layers of a residual convolutional neural network; instead of using a separate type of classifier, a multilayer neural network is stacked on top as the classifier and the whole system is trained using an end-to-end approach. The system is trained on the invasive ductal carcinoma dataset from the Hospital of the University of Pennsylvania and The Cancer Institute of New Jersey. From this dataset, 80% and 20% were randomly sampled and used as training and testing data, respectively. Training a neural network on an imbalanced dataset is quite challenging, so a weighted loss function was used as the objective function to tackle this problem. We achieve 78.26% and 78.03% for the Recall and F1-score metrics, respectively, which is an improvement over the previous approach.
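One way to realize the class-weighted objective mentioned above is to scale each sample's loss by its class weight during training; the sketch below shows this with scikit-learn and Keras. The weighting scheme and placeholder labels are assumptions, not the paper's exact objective function.

```python
# Illustrative sketch of class-weighted training for an imbalanced IDC dataset.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# y_train is assumed to be a 0/1 label array (non-IDC / IDC); placeholder values here.
y_train = np.array([0, 0, 0, 0, 1, 1])
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y_train), y=y_train)
class_weight = dict(enumerate(weights))

# Passing the weights to Keras scales each sample's loss by its class weight,
# so errors on the rare (malignant) class count more in the objective.
# model.fit(X_train, y_train, epochs=10, class_weight=class_weight)
```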


2020 ◽  
Vol 34 (5) ◽  
pp. 601-606
Author(s):  
Tulasi Krishna Sajja ◽  
Hemantha Kumar Kalluri

Heart disease is a deadly disease, and a large share of people worldwide suffer from it. Many Machine Learning (ML) approaches are not sufficient to forecast the disease accurately, so there is a need for a system that predicts the disease efficiently. A deep learning approach is used to predict disease caused by blockages in the heart. This paper proposes a Convolutional Neural Network (CNN) to predict the disease at an early stage, and compares traditional approaches such as Logistic Regression, K-Nearest Neighbors (KNN), Naïve Bayes (NB), Support Vector Machine (SVM), and Neural Networks (NN) with the proposed CNN prediction model. The UCI machine learning repository dataset was used for experimentation, and the model predicts Cardiovascular Disease (CVD) with 94% accuracy.
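A hedged sketch of how the baseline comparison could be run on a UCI-style heart-disease table with scikit-learn; the file name, column names, and classifier settings are assumptions, not the authors' exact experimental setup.

```python
# Sketch: compare several classical classifiers on tabular heart-disease features.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("heart.csv")                      # assumed UCI-style table
X, y = df.drop(columns=["target"]), df["target"]   # assumed label column name
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("KNN", KNeighborsClassifier()),
                  ("NB", GaussianNB()),
                  ("SVM", SVC())]:
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```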


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 742
Author(s):  
Canh Nguyen ◽  
Vasit Sagan ◽  
Matthew Maimaitiyiming ◽  
Maitiniyazi Maimaitijiang ◽  
Sourav Bhadra ◽  
...  

Early detection of grapevine viral diseases is critical for early interventions in order to prevent the disease from spreading to the entire vineyard. Hyperspectral remote sensing can potentially detect and quantify viral diseases in a nondestructive manner. This study utilized hyperspectral imagery at the plant level to identify and classify grapevines inoculated with the newly discovered DNA virus grapevine vein-clearing virus (GVCV) at the early asymptomatic stages. An experiment was set up at a test site at South Farm Research Center, Columbia, MO, USA (38.92° N, 92.28° W), with two grapevine groups, namely healthy and GVCV-infected, while other conditions were controlled. Images of each vine were captured by a SPECIM IQ 400–1000 nm hyperspectral sensor (Oulu, Finland). Hyperspectral images were calibrated and preprocessed to retain only grapevine pixels. A statistical approach was employed to discriminate two reflectance spectra patterns between healthy and GVCV vines. Disease-centric vegetation indices (VIs) were established and explored in terms of their importance to the classification power. Pixel-wise (spectral features) classification was performed in parallel with image-wise (joint spatial–spectral features) classification within a framework involving deep learning architectures and traditional machine learning. The results showed that: (1) the discriminative wavelength regions included the 900–940 nm range in the near-infrared (NIR) region in vines 30 days after sowing (DAS) and the entire visible (VIS) region of 400–700 nm in vines 90 DAS; (2) the normalized pheophytization index (NPQI), fluorescence ratio index 1 (FRI1), plant senescence reflectance index (PSRI), anthocyanin index (AntGitelson), and water stress and canopy temperature (WSCT) measures were the most discriminative indices; (3) the support vector machine (SVM) was effective in VI-wise classification with smaller feature spaces, while the random forest (RF) classifier performed better in pixel-wise and image-wise classification with larger feature spaces; and (4) the automated 3D convolutional neural network (3D-CNN) feature extractor provided promising results over the 2D convolutional neural network (2D-CNN) in learning features from hyperspectral data cubes with a limited number of samples.
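As a rough illustration of the VI-based step, the sketch below computes two of the named indices per pixel from a hyperspectral cube. The band positions follow commonly cited literature formulas (not necessarily the authors' exact definitions), and the cube/wavelength arrays are assumed inputs.

```python
# Illustrative per-pixel vegetation indices from a hyperspectral cube
# of shape (height, width, bands) with a matching wavelength list in nm.
import numpy as np

def band(cube, wavelengths, target_nm):
    """Return the reflectance slice closest to the requested wavelength."""
    idx = int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))
    return cube[:, :, idx].astype(float)

def psri(cube, wavelengths):
    # Plant senescence reflectance index, commonly (R678 - R500) / R750.
    return (band(cube, wavelengths, 678) - band(cube, wavelengths, 500)) / \
           (band(cube, wavelengths, 750) + 1e-9)

def npqi(cube, wavelengths):
    # Normalized pheophytization index, commonly (R415 - R435) / (R415 + R435).
    r415, r435 = band(cube, wavelengths, 415), band(cube, wavelengths, 435)
    return (r415 - r435) / (r415 + r435 + 1e-9)
```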


2022 ◽  
pp. 1-12
Author(s):  
Amin Ul Haq ◽  
Jian Ping Li ◽  
Samad Wali ◽  
Sultan Ahmad ◽  
Zafar Ali ◽  
...  

Artificial intelligence (AI) based computer-aided diagnostic (CAD) systems can effectively diagnose critical diseases. AI-based detection of breast cancer (BC) from image data is more efficient and accurate than professional radiologists. However, the existing AI-based BC diagnosis methods suffer from low prediction accuracy and high computation time, and for these reasons medical professionals are not employing them in E-Healthcare to diagnose BC effectively. Effective breast cancer diagnosis requires incorporating advanced AI techniques into the diagnosis process. In this work, we propose a deep learning based diagnosis method (StackBC) to detect breast cancer at an early stage for effective treatment and recovery. In particular, we incorporate deep learning models including a Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and a Gated Recurrent Unit (GRU) for the classification of Invasive Ductal Carcinoma (IDC). Additionally, data augmentation and transfer learning techniques are incorporated for data set balancing and for effective training of the model. To further improve the predictive performance of the model, we used a stacking technique. Among the three base classifiers (CNN, LSTM, GRU), the GRU achieves the best individual predictive performance, and the GRU is selected as the meta classifier to distinguish between non-IDC and IDC breast images. A hold-out method was used, splitting the data set into 90% for training and 10% for testing. Model evaluation metrics were computed for performance evaluation. To analyze the efficacy of the model, we used a breast histology image data set. Our experimental results demonstrate that the proposed StackBC method achieves improved performance, gaining 99.02% accuracy and 100% area under the receiver operating characteristic curve (AUC-ROC) compared to state-of-the-art methods. Due to its high performance, we recommend the proposed method for early recognition of breast cancer in E-Healthcare.
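A minimal sketch of the stacking idea described above: each base model (CNN, LSTM, GRU) emits an IDC probability, and a small GRU meta-model is trained on the stacked predictions. The shapes, helper names, and layer sizes are illustrative assumptions, not the paper's exact StackBC configuration.

```python
# Sketch of stacking base-model predictions under a GRU meta classifier.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GRU, Dense

def stack_predictions(base_models, X):
    # Each base model is assumed to output one IDC probability per image.
    preds = [m.predict(X).reshape(-1, 1) for m in base_models]
    return np.concatenate(preds, axis=1)          # shape: (n_samples, n_models)

def build_meta_model(n_models=3):
    # Treat the base probabilities as a short "sequence" so a GRU can act as the
    # meta classifier, mirroring the paper's choice of GRU at the meta level.
    model = Sequential([
        GRU(8, input_shape=(n_models, 1)),
        Dense(1, activation="sigmoid"),           # non-IDC vs. IDC
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# meta_X = stack_predictions([cnn, lstm, gru], X_train)[..., np.newaxis]
# build_meta_model().fit(meta_X, y_train, epochs=10)
```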


2021 ◽  
Vol 16 ◽  
Author(s):  
Farida Alaaeldin Mostafa ◽  
Yasmine Mohamed Afify ◽  
Rasha Mohamed Ismail ◽  
Nagwa Lotfy Badr

Background: Protein sequence analysis helps in the prediction of protein functions. As the number of proteins increases, analyzing and studying the similarity between them becomes a challenge for bioinformaticians. Most of the existing protein analysis methods use the Support Vector Machine. Deep learning has received little attention in protein analysis, and little work has focused on protein disease classification. Objective: The contribution of this paper is to present a deep learning approach that classifies protein diseases based on protein descriptors. Methods: Different protein descriptors are used and decomposed into modified feature descriptors. Uniquely, we introduce a Convolutional Neural Network model to learn and classify protein diseases. The modified feature descriptors are fed to the Convolutional Neural Network model on a dataset of 1563 protein sequences classified into 3 different disease classes: AIDS, tumor suppressor, and proto-oncogene. Results: The use of the modified feature descriptors shows a significant increase in the performance of the Convolutional Neural Network model over the Support Vector Machine with different kernel functions. One modified feature descriptor improved by 19.8%, 27.9%, 17.6%, 21.5%, 17.3%, and 22% on the evaluation metrics Area Under the Curve, Matthews Correlation Coefficient, Accuracy, F1-score, Recall, and Precision, respectively. Conclusion: The results show that predictions using the proposed modified feature descriptors significantly surpass those of the Support Vector Machine model.
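A hedged sketch of feeding descriptor vectors to a small 1D CNN for the 3-class problem (AIDS, tumor suppressor, proto-oncogene); the descriptor length and layer sizes are assumptions for illustration, not the paper's architecture.

```python
# Sketch: 1D CNN over a modified protein feature descriptor, 3 output classes.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

DESCRIPTOR_LEN = 400   # assumed length of one modified feature descriptor

model = Sequential([
    Conv1D(32, kernel_size=5, activation="relu", input_shape=(DESCRIPTOR_LEN, 1)),
    MaxPooling1D(pool_size=2),
    Conv1D(64, kernel_size=5, activation="relu"),
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dense(64, activation="relu"),
    Dense(3, activation="softmax"),   # AIDS / tumor suppressor / proto-oncogene
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```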


2018 ◽  
Vol 7 (11) ◽  
pp. 418 ◽  
Author(s):  
Tian Jiang ◽  
Xiangnan Liu ◽  
Ling Wu

Accurate and timely information about rice planting areas is essential for crop yield estimation, global climate change studies and agricultural resource management. In this study, we present a novel pixel-level classification approach that uses a convolutional neural network (CNN) model to extract features of the enhanced vegetation index (EVI) time series curve for classification. The goal is to explore the practicability of deep learning techniques for rice recognition in complex landscape regions, where rice is easily confused with its surroundings, using mid-resolution remote sensing images. A transfer learning strategy is utilized to fine-tune a pre-trained CNN model and obtain the temporal features of the EVI curve. A support vector machine (SVM), a traditional machine learning approach, is also implemented in the experiment. Finally, we evaluate the accuracy of the two models. Results show that our model performs better than the SVM, with overall accuracies of 93.60% and 91.05%, respectively. Therefore, this technique is appropriate for estimating rice planting areas in southern China from time series data on the basis of a pre-trained CNN model, and more opportunities and potential can be found for crop classification with remote sensing and deep learning techniques in future studies.
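A brief sketch of the transfer-learning step as described: freeze the early layers of a pre-trained CNN and fine-tune the top layers on EVI time-series samples. The model file path, layer split, and loss are assumptions, not the authors' exact setup.

```python
# Sketch: fine-tune a pre-trained CNN on EVI time-series data (transfer learning).
from tensorflow.keras.models import load_model

base = load_model("pretrained_evi_cnn.h5")     # assumed path to a pre-trained CNN
for layer in base.layers[:-2]:                  # keep learned temporal filters fixed
    layer.trainable = False
base.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
# base.fit(evi_train, labels_train, validation_split=0.2, epochs=20)
```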


2021 ◽  
Vol 9 ◽  
Author(s):  
Ashwini K ◽  
P. M. Durai Raj Vincent ◽  
Kathiravan Srinivasan ◽  
Chuan-Yu Chang

Neonatal infants communicate with us through cries, and the cry signals have distinct patterns depending on the purpose of the cry. Preprocessing, feature extraction, and feature selection of audio signals require expert attention and considerable effort. Deep learning techniques automatically extract and select the most important features, but they require an enormous amount of data for effective classification. This work discriminates neonatal cries into pain, hunger, and sleepiness. The neonatal cry audio signals are transformed into spectrogram images using the short-time Fourier transform (STFT) technique, and a deep convolutional neural network (DCNN) takes the spectrogram images as input. The features obtained from the convolutional neural network are passed to a support vector machine (SVM) classifier, which classifies the neonatal cries. This work combines the advantages of machine learning and deep learning techniques to get the best results even with a moderate number of data samples. The experimental results show that CNN-based feature extraction with an SVM classifier provides promising results. Comparing the SVM kernels, namely radial basis function (RBF), linear and polynomial, SVM-RBF provides the highest accuracy: the kernel-based infant cry classification system achieves 88.89% accuracy.
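A hedged sketch of the pipeline described above: STFT spectrograms from cry audio, CNN features, then an SVM-RBF classifier. The spectrogram parameters and the `cnn_feature_extractor` helper are illustrative assumptions, not the authors' implementation.

```python
# Sketch: STFT spectrogram extraction for cry audio, to be fed to a CNN feature
# extractor and then an SVM-RBF classifier (pain / hunger / sleepiness).
import numpy as np
import librosa
from sklearn.svm import SVC

def cry_spectrogram(path, sr=16000, n_fft=512, hop_length=256):
    y, _ = librosa.load(path, sr=sr)
    stft = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length))
    return librosa.amplitude_to_db(stft, ref=np.max)   # dB-scaled spectrogram image

# features = np.stack([cnn_feature_extractor(cry_spectrogram(p)) for p in paths])
# clf = SVC(kernel="rbf").fit(features, labels)        # hypothetical usage
```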


2021 ◽  
Vol 8 (2) ◽  
pp. 311
Author(s):  
Mohammad Farid Naufal

Weather is an important factor considered in various decision making. Manual weather classification by humans is time consuming and inconsistent. Computer vision is a branch of science that enables computers to recognize or classify images; it can help the development of self-autonomous machines so that they are not dependent on an internet connection and can perform their own calculations in real time. There are several popular image classification algorithms, namely K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Convolutional Neural Network (CNN). KNN and SVM are Machine Learning classification algorithms, while CNN is a Deep Neural Network classification algorithm. This study aims to compare the performance of these three algorithms so that the performance gap between them is known. The test architecture uses 5-fold cross validation, and several parameters are used to configure the KNN, SVM, and CNN algorithms. From the test results, CNN has the best performance with 0.942 accuracy, 0.943 precision, 0.942 recall, and 0.942 F1 score.
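A minimal sketch of a 5-fold cross-validation comparison for the classical models (KNN and SVM); the CNN would be evaluated with an equivalent fold loop in Keras. The synthetic features stand in for flattened weather images and are purely illustrative.

```python
# Sketch: 5-fold cross-validation comparison of KNN and SVM classifiers.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder features standing in for flattened weather images and class labels.
X, y = make_classification(n_samples=200, n_features=64, n_informative=10,
                           n_classes=4, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(name, scores.mean())
```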


2021 ◽  
Vol 7 ◽  
pp. e766
Author(s):  
Ammar Amjad ◽  
Lal Khan ◽  
Hsien-Tsung Chang

Speech emotion recognition (SER) is a challenging problem because it is not clear which features are effective for classification. Emotion-related features are typically extracted from speech signals for emotional classification, and handcrafted features are mainly used for emotion identification from audio signals. However, these features are not sufficient to correctly identify the emotional state of the speaker. The advantages of a deep convolutional neural network (DCNN) are investigated in the proposed work: a pretrained framework is used to extract features from speech emotion databases, and a feature selection (FS) approach is adopted to find the most discriminative and important features for SER. Many algorithms are used for the emotion classification problem; we use random forest (RF), decision tree (DT), support vector machine (SVM), multilayer perceptron (MLP), and k-nearest neighbors (KNN) classifiers to classify seven emotions. All experiments are performed on four different publicly accessible databases. Our method obtains accuracies of 92.02%, 88.77%, 93.61%, and 77.23% for Emo-DB, SAVEE, RAVDESS, and IEMOCAP, respectively, for speaker-dependent (SD) recognition with the feature selection method. Furthermore, compared to current handcrafted feature-based SER methods, the proposed method shows the best results for speaker-independent SER. For Emo-DB, all classifiers attain an accuracy of more than 80% with or without the feature selection technique.
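A hedged sketch of the classification stage: pretrained-DCNN embeddings are reduced with a feature-selection step and passed to several classical classifiers. The selector, embedding dimensions, and placeholder data are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: feature selection over pretrained-DCNN embeddings, then classical classifiers.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_emb = rng.normal(size=(280, 512))   # placeholder for pretrained-DCNN embeddings
y = rng.integers(0, 7, size=280)      # placeholder labels for the seven emotions

for name, clf in [("RF", RandomForestClassifier()),
                  ("SVM", SVC()),
                  ("KNN", KNeighborsClassifier())]:
    pipe = make_pipeline(SelectKBest(f_classif, k=200), clf)   # FS step before classifier
    scores = cross_val_score(pipe, X_emb, y, cv=5)
    print(name, scores.mean())
```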

