Classification of Breast Cancer Patients Using Neural Network Technique

2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Putri Marhida Badarudin ◽  
Rozaida Ghazali ◽  
Abdullah Alahdal ◽  
N.A.M. Alduais ◽  
...  

This work develops an Artificial Neural Network (ANN) model for Breast Cancer (BC) classification. The design of the model considers different ANN architectures from the literature and selects the one with the best performance. The ANN model aims to classify BC cases more systematically and more quickly, supporting clinicians in detecting breast cancer among women. The ANN classification model achieves an average accuracy of 98.88% with an average run time of 0.182 seconds. Using this model, BC classification can be carried out much faster than manual diagnosis and with sufficiently high accuracy.
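As a hedged illustration of the kind of feed-forward ANN classifier the abstract describes, the minimal sketch below trains scikit-learn's MLPClassifier on the Wisconsin Diagnostic Breast Cancer data; the dataset, framework, and architecture are assumptions, not the authors' setup.

```python
# Minimal sketch: a small feed-forward ANN for breast cancer classification.
# The Wisconsin Diagnostic Breast Cancer data and the one-hidden-layer MLP
# are stand-ins; the paper does not name its dataset or framework here.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(scaler.transform(X_test))))
```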


Jurnal Varian ◽  
2020 ◽  
Vol 3 (2) ◽  
pp. 95-102
Author(s):  
I Ketut Putu Suniantara ◽  
Gede Suwardika ◽  
Siti Soraya

Supervised learning in machine learning is used to address classification problems with the Artificial Neural Network (ANN) approach. ANN has some weaknesses in its operation and training process when the amount of data is large, which can result in poor classification accuracy. The classification accuracy of artificial neural networks can be improved by using boosting. This study aims to develop a boosted Feedforward Neural Network (FFNN) classification model that can be implemented and used as a classification model with better accuracy, particularly for classifying the graduation timeliness of Universitas Terbuka students. The results show that the Feedforward Neural Network (FFNN) method achieved an accuracy rate of 72.93%. Applying boosting to the FFNN produced the best accuracy, 74.44%, at 500 iterations.
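The sketch below illustrates the boosting idea described above: an AdaBoost-style loop that resamples the training set according to example weights and combines several feed-forward networks by a weighted vote. The data are synthetic, and the boosting variant, network size, and number of rounds are assumptions rather than the paper's configuration.

```python
# Hedged sketch of boosting a feed-forward network via weighted resampling
# (AdaBoost.M1-style). Synthetic data and sklearn's MLPClassifier stand in
# for the study's data and network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
w = np.full(len(X_tr), 1 / len(X_tr))          # uniform example weights
models, alphas = [], []
for _ in range(10):                             # boosting rounds (illustrative)
    idx = rng.choice(len(X_tr), size=len(X_tr), p=w)   # weighted resample
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
    clf.fit(X_tr[idx], y_tr[idx])
    miss = clf.predict(X_tr) != y_tr
    err = np.clip(w[miss].sum(), 1e-10, 0.499)
    alpha = 0.5 * np.log((1 - err) / err)       # model weight
    w *= np.exp(alpha * np.where(miss, 1, -1))  # up-weight misclassified examples
    w /= w.sum()
    models.append(clf)
    alphas.append(alpha)

# Weighted vote of the boosted networks (labels mapped to {-1, +1}).
vote = sum(a * (2 * m.predict(X_te) - 1) for a, m in zip(alphas, models))
print("boosted accuracy:", accuracy_score(y_te, (vote > 0).astype(int)))
```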


2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within gesture classes, low resolution, and the fact that gestures are performed with the fingers. Because of these challenges, many researchers focus on this area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical flow data, and passes these features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. We calculate optical flow from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of the visual similarity of the hand gesture in the unsegmented input stream. The CTC network finds the most probable sequence of frames for a gesture class; the frame with the highest probability value is selected from the CTC network by max decoding. The entire network is trained end-to-end with the CTC loss for gesture recognition. We use the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms and achieves an accuracy of 86%.
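A hedged PyTorch sketch of this kind of pipeline follows: a small 2D CNN extracts per-frame features, an LSTM produces frame-wise class scores, and CTC loss scores the unsegmented sequence. Channel counts, layer sizes, and the number of gesture classes are illustrative assumptions, not the authors' network.

```python
# Sketch: per-frame 2D CNN features -> LSTM -> CTC loss over an unsegmented clip.
# Input shapes, widths, and the class count are illustrative only.
import torch
import torch.nn as nn

class GestureCTC(nn.Module):
    def __init__(self, in_channels=4, n_classes=19):   # e.g. RGB-D frames, 19 gesture classes
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes + 1)        # +1 for the CTC blank label

    def forward(self, clips):                           # clips: (batch, time, C, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out).log_softmax(-1)           # (batch, time, classes + 1)

model = GestureCTC()
clips = torch.randn(2, 30, 4, 64, 64)                   # two 30-frame RGB-D clips
log_probs = model(clips).permute(1, 0, 2)               # CTC expects (time, batch, classes)
targets = torch.tensor([3, 7])                          # one gesture label per clip
loss = nn.CTCLoss(blank=0)(log_probs, targets.unsqueeze(1),
                           input_lengths=torch.full((2,), 30, dtype=torch.long),
                           target_lengths=torch.full((2,), 1, dtype=torch.long))
print(float(loss))
```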


2020 ◽  
Vol 14 ◽  
Author(s):  
Lahari Tipirneni ◽  
Rizwan Patan

Abstract: Millions of deaths worldwide are caused by breast cancer every year. It has become the most common type of cancer in women. Early detection helps improve prognosis and increases the chance of survival. Automating classification using Computer-Aided Diagnosis (CAD) systems can make the diagnosis less prone to errors. Multi-class and binary classification of breast cancer is a challenging problem. Convolutional neural network architectures extract specific feature descriptors from images, which cannot represent all types of breast cancer. This leads to false positives in classification, which is undesirable in disease diagnosis. This paper presents an ensemble convolutional neural network for multi-class and binary classification of breast cancer. The feature descriptors from each network are combined to produce the final classification. In this paper, histopathological images are taken from the publicly available BreakHis dataset and classified into 8 classes. The proposed ensemble model performs better than the methods proposed in the literature. The results show that the proposed model could be a viable approach for breast cancer classification.
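The following is a minimal sketch of the ensemble idea: two small CNN branches whose feature descriptors are concatenated before an 8-class classifier. The branch architectures and sizes are assumptions, since the abstract does not specify the backbones.

```python
# Sketch: ensemble of two CNN branches; their feature descriptors are
# concatenated and fed to a single 8-class classifier. Sizes are illustrative.
import torch
import torch.nn as nn

def branch(out_dim=64):
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, out_dim),
    )

class EnsembleCNN(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.branch_a, self.branch_b = branch(), branch()
        self.classifier = nn.Linear(128, n_classes)     # concatenated descriptors

    def forward(self, x):
        feats = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.classifier(feats)

model = EnsembleCNN()
images = torch.randn(4, 3, 128, 128)                    # a batch of histopathology patches
print(model(images).shape)                              # torch.Size([4, 8])
```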


Author(s):  
Shu-Farn Tey ◽  
Chung-Feng Liu ◽  
Tsair-Wei Chien ◽  
Chin-Wei Hsu ◽  
Kun-Chen Chan ◽  
...  

Unplanned patient readmission (UPRA) is frequent and costly in healthcare settings. No indicators during hospitalization have been suggested to clinicians as useful for identifying patients at high risk of UPRA. This study aimed to create a prediction model for the early detection of 14-day UPRA in patients with pneumonia. We downloaded the data of patients with pneumonia as the primary disease (i.e., ICD-10: J12*-J18*) at three hospitals in Taiwan from 2016 to 2018. A total of 21,892 cases (1208 (6%) with UPRA) were collected. Two models, namely an artificial neural network (ANN) and a convolutional neural network (CNN), were compared using the training (n = 15,324; ≅70%) and test (n = 6568; ≅30%) sets to verify model accuracy. An app was developed for the prediction and classification of UPRA. We observed that (i) the 17 feature variables extracted in this study yielded a high area under the receiver operating characteristic curve (AUC) of 0.75 using the ANN model, (ii) the ANN exhibited a better AUC (0.73) than the CNN (0.50), and (iii) a ready and available app for predicting UPRA was developed. The app could help clinicians predict UPRA in patients with pneumonia at an early stage and enable them to formulate preparedness plans near or after patient discharge.
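A hedged sketch of the tabular ANN setup is given below: an MLP trained on 17 features of a synthetic, imbalanced stand-in cohort and evaluated by ROC AUC. The real features and patient data are not reproduced here.

```python
# Sketch: MLP on 17 tabular features, scored by ROC AUC on a held-out set.
# The data are synthetic stand-ins with roughly the cohort's size and class balance.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=21892, n_features=17,
                           weights=[0.94, 0.06], random_state=0)   # ~6% positives
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
ann.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1]))
```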


Biology ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 517
Author(s):  
Shoko Kure ◽  
Shinya Iida ◽  
Marina Yamada ◽  
Hiroyuki Takei ◽  
Naoyuki Yamashita ◽  
...  

Background: Breast cancer is a leading cause of cancer death worldwide. Several studies have demonstrated that dogs can sniff out and detect cancer in breath or urine samples of patients. This study aims to assess whether urine samples can be used for breast cancer screening based on their fingerprints of volatile organic compounds, using a single trained sniffer dog. This is a preliminary study toward developing an “electronic nose” for cancer screening. Methods: A nine-year-old female Labrador Retriever was trained to identify cancer from urine samples of breast cancer patients. Urine samples from patients histologically diagnosed with primary breast cancer, patients with non-breast malignant diseases, and healthy volunteers were obtained, and a double-blind test was performed. A total of 40 patients with breast cancer, 142 patients with non-breast malignant diseases, and 18 healthy volunteers were enrolled, and their urine samples were collected. Results: In all 40 of 40 runs of the double-blind test, the trained dog correctly identified the urine samples of breast cancer patients. The sensitivity and specificity of this dog-sniffing breast cancer detection method were both 100%. Conclusions: The trained dog in this study accurately detected breast cancer from urine samples of breast cancer patients. These results indicate the feasibility of detecting breast cancer from urine samples by dog sniffing. Although methodological standardization remains to be discussed, the current results warrant further study toward a new breast cancer screening method based on volatile organic compounds in urine samples.


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Abolghasem Daeichian ◽  
Rana Shahramfar ◽  
Elham Heidari

Abstract: Lime is a significant material in many industrial processes, including steelmaking by blast furnace. Lime production through rotary kilns is a standard method in industry, yet it suffers from equipment depreciation, high energy consumption, and environmental pollution. A model of the lime production process can not only increase our knowledge and awareness but also help reduce these disadvantages. This paper presents a black-box model, based on an Artificial Neural Network (ANN), of the lime production process, considering pre-heater, rotary kiln, and cooler parameters. To this end, actual data were collected from the Zobahan Isfahan Steel Company, Iran, consisting of 746 data points obtained over one year. The proposed model considers 23 input variables and predicts the amount of produced lime as the output variable. The ANN parameters, such as the number of hidden layers, the number of neurons in each layer, the activation functions, and the training algorithm, are optimized. Then, the sensitivity of the optimal model to the input variables is investigated. The top three input variables are selected on the basis of a one-group sensitivity analysis, and their interactions are studied. Finally, an ANN model is developed considering only the top three most effective input variables. The mean square errors of the proposed models with 23 and 3 inputs are 0.000693 and 0.004061, respectively, which shows the high prediction capability of both models.
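The sketch below illustrates a black-box ANN regressor of this kind, mapping 23 process variables to a single output and reporting test MSE. The data are synthetic stand-ins, and the hidden-layer sizes are assumptions, whereas the paper optimizes them.

```python
# Sketch: black-box ANN regression from 23 process inputs to produced lime.
# Synthetic data stand in for the plant measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(746, 23))                            # 746 samples, 23 process inputs
y = X[:, :3] @ np.array([0.5, -0.3, 0.2]) + 0.05 * rng.normal(size=746)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)
print("test MSE:", mean_squared_error(y_te, model.predict(scaler.transform(X_te))))
```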


2021 ◽  
Vol 11 (9) ◽  
pp. 4292
Author(s):  
Mónica Y. Moreno-Revelo ◽  
Lorena Guachi-Guachi ◽  
Juan Bernardo Gómez-Mendoza ◽  
Javier Revelo-Fuelagán ◽  
Diego H. Peluffo-Ordóñez

Automatic crop identification and monitoring is a key element in enhancing food production processes as well as diminishing the related environmental impact. Although several efficient deep learning techniques have emerged in the field of multispectral imagery analysis, the crop classification problem still needs more accurate solutions. This work introduces a competitive methodology for crop classification from multispectral satellite imagery, mainly using an enhanced 2D convolutional neural network (2D-CNN) designed with a smaller-scale architecture, as well as a novel post-processing step. The proposed methodology consists of four steps: image stacking, patch extraction, classification model design (based on a 2D-CNN architecture), and post-processing. First, the images are stacked to increase the number of features. Second, the input images are split into patches that are fed into the 2D-CNN model. Then, the 2D-CNN model is constructed within a small-scale framework and trained to recognize 10 different types of crops. Finally, a post-processing step is performed to reduce the classification errors caused by lower-spatial-resolution images. Experiments were carried out on the Campo Verde database, a set of satellite images captured by the Landsat and Sentinel satellites over the municipality of Campo Verde, Brazil. Compared with the maximum accuracy values reached by remarkable works reported in the literature (an overall accuracy of about 81%, an F1 score of 75.89%, and an average accuracy of 73.35%), the proposed methodology achieves a competitive overall accuracy of 81.20%, an F1 score of 75.89%, and an average accuracy of 88.72% when classifying 10 different crops, while ensuring an adequate trade-off between the number of multiply-accumulate operations (MACs) and accuracy. Furthermore, given its ability to effectively classify patches from two image sequences, this methodology may be appealing for other real-world applications, such as the classification of urban materials.
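As a hedged illustration of the patch-classification step, the sketch below feeds stacked multispectral patches to a small 2D-CNN with 10 crop classes; the band count, patch size, and layer widths are assumptions, not the paper's design.

```python
# Sketch: stacked multispectral bands split into patches, classified into 10 crops.
# Band count, patch size, and widths are illustrative only.
import torch
import torch.nn as nn

class CropCNN(nn.Module):
    def __init__(self, bands=14, n_classes=10):           # e.g. stacked Landsat + Sentinel bands
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, patches):                            # patches: (batch, bands, H, W)
        return self.net(patches)

model = CropCNN()
patches = torch.randn(8, 14, 16, 16)                       # eight 16x16 multispectral patches
print(model(patches).shape)                                # torch.Size([8, 10])
```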

