Breast Cancer Histopathological Image Classification using Deep Convolutional Neural Network

2020 ◽  
Author(s):  
Vishal Mellahalli Siddegowda

Deep learning has produced a powerful class of models with applications in image classification, video recognition, object recognition, natural language processing, and speech recognition. The deep convolutional neural network (DCNN) is one such model used for image classification: it extracts features from images and uses these extracted features to classify them (2D or 3D images). In this paper, a DCNN is used to classify mammogram images obtained from the medical imaging process in order to detect benign and malignant cells. The outcome of the study is to illustrate how computing techniques can be incorporated into medical diagnostics, helping medical professionals take advantage of computer-aided diagnosis, reducing the time pathologists spend inspecting stained tissues and, in turn, increasing survival rates.
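A minimal Keras sketch of a binary benign/malignant DCNN classifier of the kind described above; the layer configuration, input size, and training settings are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a binary (benign/malignant) DCNN classifier in Keras.
# Layer sizes and the input shape are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dcnn(input_shape=(224, 224, 1)):
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # benign vs. malignant
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_dcnn()
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)
```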

2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
ZhiFei Lai ◽  
HuiFang Deng

Medical image classification is a key technique of Computer-Aided Diagnosis (CAD) systems. Traditional methods rely mainly on shape, color, and/or texture features as well as their combinations, most of which are problem-specific and have been shown to be complementary in medical images, which leads to a system that lacks the ability to represent high-level problem-domain concepts and has poor model generalization ability. Recent deep learning methods provide an effective way to construct an end-to-end model that can compute final classification labels from the raw pixels of medical images. However, due to the high resolution of medical images and the small dataset sizes, deep learning models suffer from high computational costs and limitations in the model layers and channels. To solve these problems, in this paper we propose a deep learning model that integrates a Coding Network with a Multilayer Perceptron (CNMP), which combines high-level features extracted by a deep convolutional neural network with a set of selected traditional features. The construction of the proposed model includes the following steps. First, we train a deep convolutional neural network as a coding network in a supervised manner, so that it can code the raw pixels of medical images into feature vectors that represent high-level concepts for classification. Second, we extract a set of selected traditional features based on background knowledge of medical images. Finally, we design an efficient model based on neural networks to fuse the different feature groups obtained in the first and second steps. We evaluate the proposed approach on two benchmark medical image datasets: HIS2828 and ISIC2017. We achieve overall classification accuracies of 90.1% and 90.2%, respectively, which are higher than those of current successful methods.
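A minimal sketch of the fusion step described above: CNN-derived feature vectors and selected traditional features are concatenated and classified by a small neural network. The feature dimensions and layer sizes are assumptions, not the CNMP configuration reported by the authors.

```python
# Minimal sketch: fuse coding-network features with handcrafted features
# and classify with a small MLP head. All dimensions are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model, Input

def build_fusion_model(cnn_feat_dim=256, handcrafted_dim=32, num_classes=3):
    cnn_feats = Input(shape=(cnn_feat_dim,), name="coding_network_features")
    hand_feats = Input(shape=(handcrafted_dim,), name="traditional_features")
    x = layers.Concatenate()([cnn_feats, hand_feats])   # feature fusion
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = Model([cnn_feats, hand_feats], out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```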


Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 652 ◽  
Author(s):  
Carlo Augusto Mallio ◽  
Andrea Napolitano ◽  
Gennaro Castiello ◽  
Francesco Maria Giordano ◽  
Pasquale D'Alessio ◽  
...  

Background: Coronavirus disease 2019 (COVID-19) pneumonia and immune checkpoint inhibitor (ICI) therapy-related pneumonitis share common features. The aim of this study was to determine on chest computed tomography (CT) images whether a deep convolutional neural network algorithm is able to solve the challenge of differential diagnosis between COVID-19 pneumonia and ICI therapy-related pneumonitis. Methods: We enrolled three groups: a pneumonia-free group (n = 30), a COVID-19 group (n = 34), and a group of patients with ICI therapy-related pneumonitis (n = 21). Computed tomography images were analyzed with an artificial intelligence (AI) algorithm based on a deep convolutional neural network structure. Statistical analysis included the Mann–Whitney U test (significance threshold at p < 0.05) and the receiver operating characteristic curve (ROC curve). Results: The algorithm showed low specificity in distinguishing COVID-19 from ICI therapy-related pneumonitis (sensitivity 97.1%, specificity 14.3%, area under the curve (AUC) = 0.62). ICI therapy-related pneumonitis was identified by the AI when compared to pneumonia-free controls (sensitivity = 85.7%, specificity 100%, AUC = 0.97). Conclusions: The deep learning algorithm is not able to distinguish between COVID-19 pneumonia and ICI therapy-related pneumonitis. Awareness must be increased among clinicians about imaging similarities between COVID-19 and ICI therapy-related pneumonitis. ICI therapy-related pneumonitis can be applied as a challenge population for cross-validation to test the robustness of AI models used to analyze interstitial pneumonias of variable etiology.
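A minimal sketch of the evaluation pipeline named in the Methods (Mann–Whitney U test and ROC analysis) using SciPy and scikit-learn; the scores and labels below are placeholders, not the study's data.

```python
# Minimal sketch of the reported metrics (sensitivity/specificity via the ROC
# curve, AUC, Mann-Whitney U test) on hypothetical per-patient AI scores.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])    # 1 = COVID-19, 0 = ICI pneumonitis
ai_score = np.array([0.9, 0.8, 0.7, 0.65, 0.6, 0.4, 0.35, 0.2])  # algorithm output

auc = roc_auc_score(y_true, ai_score)
u_stat, p_value = mannwhitneyu(ai_score[y_true == 1], ai_score[y_true == 0])

fpr, tpr, thresholds = roc_curve(y_true, ai_score)
# sensitivity = tpr, specificity = 1 - fpr at the chosen operating threshold
print(f"AUC = {auc:.2f}, Mann-Whitney p = {p_value:.3f}")
```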


2021 ◽  
Vol 13 (3) ◽  
pp. 335
Author(s):  
Yuhao Qing ◽  
Wenyi Liu

In recent years, image classification on hyperspectral imagery utilizing deep learning algorithms has attained good results. Spurred by that finding, and to further improve deep learning classification accuracy, we propose a multi-scale residual convolutional neural network model fused with an efficient channel attention network (MRA-NET) that is appropriate for hyperspectral image classification. The suggested technique comprises a multi-staged architecture, in which the spectral information of the hyperspectral image is first reduced into a two-dimensional tensor using a principal component analysis (PCA) scheme. The constructed low-dimensional image is then input to our proposed MRA-NET deep network, which exploits the advantages of its core components, i.e., the multi-scale residual structure and attention mechanisms. We evaluate the performance of the proposed MRA-NET on three publicly available hyperspectral datasets and demonstrate that, overall, the classification accuracy of our method is 99.82%, 99.81%, and 99.37%, respectively, which is higher than the corresponding accuracy of current networks such as the 3D convolutional neural network (CNN), the three-dimensional residual convolution structure (RES-3D-CNN), and the space–spectrum joint deep network (SSRN).
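A minimal sketch of the first stage described above, reducing the spectral dimension of a hyperspectral cube with PCA before feeding it to the network; the cube size, the number of retained components, and the output layout are assumptions.

```python
# Minimal sketch: PCA reduction of the spectral bands of a hyperspectral cube.
import numpy as np
from sklearn.decomposition import PCA

def reduce_spectral_dim(cube, n_components=30):
    """cube: (height, width, bands) hyperspectral image."""
    h, w, bands = cube.shape
    flat = cube.reshape(-1, bands)               # one spectrum per pixel
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)   # low-dimensional image

# cube = np.random.rand(145, 145, 200)           # hypothetical cube size
# low_dim = reduce_spectral_dim(cube)
```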


2021 ◽  
Author(s):  
Wenjie Cao ◽  
Cheng Zhang ◽  
Zhenzhen Xiong ◽  
Ting Wang ◽  
Junchao Chen ◽  
...  

2022 ◽  
Vol 10 (1) ◽  
pp. 0-0

A brain tumor is a severe cancer caused by uncontrollable and abnormal division of cells. Timely detection and treatment planning lead to increased life expectancy for patients. Automated detection and classification of brain tumors is a challenging process that otherwise depends on the clinician's knowledge and experience. For this reason, one of the most practical and important techniques is deep learning. Recent progress in deep learning has helped clinicians use medical imaging for the diagnosis of brain tumors. In this paper, we present a comparison of deep convolutional neural network models for automatic binary classification of a query MRI image dataset, with the goal of providing precise tools to health professionals, based on fine-tuned recent versions of DenseNet, Xception, NASNet-A, and VGGNet. The experiments were conducted using an open MRI dataset of 3,762 images. Other performance measures used in the study are the area under the curve, precision, recall, and specificity.
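A minimal sketch of fine-tuning one of the compared backbones (DenseNet121 is used here as a stand-in) for binary tumor / no-tumor MRI classification via transfer learning in Keras; the input size, classification head, and metrics are illustrative assumptions, not the configurations evaluated in the paper.

```python
# Minimal sketch: transfer learning with a pretrained backbone for binary MRI
# classification; freeze the pretrained features and train a small head.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet121

base = DenseNet121(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                           # freeze pretrained features first

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(128, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)   # tumor vs. no tumor

model = Model(base.input, out)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(),
                       tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
```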


2021 ◽  
Author(s):  
Naveen Kumari ◽  
Rekha Bhatia

Facial emotion recognition extracts human emotions from images and videos. As such, it requires an algorithm to understand and model the relationships between faces and facial expressions, and to recognize human emotions. Recently, deep learning models have been extensively utilized to enhance the facial emotion recognition rate. However, deep learning models suffer from overfitting. Moreover, they perform poorly on noisy images with poor visibility. Therefore, in this paper, a novel deep learning based facial emotion recognition tool is proposed. Initially, a joint trilateral filter is applied to the obtained dataset to remove noise. Thereafter, contrast-limited adaptive histogram equalization (CLAHE) is applied to the filtered images to improve their visibility. Finally, a deep convolutional neural network is trained; the Nadam optimizer is utilized to optimize its cost function. Experiments are conducted using a benchmark dataset and competitive human emotion recognition models. Comparative analysis demonstrates that the proposed facial emotion recognition model performs considerably better than the competitive models.
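A minimal sketch of two of the stated components, CLAHE contrast enhancement (via OpenCV) and a CNN compiled with the Nadam optimizer; the joint trilateral filter is omitted, and the network architecture, input size, and emotion classes are assumptions rather than the configuration used in the paper.

```python
# Minimal sketch: CLAHE preprocessing plus a small CNN trained with Nadam.
import cv2
import tensorflow as tf
from tensorflow.keras import layers, models

def enhance(gray_face):
    """Apply contrast-limited adaptive histogram equalization to a grayscale face."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_face)

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(7, activation="softmax"),   # e.g. seven basic emotion classes
])
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```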

