A multi-label classification model for full slice brain computerised tomography image

2020 ◽  
Vol 21 (S6) ◽  
Author(s):  
Jianqiang Li ◽  
Guanghui Fu ◽  
Yueda Chen ◽  
Pengzhi Li ◽  
Bo Liu ◽  
...  

Abstract Background Screening of brain computerised tomography (CT) images is a primary method currently used for the initial detection of patients with brain trauma or other conditions. In recent years, deep learning techniques have shown remarkable advantages in clinical practice, and researchers have attempted to use them to detect brain diseases from CT images. Commonly used methods select slices with visible lesions from full-slice brain CT scans, which must be labelled by doctors. This is an inaccurate approach because doctors diagnose brain disease from a full sequence of CT slices, and one patient may have multiple concurrent conditions in practice; slice-level methods cannot account for the dependencies between slices or the causal relationships among various brain diseases. Moreover, labelling images slice by slice is time-consuming and expensive. Detecting multiple diseases from full-slice brain CT scans is, therefore, an important research subject with practical implications. Results In this paper, we propose a model called the slice dependencies learning model (SDLM). It learns image features from a series of variable-length brain CT images and the dependencies between slices in a scan to predict abnormalities, so only the diseases reflected in the full-slice brain scan need to be labelled. We evaluate the proposed model on the CQ500 dataset, which contains 1194 full CT scans from a total of 491 subjects. Each subject's data contains scans at one to eight different slice thicknesses and various diseases, captured in 30 to 396 slices per scan. The evaluation results show a precision of 67.57%, a recall of 61.04%, an F1 score of 0.6412, and an area under the receiver operating characteristic curve (AUC) of 0.8934. Conclusion The proposed model is a new architecture that uses a full-slice brain CT scan for multi-label classification, unlike traditional methods which classify brain images only at the slice level. It has great potential for application to multi-label detection problems, especially for brain CT images.
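As an illustration of the slice-dependency idea (not the authors' SDLM itself), a per-slice CNN encoder followed by a bidirectional GRU and a sigmoid multi-label head could be sketched as below; the layer sizes, slice count, and number of labels are placeholders.

```python
import torch
import torch.nn as nn

class SliceSequenceClassifier(nn.Module):
    """Encode each CT slice with a small CNN, model inter-slice dependencies with a
    bidirectional GRU, and emit one sigmoid score per disease label for the whole scan."""
    def __init__(self, num_labels, feat_dim=128, hidden=64):
        super().__init__()
        self.slice_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_labels)

    def forward(self, slices):                      # slices: (num_slices, 1, H, W), one scan
        feats = self.slice_encoder(slices)          # (num_slices, feat_dim)
        _, h = self.rnn(feats.unsqueeze(0))         # h: (2, 1, hidden), last state per direction
        scan_repr = torch.cat([h[0], h[1]], dim=-1) # (1, 2 * hidden)
        return self.head(scan_repr)                 # multi-label logits, train with BCEWithLogitsLoss

# Example with a hypothetical 40-slice scan and 9 disease labels:
model = SliceSequenceClassifier(num_labels=9)
probs = torch.sigmoid(model(torch.randn(40, 1, 256, 256)))
```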

2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Awwal Muhammad Dawud ◽  
Kamil Yurtkan ◽  
Huseyin Oztoprak

In this paper, we address the problem of identifying brain haemorrhage, which is considered a tedious task for radiologists, especially in the early stages of the haemorrhage. The problem is solved using a deep learning approach in which a convolutional neural network (CNN), the well-known AlexNet, and a modified version of AlexNet with a support vector machine classifier (AlexNet-SVM) are trained to classify brain computed tomography (CT) images into haemorrhage or non-haemorrhage images. The aim of employing the deep learning models is to address a primary question in medical image analysis and classification: can sufficient fine-tuning of a pretrained model (transfer learning) eliminate the need to build a CNN from scratch? This study also investigates the advantages of using an SVM as the classifier instead of a three-layer neural network. We apply the same classification task to three deep networks: one created from scratch, a pretrained model fine-tuned for the brain CT haemorrhage classification task, and our modified AlexNet model that uses the SVM classifier. The three networks were trained using the same set of brain CT images. The experiments show that the transfer of knowledge from natural images to medical image classification is possible. In addition, our results show that the proposed modified pretrained model, AlexNet-SVM, can outperform both a convolutional neural network created from scratch and the original AlexNet in identifying brain haemorrhage.
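A minimal sketch of the pretrained-CNN-plus-SVM idea, assuming torchvision >= 0.13 and scikit-learn; the data tensors and preprocessing are placeholders rather than the authors' pipeline.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Load AlexNet pretrained on ImageNet and drop the final 1000-way layer,
# keeping the 4096-dimensional penultimate features (transfer learning).
alexnet = models.alexnet(weights="IMAGENET1K_V1")
alexnet.classifier = alexnet.classifier[:-1]
alexnet.eval()

@torch.no_grad()
def extract_features(batch):
    """batch: (N, 3, 224, 224) float tensor of preprocessed CT slices."""
    return alexnet(batch).numpy()

# Hypothetical usage: train_imgs/test_imgs are tensors, train_labels a 0/1 array
# (haemorrhage vs. non-haemorrhage).
# svm = SVC(kernel="rbf").fit(extract_features(train_imgs), train_labels)
# preds = svm.predict(extract_features(test_imgs))
```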


2020 ◽  
Vol 31 (6) ◽  
pp. 681-689
Author(s):  
Jalal Mirakhorli ◽  
Hamidreza Amindavar ◽  
Mojgan Mirakhorli

Abstract Functional magnetic resonance imaging, a neuroimaging technique used in studies of brain disorders and dysfunction, has been improved in recent years by mapping the topology of brain connections, known as connectopic mapping. Because healthy and unhealthy brain regions and functions differ only slightly, studying the complex topology of the functional and structural networks in the human brain is complicated, especially given the growing number of evaluation measures. One application of deep learning on irregular graphs is to analyse human cognitive functions related to gene expression and the associated distributed spatial patterns. Since a variety of brain solutions can be held dynamically in the neuronal networks of the brain with different activity patterns and functional connectivity, both node-centric and graph-centric tasks are involved in this application. In this study, we used an individual generative model and high-order graph analysis to recognise regions of interest in the brain with abnormal connections during certain tasks and at rest, or to decompose irregular observations. Accordingly, we propose a high-order framework of a variational graph autoencoder with a Gaussian distribution to analyse functional data in brain imaging studies, in which a generative adversarial network is employed to optimise the latent space while learning strong non-rigid graphs from large-scale data. Furthermore, we distinguish the possible modes of correlation in abnormal brain connections. Our goal was to find the degree of correlation between the affected regions and their simultaneous occurrence over time. This can be used to diagnose brain diseases or to show the ability of the nervous system to modify brain topology and exhibit plasticity in response to input stimuli. In this study, we focused in particular on Alzheimer's disease.
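For orientation, a bare-bones variational graph autoencoder over a dense brain-connectivity graph looks like the sketch below; the dense GCN layers, inner-product decoder, and dimensions are simplifications of the high-order, GAN-optimised framework described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCNLayer(nn.Module):
    """Graph convolution on a dense, normalised adjacency matrix (regions x regions)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
    def forward(self, x, adj):
        return self.lin(adj @ x)

class VGAEEncoder(nn.Module):
    """Map each brain-region node to a Gaussian latent via the reparameterisation trick."""
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.gc1 = DenseGCNLayer(in_dim, hid_dim)
        self.gc_mu = DenseGCNLayer(hid_dim, lat_dim)
        self.gc_logvar = DenseGCNLayer(hid_dim, lat_dim)
    def forward(self, x, adj):
        h = F.relu(self.gc1(x, adj))
        mu, logvar = self.gc_mu(h, adj), self.gc_logvar(h, adj)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

def decode(z):
    """Inner-product decoder: reconstruct the functional-connectivity adjacency."""
    return torch.sigmoid(z @ z.t())

# Hypothetical usage: 90 brain regions, 200 time-series features per region.
x, adj = torch.randn(90, 200), torch.rand(90, 90)
z, mu, logvar = VGAEEncoder(200, 64, 16)(x, adj)
adj_hat = decode(z)
```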


Lung cancer is a serious illness that increases mortality rates globally. Identifying lung cancer at an early stage is the most likely way to improve patients' survival rates. Generally, a computed tomography (CT) scan is used to locate the tumour and determine the stage of the cancer. Existing works have presented diagnosis and classification models for CT lung images. This paper designs an effective diagnosis and classification model for CT lung images. The presented model involves several stages, namely pre-processing, segmentation, feature extraction, and classification. The initial stage uses an adaptive histogram equalization (AHE) model for image enhancement and a bilateral filtering (BF) model for noise removal. The pre-processed images are fed into the second stage, a watershed segmentation model, to segment the images effectively. Then, a deep learning based Xception model is applied for prominent feature extraction, and classification is performed with a logistic regression (LR) classifier. A comprehensive simulation is carried out on a benchmark dataset to ensure effective classification of the lung CT images. The outcomes indicate the outstanding performance of the presented model on the applied test images.
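A rough sketch of such a pipeline using OpenCV, Keras, and scikit-learn; CLAHE stands in for the AHE enhancement, the watershed segmentation stage is omitted, and the arrays and parameters are placeholders, not the authors' settings.

```python
import cv2
import numpy as np
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input
from sklearn.linear_model import LogisticRegression

# Pretrained Xception as a 2048-d feature extractor (global average pooling).
xception = Xception(weights="imagenet", include_top=False, pooling="avg")

def preprocess(gray_slice):
    """gray_slice: uint8 grayscale CT slice -> enhanced, denoised, Xception-ready RGB array."""
    enhanced = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray_slice)
    denoised = cv2.bilateralFilter(enhanced, 9, 75, 75)
    rgb = cv2.cvtColor(cv2.resize(denoised, (299, 299)), cv2.COLOR_GRAY2RGB)
    return preprocess_input(rgb.astype("float32"))

def extract_features(slices):
    batch = np.stack([preprocess(s) for s in slices])
    return xception.predict(batch, verbose=0)

# Hypothetical usage with lists of slices and 0/1 malignancy labels:
# clf = LogisticRegression(max_iter=1000).fit(extract_features(train_slices), train_labels)
# preds = clf.predict(extract_features(test_slices))
```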


2020 ◽  
Author(s):  
Varan Singh Rohila ◽  
Nitin Gupta ◽  
Amit Kaul ◽  
Deepak Sharma

The ongoing pandemic of COVID-19 has shown the limitations of our current medical institutions. There is a need for research in the field of automated diagnosis to speed up the process while maintaining accuracy and reducing computational requirements. In this work, an automatic diagnosis of COVID-19 infection from CT scans of patients using a deep learning technique is proposed. The proposed model, ReCOV-101, uses full chest CT scans to detect varying degrees of COVID-19 infection and requires less computational power. To improve detection accuracy, the CT scans were preprocessed by employing segmentation and interpolation. The proposed scheme is based on the residual network, taking advantage of skip connections to allow the model to go deeper. Moreover, the model was trained on a single enterprise-level GPU so that it can easily be deployed at the edge of the network, reducing the communication with the cloud often required for processing the data. The objective of this work is to demonstrate a less hardware-intensive approach to COVID-19 detection with excellent performance that can be combined with medical equipment and help ease the examination procedure. With the proposed model, an accuracy of 94.9% was achieved.
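The skip-connection idea the abstract relies on can be illustrated with a minimal residual block, plus a torchvision ResNet-101 backbone repurposed for a hypothetical number of infection-severity classes; this is a sketch, not the authors' exact ReCOV-101 architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ResidualBlock(nn.Module):
    """The skip connection: the block learns a residual that is added back to its input,
    which keeps gradients flowing through very deep stacks."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)          # identity shortcut

# Fine-tuning a torchvision ResNet-101 backbone for a hypothetical set of
# infection-severity classes (torchvision >= 0.13 assumed).
num_severity_classes = 3
backbone = models.resnet101(weights="IMAGENET1K_V1")
backbone.fc = nn.Linear(backbone.fc.in_features, num_severity_classes)
logits = backbone(torch.randn(1, 3, 224, 224))
```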


2021 ◽  
Vol 11 (10) ◽  
pp. 2618-2625
Author(s):  
R. T. Subhalakshmi ◽  
S. Appavu Alias Balamurugan ◽  
S. Sasikala

In recent times, the COVID-19 epidemic has grown at an extreme pace while only an inadequate number of rapid testing kits are available. Consequently, it is essential to develop automated techniques for COVID-19 detection that recognise the presence of the disease from radiological images. The most common symptoms of COVID-19 are a sore throat, fever, and dry cough, and symptoms can progress to a severe form of pneumonia with serious complications. As medical imaging is not currently recommended in Canada for primary COVID-19 diagnosis, computer-aided diagnosis systems might aid in the early detection of COVID-19 abnormalities, help monitor disease progression, and potentially reduce mortality rates. In this approach, a deep learning based design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model operates in three main stages: pre-processing, feature extraction, and classification. The design incorporates the fusion of deep features using GoogLeNet models. Finally, a multi-scale recurrent neural network (RNN) based classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the proposed model is performed using the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcomes confirmed superior performance with maximum sensitivity, specificity, and accuracy.
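As a rough sketch of the pipeline's shape (not the fused-GoogLeNet, multi-scale design of the paper), a single pretrained GoogLeNet backbone can feed per-slice features to a plain GRU classifier; the slice counts, hidden sizes, and class labels below are placeholders.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Pretrained GoogLeNet as a 1024-d per-slice feature extractor (torchvision >= 0.13 assumed).
googlenet = models.googlenet(weights="IMAGENET1K_V1")
googlenet.fc = nn.Identity()
googlenet.eval()

class RNNClassifier(nn.Module):
    """Classify a sequence of per-slice deep features into COVID / non-COVID."""
    def __init__(self, feat_dim=1024, hidden=128, num_classes=2):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)
    def forward(self, feats):             # feats: (batch, num_slices, feat_dim)
        _, h = self.rnn(feats)
        return self.head(h[-1])

@torch.no_grad()
def slice_features(slices):               # slices: (num_slices, 3, 224, 224)
    return googlenet(slices)

# Hypothetical usage for one scan of 20 slices:
clf = RNNClassifier()
logits = clf(slice_features(torch.randn(20, 3, 224, 224)).unsqueeze(0))
```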


2020 ◽  
Vol 25 (Supplement_2) ◽  
pp. e25-e25
Author(s):  
Sarah MacEachern ◽  
Deepthi Rajashekar ◽  
Pauline Mouches ◽  
Nathan Rowe ◽  
Emily Mckenna ◽  
...  

Abstract Introduction/Background Autism spectrum disorder (ASD) is a neurodevelopmental disorder resulting in challenges with social communication, sensory differences, and repetitive and restricted patterns of behavior. ASD affects approximately 1 in 66 children in North America, with boys being affected four times more frequently than girls. Currently, diagnosis is made primarily based on clinical features and no robust biomarker for ASD diagnosis has been identified. Potential image-based biomarkers to aid ASD diagnosis may include structural properties of deep gray matter regions in the brain. Objectives The primary objective of this work was to investigate if children with ASD show micro- and macrostructural alterations in deep gray matter structures compared to neurotypical children, and if these biomarkers can be used for an automatic ASD classification using deep learning. Design/Methods Quantitative apparent diffusion coefficient (ADC) magnetic resonance imaging data was obtained from 23 boys with ASD ages 0.8 – 19.6 years (mean 7.6 years) and 39 neurotypical boys ages 0.3 – 17.75 years (mean 7.6 years). An atlas-based method was used for volumetric analysis and extraction of median ADC values for each subject within the cerebral cortex, hippocampus, thalamus, caudate, putamen, globus pallidus, amygdala, and nucleus accumbens. The extracted quantitative regional volumetric and median ADC values were then used for the development and evaluation of an automatic classification method using an artificial neural network. Results The classification model was evaluated using 10-fold cross validation resulting in an overall accuracy of 76%, which is considerably better than chance level (62%). Specifically, 33 neurotypical boys were correctly classified, whereas 6 neurotypical boys were incorrectly classified. For the ASD group, 14 boys were correctly classified, while 9 boys were incorrectly classified. This translates to a precision of 70% for the children with ASD and 79% for neurotypical boys. Conclusion To the best of our knowledge, this is the first method to classify children with ASD using micro- and macrostructural properties of deep gray matter structures in the brain. The first results of the proposed deep learning method to identify children with ASD using image-based biomarkers are promising and could serve as the platform to create a more accurate and robust deep learning model for clinical application.
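For a sense of scale, a classifier of this kind can be set up in a few lines with scikit-learn; the feature matrix below is a random placeholder standing in for the regional volumes and median ADC values, and the hidden-layer sizes are assumptions rather than the authors' network.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# X: one row per boy with regional volumes and median ADC values
# (e.g. 8 regions x 2 measures = 16 features); y: 1 = ASD, 0 = neurotypical.
rng = np.random.default_rng(0)
X = rng.normal(size=(62, 16))             # placeholder for 23 ASD + 39 neurotypical subjects
y = np.array([1] * 23 + [0] * 39)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
scores = cross_val_score(clf, X, y,
                         cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
print(f"10-fold accuracy: {scores.mean():.2f}")
```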


2018 ◽  
Vol 232 ◽  
pp. 02051 ◽  
Author(s):  
Yan Wang ◽  
Jin Liu ◽  
Bing Yu

Hepatopathy is a class of diseases with a high incidence, so progress in liver disease research is highly valued. Medical images, as an important part of medical diagnosis, provide an important basis for doctors to make correct diagnoses. Pathological images play a significant role in clinical application because they allow doctors to clearly observe the degree of the lesions and make accurate judgements. As an important component of computer vision, deep learning has received increasing attention from researchers, and the application of computer-aided technology to medical image detection has become an important application of computer vision. In view of this situation, an automatic diagnosis method for liver pathological images based on deep learning is proposed. We analyse the image features, then design and train a classification model. The final results confirm that this method can effectively classify liver pathological images with high accuracy.
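For illustration only, a classification model of the kind described could be a small convolutional network over pathology-image patches; the layer sizes, number of classes, and inputs below are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class LiverPatchCNN(nn.Module):
    """A minimal convolutional classifier for liver pathology image patches."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                  # x: (batch, 3, H, W) RGB patches
        return self.classifier(self.features(x))

# Hypothetical forward and loss on a toy batch:
model = LiverPatchCNN()
loss = nn.CrossEntropyLoss()(model(torch.randn(4, 3, 224, 224)), torch.tensor([0, 1, 0, 1]))
```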


The increasing incidence of brain diseases and the need for early diagnosis of ailments such as tumours, Alzheimer's, epilepsy, and Parkinson's have riveted the attention of researchers. Machine learning practices, specifically deep learning, are considered a beneficial diagnostic tool. Deep learning approaches to neuroimaging will assist the computer-aided analysis of neurological diseases, and feature extraction from neuroimages using artificial neural networks leads to better diagnoses. In this study, these brain diseases are revisited to consolidate the methodologies reported by various authors in the literature.



Author(s):  
A. Amyar ◽  
R. Modzelewski ◽  
S. Ruan

Abstract The fast spread of the novel coronavirus COVID-19 has aroused worldwide interest and concern and has caused more than one and a half million confirmed cases to date. To combat this spread, medical imaging such as computed tomography (CT) can be used for diagnosis. Automatic detection tools are necessary to help screen for COVID-19 pneumonia using chest CT imaging. In this work, we propose a multitask deep learning model to jointly identify COVID-19 patients and segment COVID-19 lesions from chest CT images. Our motivation is to leverage useful information contained in multiple related tasks to improve both segmentation and classification performance. Our architecture is composed of an encoder, two decoders for reconstruction and segmentation, and a multi-layer perceptron for classification. The proposed model is evaluated and compared with other image segmentation and classification techniques using a dataset of 1044 patients, including 449 patients with COVID-19, 100 normal cases, 98 with lung cancer, and 397 with other kinds of pathology. The obtained results show very encouraging performance for our method, with a dice coefficient higher than 0.78 for segmentation and an area under the ROC curve higher than 93% for classification.
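A compact sketch of a shared-encoder multitask network of this shape (one encoder, reconstruction and segmentation decoders, and an MLP-style classification head); channel sizes, the single-resolution layout, and loss weights are simplifications, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class MultiTaskCovidNet(nn.Module):
    """Shared encoder with three heads: image reconstruction, lesion segmentation,
    and COVID / non-COVID classification."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(1, 32), nn.MaxPool2d(2),
                                     conv_block(32, 64), nn.MaxPool2d(2))
        self.decoder_rec = nn.Sequential(nn.Upsample(scale_factor=4), conv_block(64, 32),
                                         nn.Conv2d(32, 1, 1))
        self.decoder_seg = nn.Sequential(nn.Upsample(scale_factor=4), conv_block(64, 32),
                                         nn.Conv2d(32, 1, 1))
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                      # x: (batch, 1, H, W) CT slices
        z = self.encoder(x)
        return self.decoder_rec(z), self.decoder_seg(z), self.classifier(z)

# Joint loss over the three tasks (equal weights are arbitrary placeholders):
model = MultiTaskCovidNet()
x = torch.randn(2, 1, 128, 128)
rec, seg_logits, cls_logits = model(x)
seg_target, cls_target = torch.zeros(2, 1, 128, 128), torch.tensor([[1.0], [0.0]])
loss = (nn.MSELoss()(rec, x)
        + nn.BCEWithLogitsLoss()(seg_logits, seg_target)
        + nn.BCEWithLogitsLoss()(cls_logits, cls_target))
```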

