MRI Image Based Classification Model for Lung Tumor Detection Using Convolutional Neural Networks

2021 ◽  
Vol 38 (6) ◽  
pp. 1837-1842
Author(s):  
Makineni Siddardha Kumar ◽  
Kasukurthi Venkata Rao ◽  
Gona Anil Kumar

Lung tumor is a dangerous disease with some of the most severe effects, causing many deaths worldwide. Early medical diagnosis of lung tumor growth can substantially reduce the death rate, because effective treatment options depend strongly on the specific stage of the disease. Medical diagnosis here refers to the use of technology to analyze the internal structure of the organs of the human body. It is an approach to improve patients' quality of life through more accurate and rapid detection, with limited side effects, leading to an effective overall treatment strategy. The main goal of the proposed work is to design a Lung Tumor Detection model using Convolutional Neural Networks (LTD-CNN) with machine learning techniques that cover both micro- and macro-scale image textures encountered in Magnetic Resonance Imaging (MRI) and digital microscopy modalities, respectively. Image pixels can provide critical information on tissue abnormality, and the model performs classification for accurate tumor detection. Advances in Computer-Aided Diagnosis (CAD) help doctors and radiologists analyze lung disease precisely from CT images at an early stage. Different methods are available for lung disease recognition, but many approaches offer limited accuracy and high false-positive rates. The proposed method is compared with traditional models, and the results show that the proposed model detects tumors effectively and more accurately.
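The building blocks such a CNN classifier relies on (convolution over image pixels, a nonlinearity, and pooling to summarize local texture) can be sketched in plain NumPy. This is a minimal illustration, not the paper's LTD-CNN; the image and the edge-detecting kernel are hypothetical stand-ins:

```python
import numpy as np

def conv2d(image, kernel):
    # "valid" 2D convolution: slide the kernel over the image
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # nonlinearity applied to the feature map
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    # downsample by keeping the strongest response in each window
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

image = np.random.default_rng(0).random((64, 64))   # stand-in for an MRI slice
edge_kernel = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)
fmap = max_pool(relu(conv2d(image, edge_kernel)))   # one conv/relu/pool stage
```

In a full network, many such learned kernels are stacked and their pooled responses are fed to a classifier head that outputs tumor/non-tumor scores.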

2021 ◽  
Author(s):  
Shreya Hardaha ◽  
Damodar Reddy ◽  
Saidi Reddy Parne

Abstract Recently, convolutional neural networks (CNNs) have been very successful in segmentation and classification tasks. Magnetic resonance imaging (MRI) is a favored medical imaging method that yields rich information for the diagnosis of different diseases. MRI is becoming increasingly popular due to its non-invasive nature, and for this reason automated processing of this type of image is attracting attention. MRI is widely and effectively used for tumor detection, with brain tumor detection being a popular medical application. Automating segmentation using CNNs helps radiologists reduce the high manual workload of tumor evaluation. CNN classification accuracy depends on network parameters and training data, and CNNs have the benefit of learning image features automatically, directly from multi-modal MRI images. In this survey paper, we present a summary of recent advances in CNN techniques applied to MRI images. The aim of this survey is to discuss various architectures and the factors affecting the performance of CNNs for learning features from different available MRI datasets. Based on the survey, Section III (CNN for MRI analysis) comprises three subsections: A) MRI data and processing, B) CNN dimensionality, and C) CNN architectures.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
A. Wong ◽  
Z. Q. Lin ◽  
L. Wang ◽  
A. G. Chung ◽  
B. Shen ◽  
...  

Abstract A critical step in effective care and treatment planning for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the cause of the coronavirus disease 2019 (COVID-19) pandemic, is the assessment of the severity of disease progression. Chest x-rays (CXRs) are often used to assess SARS-CoV-2 severity, with two important assessment metrics being extent of lung involvement and degree of opacity. In this proof-of-concept study, we assess the feasibility of computer-aided scoring of CXRs of SARS-CoV-2 lung disease severity using a deep learning system. Data consisted of 396 CXRs from SARS-CoV-2 positive patient cases. Geographic extent and opacity extent were scored by two board-certified expert chest radiologists (with 20+ years of experience) and a 2nd-year radiology resident. The deep neural networks used in this study, which we name COVID-Net S, are based on a COVID-Net network architecture. 100 versions of the network were independently learned (50 to perform geographic extent scoring and 50 to perform opacity extent scoring) using random subsets of CXRs from the study, and we evaluated the networks using stratified Monte Carlo cross-validation experiments. The COVID-Net S deep neural networks yielded R² of 0.664 ± 0.032 and 0.635 ± 0.044 between predicted scores and radiologist scores for geographic extent and opacity extent, respectively, in stratified Monte Carlo cross-validation experiments. The best performing COVID-Net S networks achieved R² of 0.739 and 0.741 between predicted scores and radiologist scores for geographic extent and opacity extent, respectively. The results are promising and suggest that the use of deep neural networks on CXRs could be an effective tool for computer-aided assessment of SARS-CoV-2 lung disease severity, although additional studies are needed before adoption for routine clinical use.
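The evaluation protocol described (many independently trained network versions, each scored by R² against radiologist scores on a random held-out subset) can be sketched as below. The synthetic "radiologist scores" and the noisy stand-in predictor are assumptions for illustration, not the COVID-Net S model or its data:

```python
import numpy as np

def r2_score(y_true, y_pred):
    # coefficient of determination between predicted and reference scores
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 396                                   # number of CXRs in the study
truth = rng.uniform(0, 8, n)              # hypothetical radiologist extent scores
pred = truth + rng.normal(0, 1.5, n)      # a stand-in model's predictions

scores = []
for _ in range(50):                       # 50 independently evaluated versions
    test = rng.permutation(n)[: n // 5]   # random held-out subset per run
    scores.append(r2_score(truth[test], pred[test]))

mean_r2, std_r2 = float(np.mean(scores)), float(np.std(scores))
```

Reporting the mean ± standard deviation of R² over the repeated random splits mirrors how the 0.664 ± 0.032 style results above are summarized.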


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3279
Author(s):  
Maria Habib ◽  
Mohammad Faris ◽  
Raneem Qaddoura ◽  
Manal Alomari ◽  
Alaa Alomari ◽  
...  

Maintaining a high quality of conversation between doctors and patients is essential in telehealth services, where efficient and competent communication is important to promote patient health. Assessing the quality of medical conversations is often handled based on a human auditory-perceptual evaluation. Typically, trained experts are needed for such tasks, as they follow systematic evaluation criteria. However, the daily rapid increase of consultations makes the evaluation process inefficient and impractical. This paper investigates the automation of the quality assessment process of patient–doctor voice-based conversations in a telehealth service using a deep-learning-based classification model. For this, the data consist of audio recordings obtained from Altibbi. Altibbi is a digital health platform that provides telemedicine and telehealth services in the Middle East and North Africa (MENA). The objective is to assist Altibbi’s operations team in the evaluation of the provided consultations in an automated manner. The proposed model is developed using three sets of features: features extracted from the signal level, the transcript level, and the signal and transcript levels. At the signal level, various statistical and spectral information is calculated to characterize the spectral envelope of the speech recordings. At the transcript level, a pre-trained embedding model is utilized to encompass the semantic and contextual features of the textual information. Additionally, the hybrid of the signal and transcript levels is explored and analyzed. The designed classification model relies on stacked layers of deep neural networks and convolutional neural networks. Evaluation results show that the model achieved a higher level of precision when compared with the manual evaluation approach followed by Altibbi’s operations team.
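Signal-level feature extraction of the kind described (statistical and spectral descriptors characterizing the spectral envelope of a recording) can be sketched with NumPy alone. The frame size, sample rate, and synthetic tone are illustrative assumptions, not Altibbi's actual pipeline:

```python
import numpy as np

def signal_features(wave, sr=16000, frame=512):
    # split the waveform into frames and take windowed magnitude spectra
    n = len(wave) // frame
    frames = wave[: n * frame].reshape(n, frame)
    mags = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    # spectral centroid: magnitude-weighted mean frequency per frame
    centroid = (mags * freqs).sum(axis=1) / (mags.sum(axis=1) + 1e-9)
    # RMS energy as a simple statistical descriptor of the signal level
    rms = np.sqrt((frames ** 2).mean(axis=1))
    # summarize each descriptor with mean and std over the recording
    return np.array([centroid.mean(), centroid.std(), rms.mean(), rms.std()])

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # stand-in for a consultation recording
feats = signal_features(tone, sr)
```

In the hybrid setting the paper explores, a vector like this would simply be concatenated with the transcript-level embedding before classification.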


2020 ◽  
Vol 22 (Supplement_2) ◽  
pp. ii93-ii93
Author(s):  
Kate Connor ◽  
Emer Conroy ◽  
Kieron White ◽  
Liam Shiels ◽  
William Gallagher ◽  
...  

Abstract Despite magnetic resonance imaging (MRI) being the gold-standard imaging modality in the glioblastoma (GBM) setting, the availability of rodent MRI scanners is relatively limited. CT is a clinically relevant alternative that is more widely available in preclinical research. To study the utility of contrast-enhanced (CE)-CT in GBM xenograft modelling, we optimized CT protocols on two instruments (IVIS-SPECTRUM-CT; TRIUMPH-PET/CT) with and without delivery of contrast. As radiomics analysis may facilitate earlier detection of tumors by CT alone, allowing for deeper analyses of tumor characteristics, we established a radiomic pipeline for extraction and selection of tumor-specific CT-derived radiomic features (including first-order statistics and texture features). U87R-Luc2 GBM cells were implanted orthotopically into NOD/SCID mice (n=25) and tumor growth was monitored via weekly BLI. Concurrently, mice underwent four rounds of CE-CT (IV iomeprol/iopamidol; 50 kV scan). N=45 CE-CT images were semi-automatically delineated and radiomic features were extracted (Pyradiomics 2.2.0) at each imaging timepoint. Differences between normal and tumor tissue were analyzed using recursive selection. Using either CT instrument/contrast, tumors > 0.4 cm³ were not detectable until week 9 post-implantation. Radiomic analysis identified three features (waveletHHH_firstorder_Median, original_glcm_Correlation and waveletLHL_firstorder_Median) at weeks 3 and 6 which may be early indicators of tumor presence. These features are now being assessed in CE-CT scans collected pre- and post-temozolomide treatment in a syngeneic model of mesenchymal GBM. Nevertheless, BLI is significantly more sensitive than CE-CT (either visually or using radiomic-enhanced CT feature extraction), with luciferase-positive tumors detectable at week 1. In conclusion, U87R-Luc2 tumors > 0.4 cm³ are only detectable by week 8 using CE-CT with either CT instrument studied.
Nevertheless, radiomic analysis has defined features which may allow for earlier tumor detection at week 3, thus expanding the utility of CT in the preclinical setting. Overall, this work supports the discovery of putative prognostic preclinical CT-derived radiomic signatures which may ultimately be assessed as early disease markers in patient datasets.
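The recursive selection step over extracted radiomic features can be sketched as a greedy backward elimination. The synthetic feature matrix and the simple correlation criterion are assumptions for illustration; they are not the study's Pyradiomics output or its exact selector:

```python
import numpy as np

def recursive_select(X, y, keep=3):
    # backward elimination: repeatedly drop the feature whose absolute
    # correlation with the tumor/normal label is weakest, until `keep` remain
    names = list(range(X.shape[1]))
    while len(names) > keep:
        corrs = [abs(np.corrcoef(X[:, i], y)[0, 1]) for i in names]
        names.pop(int(np.argmin(corrs)))
    return names

rng = np.random.default_rng(1)
n = 45                                    # one row per delineated CE-CT image
X = rng.normal(size=(n, 8))               # eight hypothetical radiomic features
# label driven by two of the features, mimicking tumor-informative descriptors
y = (X[:, 2] + 0.8 * X[:, 5] + 0.1 * rng.normal(size=n) > 0).astype(float)

picked = recursive_select(X, y, keep=3)   # indices of the surviving features
```

A real pipeline would rank named Pyradiomics outputs (e.g. wavelet and GLCM features) the same way, keeping the handful most discriminative between normal and tumor tissue.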


Author(s):  
Muhammad Irfan Sharif ◽  
Jian Ping Li ◽  
Javeria Amin ◽  
Abida Sharif

Abstract A brain tumor is a group of anomalous cells. The brain is enclosed in a rigid skull; abnormal cells grow and initiate a tumor. Detection of a tumor is a complicated task due to irregular tumor shape. The proposed technique contains four phases: lesion enhancement, feature extraction and selection for classification, localization, and segmentation. Magnetic resonance imaging (MRI) images are noisy due to certain factors, such as image acquisition and fluctuation in the magnetic field coil. Therefore, a homomorphic wavelet filter is used for noise reduction. Later, features are extracted from the InceptionV3 pre-trained model, and informative features are selected using a non-dominated sorted genetic algorithm (NSGA). The optimized features are forwarded for classification, after which tumor slices are passed to a YOLOv2-InceptionV3 model designed for the localization of the tumor region, such that features are extracted from the depth-concatenation (mixed-4) layer of the InceptionV3 model and supplied to YOLOv2. The localized images are passed to McCulloch's Kapur entropy method to segment the actual tumor region. Finally, the proposed technique is validated on three benchmark databases, BRATS 2018, BRATS 2019, and BRATS 2020, for tumor detection. The proposed method achieved greater than 0.90 prediction scores in localization, segmentation, and classification of brain lesions. Moreover, classification and segmentation outcomes are superior to those of existing methods.
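Kapur's entropy thresholding, the segmentation criterion named above, picks the threshold that maximizes the summed entropies of the foreground and background histograms. A minimal sketch follows; the synthetic image with a bright "lesion" patch is an assumption for illustration, and this is plain Kapur's method rather than the authors' McCulloch-optimized variant:

```python
import numpy as np

def kapur_threshold(image, bins=256):
    # histogram the intensities, then search all split points for the one
    # maximizing background entropy + foreground entropy (Kapur's criterion)
    hist, _ = np.histogram(image, bins=bins, range=(0, 1))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t / bins

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.3, (64, 64))              # dark background tissue
img[20:40, 20:40] = rng.uniform(0.7, 1.0, (20, 20))  # bright "lesion" patch
t = kapur_threshold(img)
mask = img > t                                     # segmented lesion region
```

On a localized slice, the mask produced this way is what the pipeline treats as the actual tumor region.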


Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 624
Author(s):  
Stefan Rohrmanstorfer ◽  
Mikhail Komarov ◽  
Felix Mödritscher

With the ever-increasing amount of image data, it has become a necessity to automatically search for and process information in these images. As fashion is captured in images, the fashion sector provides the perfect foundation to be supported by the integration of a service or application built on an image classification model. In this article, the state of the art for image classification is analyzed and discussed. Based on the elaborated knowledge, four different approaches are implemented to successfully extract features out of fashion data. For this purpose, a human-worn fashion dataset with 2567 images was created and then significantly enlarged through image operations. The results show that convolutional neural networks are the undisputed standard for classifying images, and that TensorFlow is the best library to build them. Moreover, through the introduction of dropout layers, data augmentation, and transfer learning, model overfitting was successfully prevented, and it was possible to incrementally improve the validation accuracy on the created dataset from an initial 69% to a final 84%. More distinct apparel like trousers, shoes, and hats was better classified than other upper-body clothes.
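Two of the overfitting countermeasures named above, dropout and data augmentation, can be sketched in NumPy. The dropout rate, padding size, and synthetic images are illustrative assumptions rather than the article's TensorFlow configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate, train=True):
    # inverted dropout: zero a random fraction of activations at train time
    # and rescale the survivors so the expected activation is unchanged
    if not train or rate == 0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def augment(image):
    # simple augmentations that enlarge a small fashion dataset:
    # a horizontal flip and a random crop padded back to the original size
    flipped = image[:, ::-1]
    pad = np.pad(image, 2)
    ox, oy = rng.integers(0, 5, 2)
    cropped = pad[ox:ox + image.shape[0], oy:oy + image.shape[1]]
    return [flipped, cropped]

acts = np.ones((4, 8))               # stand-in layer activations
dropped = dropout(acts, rate=0.5)    # roughly half are zeroed, rest doubled
img = rng.random((28, 28))           # stand-in garment image
variants = augment(img)              # extra training examples per image
```

At inference time `dropout(..., train=False)` is an identity, which is why the inverted scaling is applied during training.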

