Automatic Classification of A-Lines in Intravascular OCT Images Using Deep Learning and Estimation of Attenuation Coefficients

2021 ◽  
Vol 11 (16) ◽  
pp. 7412
Author(s):  
Grigorios-Aris Cheimariotis ◽  
Maria Riga ◽  
Kostas Haris ◽  
Konstantinos Toutouzas ◽  
Aggelos K. Katsaggelos ◽  
...  

Intravascular Optical Coherence Tomography (IVOCT) images provide important insight into every aspect of atherosclerosis. Specifically, the extent of plaque and its type, which are indicative of the patient’s condition, are assessed better in OCT images than with other in vivo modalities. The large amount of imaging data per patient requires automatic methods for rapid results. An effective step towards automatic plaque detection and characterization is the classification of axial lines (A-lines) into normal and various plaque types. In this work, a novel automatic method for A-line classification is proposed. The method employed convolutional neural networks (CNNs) for classification at its core, preceded by two pre-processing steps, arterial wall segmentation and an OCT-specific (depth-resolved) transformation, and followed by a post-processing step based on a majority vote over the classifications. The key step was the OCT-specific transformation, which was based on estimating the attenuation coefficient at every pixel of the OCT image. The dataset used for training and testing consisted of 183 images from 33 patients, in which four different plaque types were delineated. The method was evaluated by cross-validation. The mean accuracy, sensitivity, and specificity were 74.73%, 87.78%, and 61.45%, respectively, when classifying A-lines into plaque and normal. When plaque A-lines were further classified into fibrolipidic and fibrocalcific, the overall accuracy was 83.47% for A-lines of OCT-specific transformed images and 74.94% for A-lines of the original images. This large improvement in accuracy indicates the advantage of using attenuation coefficients when characterizing plaque types. The proposed automatic deep-learning pipeline constitutes a positive contribution to the accurate classification of A-lines in intravascular OCT images.
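The abstract does not spell out the attenuation estimator, but a widely used depth-resolved formulation (e.g., Vermeer et al., 2014) computes, for each depth pixel i of an A-line, μ_i = I_i / (2Δ Σ_{j>i} I_j), where I is the OCT intensity and Δ the axial pixel size. The NumPy sketch below implements that estimator under this assumption; the function name and pixel size are illustrative.

```python
import numpy as np

def depth_resolved_attenuation(a_line: np.ndarray, pixel_size_mm: float) -> np.ndarray:
    """Per-pixel attenuation coefficient along one A-line.

    Implements the depth-resolved estimator mu[i] = I[i] / (2 * dz * sum_{j>i} I[j]);
    the paper's exact estimator may differ.
    """
    intensity = np.clip(a_line.astype(np.float64), 0.0, None)
    # Cumulative intensity remaining below each pixel (exclusive of the pixel itself).
    tail_sum = np.cumsum(intensity[::-1])[::-1] - intensity
    eps = 1e-12  # guards against division by zero at the bottom of the A-line
    return intensity / (2.0 * pixel_size_mm * (tail_sum + eps))

# Toy usage: one synthetic A-line of 512 depth pixels, ~5 um axial pixel spacing.
rng = np.random.default_rng(0)
a_line = np.exp(-0.01 * np.arange(512)) * (1.0 + 0.1 * rng.standard_normal(512))
mu = depth_resolved_attenuation(a_line, pixel_size_mm=0.005)
print(mu.shape, float(mu[:10].mean()))
```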


2021 ◽  
Author(s):  
Roberto Augusto Philippi Martins ◽  
Danilo Silva

The lack of labeled data is one of the main obstacles to the development of deep learning models, as they rely on large labeled datasets to achieve high accuracy in complex tasks. Our objective is to evaluate the performance gain from additional unlabeled data in the training of a deep learning model on medical imaging data. We present a semi-supervised learning algorithm that uses a teacher-student paradigm to leverage unlabeled data in the classification of chest X-ray images. Using our algorithm on the ChestX-ray14 dataset, we achieve a substantial increase in performance when using small labeled datasets. With our method, a model achieves an AUROC of 0.822 with only 2% labeled data and 0.865 with 5% labeled data, while a fully supervised method achieves an AUROC of 0.807 with 5% labeled data and only 0.845 with 10%.
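As an illustration of the teacher-student paradigm described above, the sketch below trains a teacher on the small labeled set, pseudo-labels the unlabeled set with the teacher's soft predictions, and trains a student on the union. The backbone, tensors, and hyperparameters are toy stand-ins, and the authors' algorithm may add details (e.g., confidence filtering or augmentation) not shown here.

```python
import torch
import torch.nn as nn

# Hypothetical tiny backbone standing in for a chest X-ray classifier.
def make_model() -> nn.Module:
    return nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 1))

def train(model, images, targets, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # accepts soft (probabilistic) targets
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images).squeeze(1), targets)
        loss.backward()
        opt.step()
    return model

# Synthetic stand-ins for a small labeled set and a larger unlabeled set.
x_lab, y_lab = torch.randn(64, 1, 32, 32), torch.randint(0, 2, (64,)).float()
x_unlab = torch.randn(512, 1, 32, 32)

teacher = train(make_model(), x_lab, y_lab)            # 1) teacher fits the labeled data
with torch.no_grad():                                  # 2) teacher pseudo-labels the rest
    pseudo = torch.sigmoid(teacher(x_unlab).squeeze(1))
x_all = torch.cat([x_lab, x_unlab])                    # 3) student trains on both
y_all = torch.cat([y_lab, pseudo])
student = train(make_model(), x_all, y_all)
```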


2021 ◽  
Vol 15 ◽  
Author(s):  
Laura Tomaz Da Silva ◽  
Nathalia Bianchini Esper ◽  
Duncan D. Ruiz ◽  
Felipe Meneguzzi ◽  
Augusto Buchweitz

Problem: Brain imaging studies of mental health and neurodevelopmental disorders have recently included machine learning approaches to identify patients based solely on their brain activation. The goal is to identify brain-related features that generalize from smaller samples of data to larger ones; in the case of neurodevelopmental disorders, finding these patterns can help understand differences in brain function and development that underpin early signs of risk for developmental dyslexia. The success of machine learning classification algorithms on neurofunctional data has been limited to typically homogeneous datasets of a few dozen participants. More recently, larger brain imaging datasets have allowed deep learning techniques to classify brain states and clinical groups solely from neurofunctional features. Indeed, deep learning techniques can provide helpful tools for classification in healthcare applications, including the classification of structural 3D brain images. The adoption of deep learning approaches allows for incremental improvements in classification performance on larger functional brain imaging datasets, but it still lacks diagnostic insight into the underlying brain mechanisms associated with disorders; moreover, a related challenge is to provide more clinically relevant explanations from the neural features that inform classification. Methods: We address this challenge by leveraging two network visualization techniques in the convolutional neural network layers responsible for learning high-level features. Using such techniques, we are able to provide meaningful images for expert-backed insights into the condition being classified. We demonstrate the approach on a dataset that includes children diagnosed with developmental dyslexia and typically reading children. Results: Our results show accurate classification of developmental dyslexia (94.8%) from the brain imaging alone, while providing automatic visualizations of the features involved that match contemporary neuroscientific knowledge (brain regions involved in the reading process for the dyslexic reader group, and brain regions associated with strategic control and attention processes for the typical reader group). Conclusions: Our visual explanations of deep learning models turn their accurate yet opaque conclusions into evidence about the condition being studied.
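The two visualization techniques are not named in this summary; as one representative member of the family, the sketch below computes an occlusion-sensitivity map (mask a patch of the input and measure the drop in the class score). The stand-in model, input size, and patch parameters are illustrative, not the authors' setup.

```python
import torch
import torch.nn as nn

def occlusion_map(model: nn.Module, image: torch.Tensor, target: int,
                  patch: int = 8, stride: int = 8) -> torch.Tensor:
    """Score drop when a patch of the (C, H, W) input is masked; larger values
    mark regions the model relies on for class `target`."""
    model.eval()
    with torch.no_grad():
        base = model(image.unsqueeze(0))[0, target].item()
        _, h, w = image.shape
        heat = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                masked = image.clone()
                masked[:, y:y + patch, x:x + patch] = 0.0   # occlude one patch
                heat[i, j] = base - model(masked.unsqueeze(0))[0, target].item()
    return heat

# Toy usage with a stand-in 2-class classifier on a 64x64 single-channel input.
net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
heatmap = occlusion_map(net, torch.randn(1, 64, 64), target=0)
print(heatmap.shape)  # torch.Size([8, 8])
```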


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Na Yao ◽  
Fuchuan Ni ◽  
Ziyan Wang ◽  
Jun Luo ◽  
Wing-Kin Sung ◽  
...  

Abstract Background Peach diseases can cause severe yield reduction and decreased quality in peach production. Rapid and accurate detection and identification of peach diseases is therefore of great importance. Deep learning has been applied to detect peach diseases from imaging data; however, peach disease image data are difficult to collect and the samples are imbalanced, and popular deep networks perform poorly under these conditions. Results This paper proposes an improved Xception network, named L2MXception, which incorporates regularization terms based on the L2-norm and the mean. On the peach disease image dataset collected, seven mainstream deep learning models were compared in detail, and an improved loss function integrating the L2-norm and mean regularization terms (L2M Loss) was introduced. Experiments showed that the Xception model with the L2M Loss outperformed the current best method for peach disease prediction. Compared to the original Xception model, the validation accuracy of L2MXception reached 93.85%, an increase of 28.48%. Conclusions The proposed L2MXception network may have great potential for the early identification of peach diseases.
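The precise form of the L2M Loss is not given in the abstract; one plausible reading, sketched below with illustrative coefficients, adds an L2-norm penalty and a mean penalty over the model weights to the usual cross-entropy. The paper's exact formulation may well differ.

```python
import torch
import torch.nn as nn

def l2m_loss(logits, targets, model, lam_l2=1e-4, lam_mean=1e-4):
    """Hypothetical reading of the L2M Loss: cross-entropy plus an L2-norm
    penalty and a mean penalty over the model weights."""
    ce = nn.functional.cross_entropy(logits, targets)
    params = torch.cat([p.flatten() for p in model.parameters()])
    return ce + lam_l2 * params.pow(2).sum() + lam_mean * params.mean().abs()

# Toy usage with a stand-in classifier (Xception itself is far larger);
# 7 output classes are assumed purely for illustration.
net = nn.Sequential(nn.Flatten(), nn.Linear(16, 7))
logits = net(torch.randn(4, 16))
loss = l2m_loss(logits, torch.randint(0, 7, (4,)), net)
loss.backward()
```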


2021 ◽  
Author(s):  
Jae Ho Sohn ◽  
Yixin Chen ◽  
Dmytro Lituiev ◽  
Jaewon Yang ◽  
Karen Ordovas ◽  
...  

Abstract Our objective was to develop deep learning models with chest radiograph data to predict healthcare costs and to classify top-50% spenders. 21,872 frontal chest radiographs were retrospectively collected from 19,524 patients with at least one year of spending data. Among these patients, 11,003 had 3 years of cost data and 1,678 had 5 years. Model performance was measured with the area under the receiver operating characteristic curve (ROC-AUC) for classification of top-50% spenders and with Spearman ρ for prediction of healthcare cost. The best model predicting 1-year (N=21,872) expenditure achieved a ROC-AUC of 0.806 [95% CI, 0.793-0.819] for top-50% spender classification and a ρ of 0.561 [0.536-0.586] for regression. Similarly, the models achieved a ROC-AUC of 0.771 [0.750-0.794] and a ρ of 0.524 [0.489-0.559] for 3-year (N=12,395) expenditure, and a ROC-AUC of 0.729 [0.667-0.729] and a ρ of 0.424 [0.324-0.529] for 5-year (N=1,779) expenditure. Our deep learning model demonstrated the feasibility of predicting healthcare expenditure and of classifying top-50% healthcare spenders at 1, 3, and 5 years, implying the feasibility of combining deep learning with information-rich imaging data to uncover hidden associations that may elude physicians. Such a model could be a starting point for accurate budgeting in healthcare reimbursement models.
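The abstract does not describe the architecture, but the task pairs a top-50%-spender classification with a cost regression, which suggests a shared image encoder feeding two heads. The sketch below is a hypothetical minimal version, with Spearman ρ computed on toy data; every layer size and name is illustrative.

```python
import torch
import torch.nn as nn
from scipy.stats import spearmanr

class CostNet(nn.Module):
    """Illustrative two-head model: a shared encoder feeds a top-50%-spender
    logit and a cost regression output."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
        self.cls_head = nn.Linear(128, 1)   # top-50% spender (logit)
        self.reg_head = nn.Linear(128, 1)   # predicted cost (e.g., log dollars)
    def forward(self, x):
        z = self.encoder(x)
        return self.cls_head(z).squeeze(1), self.reg_head(z).squeeze(1)

net = CostNet()
x = torch.randn(8, 1, 64, 64)                 # stand-in radiographs
logit, pred_cost = net(x)
true_cost = torch.rand(8) * 1e4               # stand-in spending data
rho, _ = spearmanr(pred_cost.detach().numpy(), true_cost.numpy())
print(f"Spearman rho on toy data: {rho:.3f}")
```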


Author(s):  
Mustafa Kara ◽  
Zeynep Öztürk ◽  
Sergin Akpek ◽  
Ayşegül Turupcu

Advancements in deep learning and the availability of medical imaging data have led to the use of CNN-based architectures in disease diagnostic assistance systems. Despite the widespread use of reverse transcription-polymerase chain reaction (RT-PCR) based tests in COVID-19 diagnosis, CT imaging offers a practical supplement owing to its high sensitivity. Here, we study the classification of COVID-19 pneumonia (CP) and non-COVID-19 pneumonia (NCP) in chest CT scans using efficient deep learning methods that can be readily implemented by any hospital. We report a deep network framework that combines Convolutional Neural Networks (CNNs) with a bidirectional Long Short-Term Memory (biLSTM) architecture. Our study achieved high specificity (CP: 98.3%, NCP: 96.2%, healthy: 89.3%) and high sensitivity (CP: 84.0%, NCP: 93.9%, healthy: 94.9%) in classifying COVID-19 pneumonia, non-COVID-19 pneumonia, and healthy patients. Next, we provide visual explanations for the CNN predictions with gradient-weighted class activation mapping (Grad-CAM). The results support the model's explainability by showing that Ground Glass Opacities (GGO), indicators of COVID-19 pneumonia, were captured by our CNN. Finally, we implemented our approach in three hospitals, demonstrating its compatibility and efficiency.
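A minimal sketch of the CNN + biLSTM idea follows: a small CNN encodes each CT slice, a bidirectional LSTM aggregates the slice sequence, and a linear head scores the three classes (CP, NCP, healthy). All layer sizes and names are illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SliceSequenceClassifier(nn.Module):
    """Per-slice CNN features aggregated over the scan by a biLSTM."""
    def __init__(self, feat_dim=32, hidden=64, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 4 * 4, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, volume):                                 # volume: (B, T, 1, H, W)
        b, t = volume.shape[:2]
        feats = self.cnn(volume.flatten(0, 1)).view(b, t, -1)  # per-slice features
        seq_out, _ = self.lstm(feats)
        return self.head(seq_out[:, -1])                       # classify from final state

model = SliceSequenceClassifier()
scan = torch.randn(2, 16, 1, 64, 64)   # 2 scans, 16 slices each
print(model(scan).shape)               # torch.Size([2, 3])
```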

