Deep learning neural networks to differentiate Stafne’s bone cavity from pathological radiolucent lesions of the mandible in heterogeneous panoramic radiography

PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254997
Author(s):  
Ari Lee ◽  
Min Su Kim ◽  
Sang-Sun Han ◽  
PooGyeon Park ◽  
Chena Lee ◽  
...  

This study aimed to develop a high-performance deep learning algorithm to differentiate Stafne’s bone cavity (SBC) from cysts and tumors of the jaw based on images acquired from various panoramic radiographic systems. Data sets included 176 Stafne’s bone cavities and 282 odontogenic cysts and tumors of the mandible (98 dentigerous cysts, 91 odontogenic keratocysts, and 93 ameloblastomas) that required surgical removal. Panoramic radiographs were obtained using three different imaging systems. The trained model showed 99.25% accuracy, 98.08% sensitivity, and 100% specificity for SBC classification, with one misclassified SBC case. The algorithm was confirmed to recognize the typical imaging features of SBC in panoramic radiography regardless of the imaging system when traced back with the Grad-CAM and Guided Grad-CAM methods. The deep learning model for differentiating SBC from odontogenic cysts and tumors showed high performance with images obtained from multiple panoramic systems. The present algorithm is expected to be a useful tool for clinicians, as it diagnoses SBCs on panoramic radiographs and prevents unnecessary examinations for patients. Additionally, it would support clinicians in deciding on further examinations or referral to surgeons in cases where even experts are unsure of the diagnosis using panoramic radiography alone.
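
The paper traces the model's decisions with Grad-CAM and Guided Grad-CAM. As an illustration of the general Grad-CAM technique only (not the authors' exact pipeline), a minimal sketch in PyTorch might look like the following; the backbone (a generic ResNet-18), the hooked layer, and the input tensor are all assumptions.

```python
# Minimal Grad-CAM sketch (illustrative only; not the authors' exact pipeline).
# The backbone, target layer, and input are hypothetical placeholders.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in for the trained classifier
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block (assumption: layer4 is the target layer).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)           # placeholder radiograph tensor
scores = model(x)
class_idx = scores.argmax(dim=1)          # e.g. the "SBC" class in the paper
scores[0, class_idx].backward()

# Channel weights = global-average-pooled gradients; weighted sum -> heatmap.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```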

2020 ◽  
Vol 9 (6) ◽  
pp. 1839
Author(s):  
Hyunwoo Yang ◽  
Eun Jo ◽  
Hyung Jun Kim ◽  
In-ho Cha ◽  
Young-Soo Jung ◽  
...  

Patients with odontogenic cysts and tumors may have to undergo serious surgery unless the lesion is properly detected at an early stage. The purpose of this study is to evaluate the diagnostic performance of the real-time object-detecting deep convolutional neural network You Only Look Once (YOLO) v2, a deep learning algorithm that can both detect and classify an object at the same time, on panoramic radiographs. In this study, 1602 lesions on panoramic radiographs taken from 2010 to 2019 at Yonsei University Dental Hospital were selected as a database. Images were classified and labeled into four categories: dentigerous cyst, odontogenic keratocyst, ameloblastoma, and no lesion. Comparative analysis among three groups (YOLO, oral and maxillofacial surgeons, and general practitioners) was performed in terms of precision, recall, accuracy, and F1 score. While YOLO ranked highest among the three groups (precision = 0.707, recall = 0.680), the performance differences between the machine and the clinicians were statistically insignificant. The results of this study indicate the usefulness of auto-detecting convolutional networks for detecting certain pathologies and thus preventing morbidity in the field of oral and maxillofacial surgery.
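
The comparison rests on precision, recall, accuracy, and F1 score computed from detection counts. A minimal sketch of those formulas follows; the counts are invented placeholders, not values from the study.

```python
# Hedged sketch: detection metrics as used for the YOLO-vs-clinician comparison.
# The counts below are hypothetical, not data from the study.
def detection_metrics(tp: int, fp: int, fn: int, tn: int = 0):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total if total else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, accuracy, f1

p, r, acc, f1 = detection_metrics(tp=68, fp=28, fn=32)  # illustrative counts only
print(f"precision={p:.3f}, recall={r:.3f}, accuracy={acc:.3f}, F1={f1:.3f}")
```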


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
C Dockerill ◽  
W Woodward ◽  
A McCourt ◽  
A Beqiri ◽  
A Parker ◽  
...  

Background: Stress echocardiography has become established as the most widely applied non-invasive imaging test for diagnosis of coronary artery disease within the UK. However, stress echocardiography has been substantially qualitative, rather than quantitative, based on visual wall motion assessment. For the first time, we have identified and validated quantitative descriptors of cardiac geometry and motion, extracted from ultrasound images acquired using contrast agents in an automated way.
Purpose: To establish whether these novel imaging features can be generated in an automated, quantifiable and reproducible way from images acquired with perfluoropropane contrast, as well as investigating how these extracted measures compare to those extracted from sulphur hexafluoride contrast and non-contrast studies.
Methods: 100 patients who received perfluoropropane contrast during their stress echocardiogram were recruited. Their stress echocardiography images were processed through a deep learning algorithm. Novel feature values were recorded and a subset of 10 studies were repeated. The automated measures of global longitudinal strain (GLS) and ejection fraction (EF) extracted from these images were compared to values previously extracted from sulphur hexafluoride contrast and non-contrast images using the same software.
Results: A full set of 31 novel imaging features were successfully extracted from 79 studies acquired using the perfluoropropane contrast agent with a dropout rate of 14% (n=92, 8 incomplete image sets). Repeated analysis in a subset of 10 perfluoropropane cases demonstrated excellent reproducibility of the extracted feature values (R2=1). Automated values of GLS and EF, at both rest (GLS = −16.4±4.8%, EF = 63±13%) and stress stages (GLS = −17.7±5.8%, EF = 68±11%), were extracted from 83 perfluoropropane studies, with a dropout rate of 16% (n=99, fewer incomplete sets as short axis view not required). The ranges of GLS and EF measures extracted from the perfluoropropane images were comparable to the other contrast studies (n=222) (Rest GLS = −16.8±5.8%, Rest EF = 63±10%; Stress GLS = −19.1±6.7%, Stress EF = 71±9%) and non-contrast studies (n=86) (Rest GLS = −15.7±5.3%, Rest EF = 57±10%; Stress GLS = −17.3±6.4%, Stress EF = 61±14%).
Conclusions: Novel features and clinically relevant measures were extracted from images acquired using perfluoropropane contrast for the first time in a fully automated and reproducible way using a deep learning algorithm. The analysis failure rate and generated measures are comparable to those extracted from images using other commonly used sulphur hexafluoride contrast agents and non-contrast stress echocardiography studies. These findings demonstrate that deep learning algorithms can be used for automated quantitative analysis of stress echocardiograms acquired using various contrast agents and in non-contrast studies to improve stress echocardiography practice.
Funding Acknowledgement: Type of funding source: Private company. Main funding source(s): Lantheus Medical Imaging, Inc.
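
Reproducibility of the automated feature values was summarised with R². A small sketch of that check is shown below, using numpy rather than the study's software and invented repeated measurements.

```python
# Hedged sketch: R^2 between two automated analysis runs of the same studies.
# The arrays are invented placeholders, not the study's extracted feature values.
import numpy as np

run_1 = np.array([-16.2, -17.5, -15.8, -18.1, -14.9])  # e.g. GLS (%) from first run
run_2 = np.array([-16.2, -17.5, -15.8, -18.1, -14.9])  # same studies, repeated run

r = np.corrcoef(run_1, run_2)[0, 1]
print(f"R^2 = {r**2:.3f}")  # identical repeated values give R^2 = 1, as reported
```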


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Wei Zhang ◽  
Yang Wang

This study aimed to explore deep learning-based CT imaging features and glucocorticoid treatment in asthmatic children with small airway obstruction. A total of 145 eligible patients in the hospital were included and randomly assigned to receive aerosolized glucocorticoid (n = 45), aerosolized glucocorticoid combined with a bronchodilator (n = 50), or oral steroids (n = 50) for 4 weeks after discharge. Lung function and fractional exhaled nitric oxide (FENO) indexes were measured in each of the three groups, and the effective rates were compared to evaluate the clinical efficacy of glucocorticoids given by different administration routes and in combination, as short-term maintenance treatment after an acute exacerbation of asthma. A deep learning algorithm was used for CT image segmentation: the CT images were sent to a workstation for processing, where a convolution operation was performed on each input pixel. After 4 weeks of maintenance treatment, FEF50%, FEF75%, and MMEF75/25 increased significantly, and FENO decreased significantly (P < 0.01). For the improvement of FEF50%, FEF75%, MMEF75/25, and FENO after maintenance treatment, the oral hormone group was the most effective, followed by the combined aerosol inhalation group, and the hormone aerosol inhalation group was the least effective; the differences among them were statistically significant (P < 0.05). The accuracy of the artificial intelligence segmentation algorithm was 81%. Oral hormone therapy was more effective than local administration in improving small airway function and airway inflammation. Among the aerosol inhalation treatments, the hormone combined with a bronchodilator was more effective than single-drug inhalation in improving small airway obstruction and reducing airway inflammation. Deep learning CT imaging is simple and noninvasive and allows intuitive observation of lung changes in asthma with small airway functional obstruction, giving it high value for clinical diagnosis and evaluation.
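
The abstract describes convolutional processing of each input pixel and reports 81% segmentation accuracy. As a hedged illustration only (the paper does not specify its network), a tiny convolutional segmentation model and a pixel-accuracy check could look like this in PyTorch; all tensors and layer sizes are placeholders.

```python
# Hedged sketch: a tiny convolutional lung-field segmentation model plus a
# pixel accuracy check. The architecture and data are illustrative assumptions.
import torch
import torch.nn as nn

seg_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),  # per-pixel logit: lung vs. background
)

ct_slice = torch.randn(1, 1, 256, 256)                        # placeholder CT slice
ground_truth = torch.randint(0, 2, (1, 1, 256, 256)).float()  # placeholder mask

pred_mask = (torch.sigmoid(seg_net(ct_slice)) > 0.5).float()
pixel_accuracy = (pred_mask == ground_truth).float().mean().item()
print(f"pixel accuracy: {pixel_accuracy:.2%}")  # the study reports 81%
```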


2020 ◽  
Author(s):  
S. Duchesne ◽  
D. Gourdeau ◽  
P. Archambault ◽  
C. Chartrand-Lefebvre ◽  
L. Dieumegarde ◽  
...  

Background: Decision scores and ethically mindful algorithms are being established to adjudicate mechanical ventilation in the context of potential resource shortages due to the current onslaught of COVID-19 cases. There is a need for a reproducible and objective method to provide quantitative information for those scores.
Purpose: Towards this goal, we present a retrospective study testing the ability of a deep learning algorithm to extract features from chest X-rays (CXR) to track and predict radiological evolution.
Materials and Methods: We trained a repurposed deep learning algorithm on the CheXnet open dataset (224,316 chest X-ray images of 65,240 unique patients) to extract features that mapped to radiological labels. We collected CXRs of COVID-19-positive patients from two open-source datasets (last accessed on April 9, 2020) (Italian Society for Medical and Interventional Radiology and MILA). Data comprised 60 pairs of sequential CXRs from 40 COVID-19 patients (mean age ± standard deviation: 56 ± 13 years; 23 men, 10 women, seven not reported) and were categorized into three categories: “Worse”, “Stable”, or “Improved” on the basis of radiological evolution ascertained from images and reports. Receiver operating characteristic (ROC) analyses and Mann-Whitney tests were performed.
Results: On patients from the CheXnet dataset, the area under the ROC curves ranged from 0.71 to 0.93 for seven imaging features and one diagnosis. Deep learning features between the “Worse” and “Improved” outcome categories were significantly different for three radiological signs and one diagnosis (“Consolidation”, “Lung Lesion”, “Pleural effusion” and “Pneumonia”; all P < 0.05). Features from the first CXR of each pair could correctly predict the outcome category between “Worse” and “Improved” cases with 82.7% accuracy.
Conclusion: CXR deep learning features show promise for classifying the disease trajectory. Once validated in studies incorporating clinical data and with larger sample sizes, this information may be considered to inform triage decisions.
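
The pipeline extracts learned features from CXRs and evaluates them with ROC analysis. A minimal sketch of that idea follows, using a generic torchvision DenseNet-121 backbone (chosen only because CheXnet is DenseNet-based) and scikit-learn, with invented labels; it is not the authors' repurposed model.

```python
# Hedged sketch: pooled deep features from CXRs and an AUC check.
# Weights, labels, and the feature-to-outcome mapping are illustrative assumptions.
import torch
import numpy as np
from torchvision import models
from sklearn.metrics import roc_auc_score

backbone = models.densenet121(weights=None)   # stand-in for the repurposed model
backbone.classifier = torch.nn.Identity()     # keep the 1024-d pooled features
backbone.eval()

with torch.no_grad():
    cxr_batch = torch.randn(8, 3, 224, 224)   # placeholder chest X-rays
    features = backbone(cxr_batch).numpy()    # shape: (8, 1024)

# Toy ROC analysis: one feature against a binary "Worse" vs "Improved" label.
labels = np.array([0, 1, 0, 1, 1, 0, 1, 0])   # invented outcome categories
print("AUC:", roc_auc_score(labels, features[:, 0]))
```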


2021 ◽  
Vol 22 (18) ◽  
pp. 10019
Author(s):  
Apichat Suratanee ◽  
Kitiporn Plaimas

Functional annotation of genes of unknown function reveals unidentified functions that can enhance our understanding of complex genome communications. A common approach for inferring gene function is the ortholog-based method. However, genetic data alone are often not enough to provide information for functional annotation. Thus, integrating other sources of data can increase the possibility of retrieving annotations. Network-based methods are efficient techniques for exploring interactions among genes and can be used for functional inference. In this study, we present an analysis framework for inferring the functions of Plasmodium falciparum genes based on connection profiles in a heterogeneous network between human and Plasmodium falciparum proteins. These profiles were fed into a hybrid deep learning algorithm to predict the orthologs of genes of unknown function. The results show high performance of the model’s predictions, with an AUC of 0.89. One hundred and twenty-one predicted pairs with high prediction scores were selected for inferring their functions using statistical enrichment analysis. With this method, PF3D7_1248700 and PF3D7_0401800 were found to be involved in muscle contraction and striated muscle tissue development, while PF3D7_1303800 and PF3D7_1201000 were found to be related to protein dephosphorylation. In conclusion, combining a heterogeneous network and a hybrid deep learning technique allows us to identify unknown gene functions of malaria parasites. This approach is generalizable and can be applied to other diseases, enhancing the field of biomedical science.
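
The core idea is scoring candidate human-P. falciparum protein pairs from their network connection profiles. The hedged sketch below uses a plain feed-forward scorer as a stand-in for the paper's hybrid architecture, which is not reproduced here; all dimensions and data are invented placeholders.

```python
# Hedged sketch: scoring candidate ortholog pairs from connection profiles.
# A simple feed-forward scorer stands in for the paper's hybrid model.
import torch
import torch.nn as nn

profile_dim = 128   # assumed length of each protein's connection profile

scorer = nn.Sequential(
    nn.Linear(2 * profile_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),               # logit: ortholog pair vs. not
)

human_profile = torch.randn(1, profile_dim)   # placeholder connection profile
pf_profile = torch.randn(1, profile_dim)      # e.g. for PF3D7_1248700
pair = torch.cat([human_profile, pf_profile], dim=1)
score = torch.sigmoid(scorer(pair))
print(f"predicted ortholog-pair probability: {score.item():.3f}")
```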


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Chia-Yen Lee ◽  
Guan-Lin Chen ◽  
Zhong-Xuan Zhang ◽  
Yi-Hong Chou ◽  
Chih-Chung Hsu

Sonography is currently an effective means of cancer screening and diagnosis because it is convenient and harmless to humans. Traditionally, lesion boundary segmentation is performed first and classification second to judge whether a tumor is benign or malignant. In addition, sonograms often contain much speckle noise and intensity inhomogeneity. This study proposes a novel benign or malignant tumor classification system, which comprises intensity inhomogeneity correction and a stacked denoising autoencoder (SDAE) and is suitable for small datasets. A classifier is built from features extracted during the multilayer training of the SDAE; automatic analysis of imaging features by the deep learning algorithm is applied to image classification, making the system efficient and robust at discrimination. In this study, two datasets (private and public) are used to train the deep learning models. For each dataset, two groups of test images are compared: the original images and the images after intensity inhomogeneity correction. The results show that when the deep learning algorithm is applied to sonograms after intensity inhomogeneity correction, tumor classification accuracy increases significantly. This study demonstrates that it is important to use preprocessing to highlight image features before feeding them to deep learning models; in this way, classification accuracy is better than when the original images are used directly.
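
For context on the SDAE building block, the hedged sketch below shows one denoising autoencoder layer of the kind that would be stacked; layer sizes, noise level, and training data are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: one denoising autoencoder layer of the kind stacked in an SDAE.
# Dimensions, noise level, and data are invented placeholders.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, in_dim=1024, hidden_dim=256, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)  # corrupt the input
        code = self.encoder(noisy)
        return self.decoder(code), code

ae = DenoisingAE()
optimizer = torch.optim.Adam(ae.parameters(), lr=1e-3)
patches = torch.rand(32, 1024)            # placeholder flattened sonogram patches

for _ in range(5):                        # toy training loop
    recon, _ = ae(patches)
    loss = nn.functional.mse_loss(recon, patches)  # reconstruct the clean input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# In an SDAE, the trained encoder feeds the next layer, and a classifier is
# trained on the topmost code, as described in the abstract.
```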


Lab on a Chip ◽  
2021 ◽  
Author(s):  
Keondo Lee ◽  
Seong-Eun Kim ◽  
Junsang Doh ◽  
Keehoon Kim ◽  
Wan Kyun Chung

The image-activated cell sorter employs a significantly simplified operational procedure based on a syringe connected to a piezoelectric actuator and high-performance inference with TensorRT Integration.
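
Image-activated sorting of this kind depends on fast classification. Assuming "TensorRT Integration" refers to TensorFlow-TensorRT, a minimal conversion sketch is shown below; the SavedModel paths are hypothetical and this is a generic workflow, not the authors' sorter code.

```python
# Hedged sketch: accelerating an image classifier with TensorFlow-TensorRT.
# The SavedModel paths are hypothetical placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="cell_classifier_savedmodel",  # assumed model path
)
converter.convert()                    # build the TensorRT-optimized graph
converter.save("cell_classifier_trt")  # write the accelerated SavedModel
```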


2021 ◽  
Vol 2132 (1) ◽  
pp. 012003
Author(s):  
Song He ◽  
Hao Xue ◽  
Lejiang Guo ◽  
Xin Chen ◽  
Jun Hu

In order to vividly demonstrate real-world applications of a deep learning-based intelligent vehicle, especially unmanned-driving cases that integrate technologies such as automatic data acquisition, data model construction, automatic curve detection, traffic sign recognition, and verification of unmanned driving, this study adopts an M-type model intelligent vehicle embedded with Baidu's high-performance Edge Board. The vehicle is trained under the PaddlePaddle deep learning framework and the Baidu AI Studio development platform. Through the design of an autonomous control scheme and continuous refinement of the deep learning algorithm, an intelligent vehicle model based on PaddlePaddle deep learning is presented. The vehicle can drive automatically on a simulated track; in addition, it can distinguish several traffic signs and respond accordingly.
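
Traffic-sign recognition on the Edge Board would rest on a small convolutional classifier. A minimal PaddlePaddle 2.x sketch of such a network follows; the architecture, class count, and input resolution are assumptions, not details from the paper.

```python
# Hedged sketch: a small traffic-sign classifier in PaddlePaddle 2.x.
# Layer sizes, number of sign classes, and input resolution are invented placeholders.
import paddle
import paddle.nn as nn

num_sign_classes = 5   # assumed number of traffic signs the vehicle distinguishes

sign_net = nn.Sequential(
    nn.Conv2D(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2D(2),
    nn.Conv2D(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2D(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, num_sign_classes),
)

frame = paddle.randn([1, 3, 64, 64])          # placeholder camera frame
logits = sign_net(frame)
predicted_sign = paddle.argmax(logits, axis=1)
print("predicted sign class:", predicted_sign.numpy()[0])
```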

