Application of deep learning for clinical predictive modeling: An artificial intelligence recognition in spinal metastases.

2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e18050-e18050 ◽  
Author(s):  
Hui Zhao ◽  
Guangyu Yao ◽  
Yiyi Zhou ◽  
Zhiyu Wang

e18050 Background: Spinal metastases are a very common complication of solid malignant tumors and can lead to various skeletal-related events (SREs). Accurate and timely diagnosis is key to improving prognosis. Recently, artificial intelligence (AI) has assisted doctors in many ways through different technologies. In this study, we applied a deep learning model to classify and locate metastatic lesions on spinal CT images. Methods: We set up a dataset of 800 patients' spinal CT images, comprising over 300,000 CT slices, and built a multi-label classification and vertebrae segmentation model to recognize metastatic lesions on spinal CT images. We then trained and tested this model on our dataset, using data augmentation with random flips and random rotations. Sensitivity and specificity were used to evaluate the performance of the model. Results: For normal lesions, the model achieved a sensitivity of 81.7% and a specificity of 92%; for metastatic lesions, it achieved a sensitivity of 84.7% and a specificity of 84.5%. Conclusions: Our model can effectively and accurately discriminate spinal metastases on spinal CT images.
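A minimal sketch of the flip/rotation augmentation described in the abstract, assuming each CT slice is loaded as a single-channel PIL image; the transform choices and angle range are illustrative assumptions, not the authors' implementation:

```python
# Illustrative flip/rotation augmentation for CT slices (parameters assumed).
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),   # random flips
    T.RandomVerticalFlip(p=0.5),
    T.RandomRotation(degrees=15),    # random rotations; angle range assumed
    T.ToTensor(),                    # PIL image -> tensor for the model
])

# augmented = augment(ct_slice)  # ct_slice: one spinal CT slice as a PIL image
```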

2022 ◽  
Vol 2022 ◽  
pp. 1-12
Author(s):  
Wenfa Jiang ◽  
Ganhua Zeng ◽  
Shuo Wang ◽  
Xiaofeng Wu ◽  
Chenyang Xu

Lung cancer is one of the malignant tumors with the highest fatality rates and one of the closest to our daily lives. It poses a great threat to human health and mainly occurs in smokers. In our country, with accelerating industrialization, environmental pollution, and population aging, the burden of lung cancer is increasing day by day. In the diagnosis of lung cancer, Computed Tomography (CT) images are a fairly common visualization tool. CT images visualize all tissues based on the absorption of X-rays. The diseased parts of the lung are collectively referred to as pulmonary nodules; nodules vary in shape, and the risk of cancer varies with nodule shape. Computer-aided diagnosis (CAD) is a very suitable method for this problem because a computer vision model can quickly scan every part of a CT image of the same quality for analysis and is not affected by fatigue or emotion. The latest advances in deep learning enable computer vision models to help doctors diagnose various diseases, and in some cases models have shown greater competitiveness than doctors. Given this opportunity of technological development, the application of computer vision to the medical imaging diagnosis of diseases has important research significance and value. In this paper, we applied a deep learning-based model to CT images of lung cancer and verified its effectiveness for timely and accurate prediction of lung disease. The proposed model has three parts: (i) detection of lung nodules, (ii) false-positive reduction of the detected nodules to filter out "false nodules," and (iii) classification of benign and malignant lung nodules. Furthermore, different network structures and loss functions were designed and realized at different stages. Additionally, to fine-tune the proposed deep learning-based model and improve its accuracy in lung nodule detection, Noudule-Net, a detection network structure that combines U-Net and RPN, is proposed. Experimental observations verify that the proposed scheme substantially improves the accuracy and precision of detection for the disease in question.
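An illustrative skeleton of the three-stage pipeline described above (detection, false-positive reduction, benign/malignant classification). The model interfaces, `crop_around` helper, and thresholds are hypothetical placeholders, not the paper's Noudule-Net implementation:

```python
# Hypothetical three-stage nodule pipeline skeleton; interfaces are assumed.
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    center: tuple       # (z, y, x) voxel coordinates of a proposed nodule
    probability: float  # detector confidence

def run_pipeline(ct_volume, detector, fp_reducer, classifier) -> List[dict]:
    results = []
    # Stage 1: propose nodule candidates over the full CT volume
    candidates = detector.detect(ct_volume)
    for cand in candidates:
        patch = ct_volume.crop_around(cand.center)  # assumed helper on the volume object
        # Stage 2: filter out "false nodules"
        if fp_reducer.predict(patch) < 0.5:
            continue
        # Stage 3: benign vs. malignant classification
        malignancy = classifier.predict(patch)
        results.append({"center": cand.center, "malignancy": malignancy})
    return results
```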


Diagnostics ◽  
2020 ◽  
Vol 10 (5) ◽  
pp. 261
Author(s):  
Tae-Young Heo ◽  
Kyoung Min Kim ◽  
Hyun Kyu Min ◽  
Sun Mi Gu ◽  
Jae Hyun Kim ◽  
...  

The use of deep-learning-based artificial intelligence (AI) is emerging in ophthalmology, and AI-mediated differential diagnosis of neovascular age-related macular degeneration (AMD) and dry AMD is a promising methodology for precise treatment strategies and prognosis. Here, we developed deep learning algorithms and predicted diseases using 399 fundus images. Based on feature extraction and classification with fully connected layers, we applied the Visual Geometry Group 16-layer (VGG16) convolutional neural network to classify new images. Image-data augmentation in our model was performed using the Keras ImageDataGenerator, and the leave-one-out procedure was used for model cross-validation. The prediction and validation results obtained with the AI AMD diagnosis model showed relevant performance and suitability, as well as better diagnostic accuracy than manual review by first-year residents. These results suggest the efficacy of this tool for early differential diagnosis of AMD in situations involving shortages of ophthalmology specialists and other medical devices.
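A minimal sketch of the approach outlined above: VGG16 as a frozen feature extractor with new fully connected layers, plus Keras ImageDataGenerator for augmentation. The input size, head width, augmentation ranges, and directory layout are assumptions:

```python
# Hedged sketch: VGG16 transfer learning for fundus-image classification.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # use VGG16 only for feature extraction

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(2, activation="softmax"),  # e.g. neovascular vs. dry AMD
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

datagen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=15, horizontal_flip=True)
# train_gen = datagen.flow_from_directory("fundus_images/", target_size=(224, 224))
# model.fit(train_gen, epochs=10)
```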


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. 9037-9037
Author(s):  
Tao Xu ◽  
Chuoji Huang ◽  
Yaoqi Liu ◽  
Jing Gao ◽  
Huan Chang ◽  
...  

9037 Background: Lung cancer is the most common cancer worldwide. Artificial intelligence (AI) platforms using deep learning algorithms have made remarkable progress in improving the diagnostic accuracy of lung cancer, but AI performance in distinguishing benign from malignant pulmonary nodules still needs improvement. We aimed to validate a Pulmonary Nodules Artificial Intelligence Diagnostic System (PNAIDS) by analyzing computed tomography (CT) imaging data. Methods: This real-world, multicentre, diagnostic study was done in five different-tier hospitals in China. The CT images of patients who were aged over 18 years and had never received previous anti-cancer treatment were retrieved from the participating hospitals. 534 eligible patients with pulmonary nodules of 5-30 mm diameter identified on CT were scheduled for histopathological confirmation. The performance of PNAIDS was compared, by area under the curve (AUC), with that of respiratory specialists and radiologists with expert or competent degrees of expertise, as well as with the Mayo Clinic's model, and differences were evaluated by calculating 95% CIs using the Z-test method. 11 selected participants were tested for circulating genetically abnormal cells (CACs) before surgery, as suggested by their doctors. Results: 611 lung CT images from 534 individuals were used to test PNAIDS. The diagnostic accuracy, valued by AUC, in identifying benign and malignant pulmonary nodules was 0.765 (95% CI [0.729 - 0.798]). The diagnostic sensitivity of PNAIDS was 0.630 (0.579 - 0.679) and the specificity was 0.753 (0.693 - 0.807). PNAIDS achieved diagnostic accuracy similar to that of the expert respiratory specialists (AUC difference: 0.0036 [-0.0426 - 0.0497]; p = 0.8801) and superior accuracy when compared with the Mayo Clinic's model (0.120 [0.0649 - 0.176], p < 0.0001), expert radiologists (0.0620 [0.0124 - 0.112], p = 0.0142) and competent radiologists (0.0751 [0.0248 - 0.125], p = 0.0034). 11 selected participants were assessed as negative by the AI but positive by the respiratory specialists; 8 of them were malignant on histopathological diagnosis and had more than 3 CACs detected in their blood. Conclusions: PNAIDS achieved high diagnostic accuracy in the differential diagnosis between benign and malignant pulmonary nodules, similar to that of expert respiratory specialists and superior to that of the Mayo Clinic's model and the radiologists. CACs may be able to assist CT-based AI in improving its effectiveness, but more data are needed to prove this. Clinical trial information: ChiCTR1900026233.
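A hedged sketch of comparing two readers' AUCs with a Z-test, in the spirit of the comparison described above. It uses the Hanley & McNeil (1982) standard-error formula and treats the two AUC estimates as independent, a simplifying assumption; the study's exact CI/Z-test procedure is not specified in the abstract:

```python
# Illustrative AUC difference Z-test (independence of readers assumed).
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_se(auc, n_pos, n_neg):
    # Hanley & McNeil standard error of an AUC estimate
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    return np.sqrt((auc * (1 - auc)
                    + (n_pos - 1) * (q1 - auc ** 2)
                    + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg))

def compare_aucs(y_true, scores_a, scores_b):
    y_true = np.asarray(y_true)
    n_pos, n_neg = int(y_true.sum()), int((1 - y_true).sum())
    auc_a = roc_auc_score(y_true, scores_a)
    auc_b = roc_auc_score(y_true, scores_b)
    se = np.sqrt(auc_se(auc_a, n_pos, n_neg) ** 2 + auc_se(auc_b, n_pos, n_neg) ** 2)
    z = (auc_a - auc_b) / se
    return auc_a - auc_b, z  # convert z to a p-value with a normal CDF if needed
```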


Author(s):  
Shuai Wang ◽  
Bo Kang ◽  
Jinlu Ma ◽  
Xianjun Zeng ◽  
Mingming Xiao ◽  
...  

Abstract Objective The outbreak of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has caused more than 26 million cases of coronavirus disease (COVID-19) in the world so far. To control the spread of the disease, screening large numbers of suspected cases for appropriate quarantine and treatment is a priority. Pathogenic laboratory testing is typically the gold standard, but it bears the burden of significant false negativity, adding to the urgent need for alternative diagnostic methods to combat the disease. Based on COVID-19 radiographic changes in CT images, this study hypothesized that artificial intelligence methods might be able to extract specific graphical features of COVID-19 and provide a clinical diagnosis ahead of the pathogenic test, thus saving critical time for disease control. Methods We collected 1065 CT images of pathogen-confirmed COVID-19 cases along with those of patients previously diagnosed with typical viral pneumonia. We modified the inception transfer-learning model to establish the algorithm, followed by internal and external validation. Results The internal validation achieved a total accuracy of 89.5% with a specificity of 0.88 and a sensitivity of 0.87. The external testing dataset showed a total accuracy of 79.3% with a specificity of 0.83 and a sensitivity of 0.67. In addition, among 54 COVID-19 images for which the first two nucleic acid test results were negative, 46 were predicted as COVID-19 positive by the algorithm, an accuracy of 85.2%. Conclusion These results demonstrate the proof-of-principle for using artificial intelligence to extract radiological features for timely and accurate COVID-19 diagnosis. Key Points • The study evaluated the diagnostic performance of a deep learning algorithm using CT images to screen for COVID-19 during the influenza season. • As a screening method, our model achieved a relatively high sensitivity on internal and external CT image datasets. • The model was used to distinguish between COVID-19 and other typical viral pneumonia, both of which have quite similar radiologic characteristics.
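A minimal sketch of an Inception-based transfer-learning classifier for COVID-19 versus typical viral pneumonia on 2D CT images. The backbone choice (stock InceptionV3), head layers, and input shape are assumptions; they stand in for, and do not reproduce, the authors' modified inception model:

```python
# Hedged sketch: InceptionV3 transfer learning for CT-based screening.
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # transfer learning: reuse ImageNet features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. typical viral pneumonia
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```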


2020 ◽  
Author(s):  
Pedro Silva ◽  
Eduardo Luz ◽  
Guilherme Silva ◽  
Gladston Moreira ◽  
Rodrigo Silva ◽  
...  

Abstract Early detection and diagnosis are critical factors in controlling the spread of COVID-19. A number of deep learning-based methodologies have recently been proposed for COVID-19 screening in CT scans as a tool to automate and assist with diagnosis. To achieve these goals, in this work we propose a slice voting-based approach extending the EfficientNet family of deep artificial neural networks. We also design a specific data augmentation process and transfer learning for this task. Moreover, a cross-dataset study is performed on the two largest datasets to date. The proposed method presents results comparable to the state-of-the-art methods and the highest accuracy to date on both datasets (87.60% for the COVID-CT dataset and 98.99% for the SARS-CoV-2 CT-scan dataset). The cross-dataset analysis showed that the generalization power of deep learning models is far from acceptable for the task, since accuracy drops from 87.68% to 56.16% in the best evaluation scenario. These results highlight that methods aiming at COVID-19 detection in CT images must improve significantly to be considered a clinical option, and that larger and more diverse datasets are needed to evaluate the methods in a realistic scenario.
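A sketch of the slice-voting idea described above: an EfficientNet backbone scores each CT slice and the exam-level label is decided by a majority vote. The backbone variant (efficientnet_b0), input size, and voting threshold are assumptions, not the paper's configuration:

```python
# Hedged sketch: per-slice EfficientNet predictions aggregated by majority vote.
import torch
import torchvision.models as tvm

model = tvm.efficientnet_b0(weights="DEFAULT")
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 2)
model.eval()

def predict_exam(slices: torch.Tensor) -> int:
    """slices: (N, 3, 224, 224) tensor holding the CT slices of one exam."""
    with torch.no_grad():
        logits = model(slices)              # per-slice logits
        slice_votes = logits.argmax(dim=1)  # 0 = non-COVID, 1 = COVID per slice
    # Majority vote across slices decides the exam-level prediction
    return int(slice_votes.float().mean() > 0.5)
```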


2021 ◽  
Author(s):  
Dongchul Cha ◽  
Chongwon Pae ◽  
Se A Lee ◽  
Gina Na ◽  
Young Kyun Hur ◽  
...  

BACKGROUND Deep learning (DL)–based artificial intelligence may have different diagnostic characteristics than human experts in medical diagnosis. As a data-driven knowledge system, heterogeneous population incidence in the clinical world is considered to cause more bias to DL than clinicians. Conversely, by experiencing limited numbers of cases, human experts may exhibit large interindividual variability. Thus, understanding how the 2 groups classify given data differently is an essential step for the cooperative usage of DL in clinical application. OBJECTIVE This study aimed to evaluate and compare the differential effects of clinical experience in otoendoscopic image diagnosis in both computers and physicians exemplified by the class imbalance problem and guide clinicians when utilizing decision support systems. METHODS We used digital otoendoscopic images of patients who visited the outpatient clinic in the Department of Otorhinolaryngology at Severance Hospital, Seoul, South Korea, from January 2013 to June 2019, for a total of 22,707 otoendoscopic images. We excluded similar images, and 7500 otoendoscopic images were selected for labeling. We built a DL-based image classification model to classify the given image into 6 disease categories. Two test sets of 300 images were populated: balanced and imbalanced test sets. We included 14 clinicians (otolaryngologists and nonotolaryngology specialists including general practitioners) and 13 DL-based models. We used accuracy (overall and per-class) and kappa statistics to compare the results of individual physicians and the ML models. RESULTS Our ML models had consistently high accuracies (balanced test set: mean 77.14%, SD 1.83%; imbalanced test set: mean 82.03%, SD 3.06%), equivalent to those of otolaryngologists (balanced: mean 71.17%, SD 3.37%; imbalanced: mean 72.84%, SD 6.41%) and far better than those of nonotolaryngologists (balanced: mean 45.63%, SD 7.89%; imbalanced: mean 44.08%, SD 15.83%). However, ML models suffered from class imbalance problems (balanced test set: mean 77.14%, SD 1.83%; imbalanced test set: mean 82.03%, SD 3.06%). This was mitigated by data augmentation, particularly for low incidence classes, but rare disease classes still had low per-class accuracies. Human physicians, despite being less affected by prevalence, showed high interphysician variability (ML models: kappa=0.83, SD 0.02; otolaryngologists: kappa=0.60, SD 0.07). CONCLUSIONS Even though ML models deliver excellent performance in classifying ear disease, physicians and ML models have their own strengths. ML models have consistent and high accuracy while considering only the given image and show bias toward prevalence, whereas human physicians have varying performance but do not show bias toward prevalence and may also consider extra information that is not images. To deliver the best patient care in the shortage of otolaryngologists, our ML model can serve a cooperative role for clinicians with diverse expertise, as long as it is kept in mind that models consider only images and could be biased toward prevalent diseases even after data augmentation.
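A minimal sketch of the agreement metrics reported above: overall accuracy, per-class accuracy, and Cohen's kappa between a rater (clinician or ML model) and the reference labels. The six-class setup mirrors the abstract; label encoding and helper names are illustrative:

```python
# Accuracy, per-class accuracy, and Cohen's kappa for a 6-class diagnosis task.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def evaluate(y_true, y_pred, n_classes=6):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = float(np.mean(y_true == y_pred))
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    # Per-class accuracy (recall): correct predictions divided by class support
    per_class_acc = cm.diagonal() / cm.sum(axis=1).clip(min=1)
    kappa = cohen_kappa_score(y_true, y_pred)
    return acc, per_class_acc, kappa
```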


2020 ◽  
Author(s):  
Dario Caramelli ◽  
Jaroslaw Granda ◽  
Dario Cambié ◽  
Hessam Mehr ◽  
Alon Henson ◽  
...  

We present an artificial intelligence, built to autonomously explore chemical reactions in the laboratory using deep learning. The reactions are performed automatically and analysed online, and the data are processed using a convolutional neural network (CNN) trained on a small reaction dataset to assess the reactivity of reaction mixtures. The network can be used to predict the reactivity of an unknown dataset, meaning that the system is able to abstract the reactivity assignment regardless of the identity of the starting materials. The system was set up with 15 inputs that were combined in 1018 reactions, the analysis of which led to the discovery of a 'multi-step, single-substrate' cascade reaction and a new mode of reactivity for methylene isocyanides. p-Toluenesulfonylmethyl isocyanide (TosMIC) in the presence of an activator reacts, consuming six equivalents of itself, to yield a trimeric product in high (unoptimized) yield (47%) with formation of five new C-C bonds involving sp-sp² and sp-sp³ carbon centres. A cheminformatics analysis reveals that this transformation is both highly unpredictable and able to generate an increase in complexity comparable to a one-pot multicomponent reaction.
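An illustrative sketch of a small CNN reactivity classifier of the kind described above. The input representation (a 1D analytical trace such as a spectrum) and all layer sizes are assumptions; the original network architecture and training data are not reproduced here:

```python
# Hypothetical 1D CNN that maps an analytical trace to reactive/unreactive.
import torch.nn as nn

reactivity_cnn = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
    nn.LazyLinear(64), nn.ReLU(),   # infers input width from the trace length
    nn.Linear(64, 2),               # reactive vs. unreactive mixture
)
```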


2020 ◽  
Author(s):  
Hüseyin Yaşar ◽  
Murat Ceylan

Abstract The Covid-19 virus outbreak that emerged in China at the end of 2019 has had a huge and devastating effect worldwide. In patients with severe symptoms of the disease, pneumonia develops due to the Covid-19 virus, causing intense involvement and damage in the lungs. Although the disease emerged only a short time ago, many literature studies have already been carried out in which these effects on the lungs were revealed with the help of lung CT imaging. In this study, a total of 25 lung CT images (15 from Covid-19 patients and 10 from normal subjects) was expanded to 250 images using three data augmentation methods (contrast change, brightness change, and noise addition), and these images were subjected to automatic classification. Within the scope of the study, experiments were carried out for each of the following cases: using the lung CT images (gray-level and RGB) directly, using the images obtained by applying Local Binary Pattern (LBP) to these images (gray-level and RGB), and using the images obtained by combining the two (gray-level and RGB). A 23-layer Convolutional Neural Network (CNN) architecture was developed and used for the classification. Leave-one-group-out cross-validation was used to test the proposed system. In this context, the best AUC and EER values were obtained for the combination of original (RGB) and LBP-applied (RGB) images, at 0.9811 and 0.0445, respectively. It was observed that applying LBP to the images before using them as CNN input increased sensitivity but decreased specificity. The highest sensitivity, 0.9947, was obtained when using the LBP-applied (RGB) images. The highest specificity and accuracy were obtained with the gray-level lung CT images, at 0.9120 and 95.32%, respectively. The results of the study indicate that analyzing lung CT images with deep learning methods for diagnosing Covid-19 will speed up diagnosis and significantly reduce the burden on healthcare workers.
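A sketch of producing the LBP-transformed inputs mentioned above using scikit-image. The LBP parameters (P=8, R=1, 'uniform' method) are assumptions, as the abstract does not state the exact settings:

```python
# Hedged sketch: Local Binary Pattern channel for a gray-level CT slice.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_image(gray_ct: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    """Map a gray-level CT slice to its Local Binary Pattern image."""
    lbp = local_binary_pattern(gray_ct, P, R, method="uniform")
    # Rescale to [0, 1] so it can be stacked with the original image as CNN input
    return lbp / lbp.max() if lbp.max() > 0 else lbp

# combined = np.stack([gray_ct, lbp_image(gray_ct)], axis=-1)  # original + LBP channels
```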


2020 ◽  
Author(s):  
Syed Usama Khalid Bukhari ◽  
Ubeer Mehtab ◽  
Syed Shahzad Hussain ◽  
Asmara Syed ◽  
Syed Umar Armaghan ◽  
...  

Introduction: Prostatic malignancy is a major cause of morbidity and mortality among men around the globe. More than a million new cases of prostatic cancer are diagnosed annually. The incidence of prostatic malignancy is rising, and it is expected that more than two million new cases of prostatic carcinoma will be diagnosed in 2040. The application of machine learning to assist histopathologists could be a very valuable adjunct tool for the histological diagnosis of prostatic malignant tumors. Aim & Objectives: To evaluate the effectiveness of artificial intelligence for the histopathological diagnosis of prostatic carcinoma by analyzing digitized pathology slides. Materials & Methods: A total of eight hundred and two (802) images were obtained from anonymised slides stained with hematoxylin and eosin, comprising 337 images of prostatic adenocarcinoma and 465 images of nodular hyperplasia of the prostate. Eighty percent (80%) of the digital images were used for training and 20% for testing. Three ResNet architectures (ResNet-18, ResNet-34, and ResNet-50) were employed for the analysis of these images. Results: The evaluation of the digital images by ResNet-18, ResNet-34, and ResNet-50 revealed diagnostic accuracies of 97.1%, 98%, and 99.5%, respectively. Discussion: The application of artificial intelligence is considered a very useful tool that may improve patient care by improving diagnostic accuracy and reducing cost. In radiology, the application of deep learning to interpret radiological images has revealed excellent results. In the present study, the analysis of pathology images by convolutional neural network architectures revealed diagnostic accuracies of 97.1%, 98%, and 99.5% with ResNet-18, ResNet-34, and ResNet-50, respectively. The findings of the present study are in accordance with other published series carried out to determine the accuracy of machine learning for the diagnosis of cancers of the lung, breast, and prostate. The application of deep learning for the histological diagnosis of malignant tumors could be quite helpful in improving patient care. Conclusion: The findings of the present study suggest that an intelligent vision system is possibly a worthwhile tool for the histopathological evaluation of prostatic tissue to differentiate between benign and malignant disorders.
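A minimal sketch of fine-tuning a ResNet on the two histology classes with an 80/20 split, assuming the images sit in class-named folders. The directory path, transforms, and hyperparameters are illustrative assumptions rather than the study's settings, and only ResNet-18 is shown:

```python
# Hedged sketch: ResNet-18 fine-tuning for adenocarcinoma vs. nodular hyperplasia.
import torch
from torch.utils.data import random_split, DataLoader
from torchvision import datasets, transforms, models

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("prostate_slides/", transform=tfm)  # assumed folder layout
n_train = int(0.8 * len(data))                                  # 80/20 split
train_set, test_set = random_split(data, [n_train, len(data) - n_train])

model = models.resnet18(weights="DEFAULT")
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # two diagnostic classes

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()
for images, labels in DataLoader(train_set, batch_size=16, shuffle=True):
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```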

