Toward understanding COVID-19 pneumonia: a deep-learning-based approach for severity analysis and monitoring the disease

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mohammadreza Zandehshahvar ◽  
Marly van Assen ◽  
Hossein Maleki ◽  
Yashar Kiarashi ◽  
Carlo N. De Cecco ◽  
...  

Abstract: We report a new approach using artificial intelligence (AI) to study and classify the severity of COVID-19 using 1208 chest X-rays (CXRs) of 396 COVID-19 patients obtained over the course of the disease at Emory Healthcare affiliated hospitals (Atlanta, GA, USA). Using a two-stage transfer learning technique to train a convolutional neural network (CNN), we show that the algorithm can classify four classes of disease severity (normal, mild, moderate, and severe) with an average area under the curve (AUC) of 0.93. In addition, we show that the outputs of different layers of the CNN under dominant filters provide valuable insight into the subtle patterns in the CXRs, which can improve the accuracy of a radiologist's reading. Finally, we show that our approach can be used to study disease progression in a single patient and its influencing factors. The results suggest that our technique can form the foundation of a more concrete clinical model to predict the evolution of COVID-19 severity and the efficacy of different treatments for each patient, using CXRs and clinical data from the early stages of the disease. This use of AI to assess severity, and possibly to predict future stages of the disease early on, will be essential in dealing with upcoming waves of COVID-19 and in optimizing resource allocation and treatment.
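The headline metric above is a class-averaged AUC over the four severity classes. A minimal numpy sketch of that metric, computing one-vs-rest AUC via the rank-sum (Mann-Whitney U) identity and averaging over classes (function names are illustrative, not from the paper, and ties are not averaged in this sketch):

```python
import numpy as np

def binary_auc(scores, labels):
    """One-vs-rest AUC via the rank-sum (Mann-Whitney U) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    pos_rank_sum = ranks[labels == 1].sum()
    return (pos_rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def macro_ovr_auc(probs, y):
    """Average the one-vs-rest AUC over the severity classes
    (0=normal, 1=mild, 2=moderate, 3=severe)."""
    return float(np.mean([binary_auc(probs[:, k], (y == k).astype(int))
                          for k in range(probs.shape[1])]))
```

A perfectly separating classifier (e.g. `macro_ovr_auc(np.eye(4), np.array([0, 1, 2, 3]))`) attains 1.0 under this metric.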


Diagnostics ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 530
Author(s):  
Christian Salvatore ◽  
Matteo Interlenghi ◽  
Caterina B. Monti ◽  
Davide Ippolito ◽  
Davide Capra ◽  
...  

We assessed the role of artificial intelligence applied to chest X-rays (CXRs) in supporting the diagnosis of COVID-19. We trained and cross-validated a model with an ensemble of 10 convolutional neural networks with CXRs of 98 COVID-19 patients, 88 community-acquired pneumonia (CAP) patients, and 98 subjects without either COVID-19 or CAP, collected in two Italian hospitals. The system was tested on two independent cohorts: 148 patients (COVID-19, CAP, or negative) collected by one of the two hospitals (independent testing I) and 820 COVID-19 patients collected by a multicenter study (independent testing II). On the training and cross-validation dataset, sensitivity, specificity, and area under the curve (AUC) were 0.91, 0.87, and 0.93 for COVID-19 versus negative subjects, and 0.85, 0.82, and 0.94 for COVID-19 versus CAP. On independent testing I, sensitivity, specificity, and AUC were 0.98, 0.88, and 0.98 for COVID-19 versus negative subjects, and 0.97, 0.96, and 0.98 for COVID-19 versus CAP. On independent testing II, the system correctly diagnosed 652 COVID-19 patients versus negative subjects (0.80 sensitivity) and correctly differentiated 674 COVID-19 versus CAP patients (0.82 sensitivity). This system appears promising for the diagnosis and differential diagnosis of COVID-19, showing its potential as a second-opinion tool under conditions of variable prevalence of different types of infectious pneumonia.
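The ensemble decision described above can be sketched as soft voting over the member networks' class probabilities. A minimal numpy illustration; the averaging rule and names below are assumptions, since the abstract does not specify how the 10 CNN outputs are combined:

```python
import numpy as np

CLASSES = ("COVID-19", "CAP", "negative")

def ensemble_predict(member_probs):
    """Soft voting: average the per-network class probabilities over the
    ensemble axis, then pick the highest-scoring class for each CXR."""
    avg = np.mean(member_probs, axis=0)          # (n_images, n_classes)
    return avg, [CLASSES[i] for i in avg.argmax(axis=1)]
```

Averaging probabilities (rather than hard votes) preserves each member's confidence, which matters when members disagree on borderline CXRs.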


2021 ◽  
pp. 232020682110056
Author(s):  
Kaan Orhan ◽  
Gokhan Yazici ◽  
Mehmet Eray Kolsuz ◽  
Nihan Kafa ◽  
Ibrahim Sevki Bayrakdar ◽  
...  

Aim: The present study aimed to assess the segmentation success of an artificial intelligence (AI) system based on the deep convolutional neural network (D-CNN) method for the segmentation of masseter muscles on ultrasonography (USG) images. Materials and Methods: This retrospective study was carried out using the radiology archive of the Department of Oral and Maxillofacial Radiology of the Faculty of Dentistry, Ankara University. A total of 195 anonymized USG images were used. The deep learning process was performed using the U-net, Pyramid Scene Parsing Network (PSPNet), and Feature Pyramid Network (FPN) architectures. Muscle thickness was assessed on USG by manual segmentation and measurement using the USG software. The neural network model (CranioCatch, Eskisehir, Turkey) was then used to detect the muscles, followed by automatic measurement. Accuracy, ROC area under the curve (AUC), and precision-recall curve (PRC) AUC were calculated on the test dataset to compare a human observer with the AI model. Manual segmentation and measurements were compared statistically with the AI (P < .05). The Mann-Whitney U test was used to analyze whether there was a statistically significant difference between the predicted and actual values. Results: The AI models detected and segmented all test muscle data for FPN and U-net, while two muscles were missed by PSPNet (false negatives). Accuracies of FPN, PSPNet, and U-net were 0.985, 0.947, and 0.969, respectively; ROC AUCs were 0.977, 0.934, and 0.969, respectively. The D-CNN measurements of the muscles were similar to the manual measurements, with no significant difference between the two methods in the three groups (P > .05).
Conclusion: The proposed AI approach to USG image analysis appears promising for automatic masseter muscle segmentation and thickness measurement. It can help surgeons, radiologists, and other professionals such as physical therapists evaluate muscle thickness accurately and save time in diagnosis.
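Once a network has produced a binary masseter mask, the thickness readout itself is straightforward post-processing. A hypothetical sketch; the column-wise rule and the pixel-spacing parameter below are assumptions for illustration, not taken from the study:

```python
import numpy as np

def muscle_thickness_mm(mask, pixel_spacing_mm):
    """Thickness of the segmented muscle per image column (count of
    foreground pixels), reported as the maximum thickness in mm."""
    col_counts = np.asarray(mask, dtype=bool).sum(axis=0)
    return float(col_counts.max() * pixel_spacing_mm)
```

Comparing this automatic readout against the manual USG caliper measurement is what the Mann-Whitney U test in the study would then assess.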


2021 ◽  
Author(s):  
Andrea Chatrian ◽  
Richard T. Colling ◽  
Lisa Browning ◽  
Nasullah Khalid Alham ◽  
Korsuk Sirinukunwattana ◽  
...  

Abstract: The use of immunohistochemistry (IHC) in the reporting of prostate biopsies is an important adjunct when the diagnosis is not definite on haematoxylin and eosin (H&E) morphology alone. The process is, however, inherently inefficient, with delays while waiting for pathologist review to make the request and duplicated effort in reviewing a case more than once. In this study, we aimed to capture the workflow implications of IHC requests and to demonstrate a novel artificial intelligence tool that identifies cases in which IHC is required and generates an automated request. We audited the workflow for prostate biopsies to understand the potential implications of automated IHC requesting, and collected prospective cases to train a deep neural network algorithm to detect tissue regions with ambiguous morphology on whole slide images. These ambiguous foci were selected on the basis of the pathologist requesting IHC to aid diagnosis. A gradient boosted trees classifier then made a slide-level prediction from the outputs of the neural network. The algorithm was trained on annotations of 219 IHC-requested and 80 control images, and tested by threefold cross-validation. Validation was conducted on a separate dataset of 222 images. Non-IHC-requested cases were diagnosed in 17.9 min on average, while IHC-requested cases took 33.4 min over multiple reporting sessions. We estimated that automated IHC requesting could save 11 min per case on average by removing duplication of effort. The tool attained 99% accuracy and 0.99 area under the curve (AUC) on the test data. In the validation, the average agreement with pathologists was 0.81, with a mean AUC of 0.80.
We demonstrate proof of principle that an AI tool making automated immunohistochemistry requests could create a significantly leaner workflow and result in pathologist time savings.
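The two-stage design (a tile-level neural network followed by slide-level gradient boosted trees) implies aggregating per-tile scores into slide features. A minimal sketch of one plausible aggregation; the specific features below are illustrative assumptions, as the abstract does not list them:

```python
import numpy as np

def slide_features(tile_scores, thresh=0.5):
    """Summarise per-tile 'ambiguous morphology' scores into a
    slide-level feature vector: mean, max, and the fraction of
    tiles scoring above a threshold (all names are assumptions)."""
    s = np.asarray(tile_scores, dtype=float)
    return np.array([s.mean(), s.max(), (s > thresh).mean()])
```

A gradient boosted trees classifier would then be fitted on such fixed-length vectors, one per slide, to predict whether an IHC request is warranted.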


2018 ◽  
Vol 226 ◽  
pp. 04042
Author(s):  
Marko Petkovic ◽  
Marija Blagojevic ◽  
Vladimir Mladenovic

In this paper, we introduce a new approach to food processing using artificial intelligence. The main focus is the simulation of the production of spreads and chocolate as representative confectionery products. This approach helps to speed up, model, optimize, and predict the parameters of food processing, aiming to increase the quality of the final products. Artificial intelligence is applied in the form of neural networks and decision methods.


Author(s):  
Isabella Castiglioni ◽  
Davide Ippolito ◽  
Matteo Interlenghi ◽  
Caterina Beatrice Monti ◽  
Christian Salvatore ◽  
...  

Abstract
Objectives: We tested artificial intelligence (AI) to support the diagnosis of COVID-19 using chest X-ray (CXR). Diagnostic performance was computed for a system trained on CXRs of Italian subjects from two hospitals in Lombardy, Italy.
Methods: For training and internal testing, we used an ensemble of ten convolutional neural networks (CNNs) with mainly bedside CXRs of 250 COVID-19 and 250 non-COVID-19 subjects from two hospitals. We then tested the system on bedside CXRs of an independent group of 110 patients (74 COVID-19, 36 non-COVID-19) from one of the two hospitals. A retrospective reading was performed by two radiologists in the absence of any clinical information, with the aim of differentiating COVID-19 from non-COVID-19 patients. Real-time polymerase chain reaction served as the reference standard.
Results: At 10-fold cross-validation, our AI model classified COVID-19 and non-COVID-19 patients with 0.78 sensitivity (95% confidence interval [CI] 0.74–0.81), 0.82 specificity (95% CI 0.78–0.85), and 0.89 area under the curve (AUC) (95% CI 0.86–0.91). On the independent dataset, the AI showed 0.80 sensitivity (59/74; 95% CI 0.72–0.86), 0.81 specificity (29/36; 95% CI 0.73–0.87), and 0.81 AUC (95% CI 0.73–0.87). The radiologists' reading obtained 0.63 sensitivity (95% CI 0.52–0.74) and 0.78 specificity (95% CI 0.61–0.90) in one centre, and 0.64 sensitivity (95% CI 0.52–0.74) and 0.86 specificity (95% CI 0.71–0.95) in the other.
Conclusions: This preliminary experience, based on ten CNNs trained on a limited training dataset, shows the potential of AI for COVID-19 diagnosis. The tool is being trained with new CXRs to further increase its performance.
Key points: Artificial intelligence based on convolutional neural networks was preliminarily applied to chest X-rays of patients suspected of COVID-19 infection. CNNs trained on a limited dataset of 250 COVID-19 and 250 non-COVID-19 subjects, tested on an independent dataset of 110 patients suspected of COVID-19 infection, provided a balanced performance with 0.80 sensitivity and 0.81 specificity. Training on larger multi-institutional datasets may further increase performance.
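The independent-set figures follow directly from the reported counts (59/74 COVID-19 patients detected, 29/36 non-COVID-19 correctly excluded). A small check in plain Python:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts reported for the independent dataset of 110 patients.
sens, spec = sens_spec(tp=59, fn=74 - 59, tn=29, fp=36 - 29)
```

Rounding these to two decimals reproduces the 0.80 sensitivity and 0.81 specificity quoted in the abstract.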


2021 ◽  
Vol 11 ◽  
Author(s):  
Shanglong Liu ◽  
Yuejuan Zhang ◽  
Yiheng Ju ◽  
Ying Li ◽  
Xiaoning Kang ◽  
...  

Tumor budding is considered a sign of cancer cell activity and the first step of tumor metastasis. This study aimed to establish an automatic diagnostic platform for rectal cancer budding pathology by training a Faster region-based convolutional neural network (Faster R-CNN) on pathological images of rectal cancer budding. Postoperative pathological section images of 236 patients with rectal cancer from the Affiliated Hospital of Qingdao University, China, taken from January 2015 to January 2017, were used in the analysis. The tumor sites were labeled in the LabelImg software. The images of the learning set were used to train Faster R-CNN and establish an automatic diagnostic platform for tumor budding pathology analysis, and the images of the test set were used to verify the learning outcome. The platform was evaluated through the receiver operating characteristic (ROC) curve. Through training on pathological images of tumor budding, an automatic diagnostic platform for rectal cancer budding pathology was preliminarily established. Precision-recall curves were generated for the nodule category in the training set; the area under the precision-recall curve was 0.7414, indicating that the training of Faster R-CNN was effective. Validation on the validation set yielded an area under the ROC curve of 0.88, indicating that the established artificial intelligence platform performed well at the pathological diagnosis of tumor budding. The established Faster R-CNN deep neural network platform for the pathological diagnosis of rectal cancer tumor budding can help pathologists make more efficient and accurate pathological diagnoses.
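The precision-recall evaluation used above can be sketched as a generic computation over detections ranked by confidence; function and variable names here are illustrative, not taken from the study:

```python
import numpy as np

def precision_recall(scores, is_tp, n_gt):
    """Precision and recall over detections ranked by confidence.
    `is_tp` flags whether each detection matched a ground-truth bud;
    `n_gt` is the total number of ground-truth objects."""
    scores = np.asarray(scores, dtype=float)
    hits = np.asarray(is_tp, dtype=float)[np.argsort(-scores)]
    tp = np.cumsum(hits)           # true positives at each rank cutoff
    fp = np.cumsum(1.0 - hits)     # false positives at each rank cutoff
    return tp / (tp + fp), tp / n_gt
```

Integrating precision over recall for the bud category gives the 0.7414 area reported for the training set.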


Thorax ◽  
2020 ◽  
Vol 75 (4) ◽  
pp. 306-312 ◽  
Author(s):  
David R Baldwin ◽  
Jennifer Gustafson ◽  
Lyndsey Pickup ◽  
Carlos Arteta ◽  
Petr Novotny ◽  
...  

Background: Estimation of the risk of malignancy in pulmonary nodules detected by CT is central in clinical management. The use of artificial intelligence (AI) offers an opportunity to improve risk prediction. Here we compare the performance of an AI algorithm, the lung cancer prediction convolutional neural network (LCP-CNN), with that of the Brock University model, recommended in UK guidelines.
Methods: A dataset of incidentally detected pulmonary nodules measuring 5–15 mm was collected retrospectively from three UK hospitals for use in a validation study. Ground truth diagnosis for each nodule was based on histology (required for any cancer), resolution, stability or (for pulmonary lymph nodes only) expert opinion. There were 1397 nodules in 1187 patients, of which 234 nodules in 229 (19.3%) patients were cancer. Model discrimination and performance statistics at predefined score thresholds were compared between the Brock model and the LCP-CNN.
Results: The area under the curve for the LCP-CNN was 89.6% (95% CI 87.6 to 91.5), compared with 86.8% (95% CI 84.3 to 89.1) for the Brock model (p≤0.005). Using the LCP-CNN, 24.5% of nodules scored below the lowest cancer nodule score, compared with 10.9% using the Brock score. At the predefined thresholds, the LCP-CNN gave one false negative (0.4% of cancers), whereas the Brock model gave six (2.5%), while specificity statistics were similar between the two models.
Conclusion: The LCP-CNN score has better discrimination and allows a larger proportion of benign nodules to be identified without missing cancers than the Brock model. This has the potential to substantially reduce the proportion of surveillance CT scans required and thus save significant resources.
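The "proportion of nodules scored below the lowest cancer score" statistic (24.5% for LCP-CNN versus 10.9% for Brock) can be stated precisely; a minimal sketch with assumed names:

```python
import numpy as np

def rule_out_fraction(benign_scores, cancer_scores):
    """Fraction of benign nodules scoring below the lowest-scoring
    cancer nodule: these could be ruled out of surveillance without
    missing a single cancer."""
    floor = np.min(cancer_scores)
    return float(np.mean(np.asarray(benign_scores) < floor))
```

A larger value of this fraction is what translates into fewer surveillance CT scans for benign nodules.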


Leonardo ◽  
2019 ◽  
Vol 52 (4) ◽  
pp. 357-363 ◽  
Author(s):  
Weili Shi

Terra Mars presents artistic renderings of Mars with visual reference to our very own planet Earth. The author trained an artificial neural network with topographical data and satellite imagery of Earth so that it can learn the relation between them. The author then applied the trained model to topographical data of Mars to generate images that resemble satellite imagery of Earth. This project suggests a new approach to creative applications of artificial intelligence—using its capability of remapping to broaden the domain of artistic imagination.

