Faculty Opinions recommendation of Using artificial intelligence to read chest radiographs for tuberculosis detection: A multi-site evaluation of the diagnostic accuracy of three deep learning systems.

Author(s):  
Anthony Harries ◽  
Kudakwashe C Takarinda
2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Zhi Zhen Qin ◽  
Melissa S. Sander ◽  
Bishwa Rai ◽  
Collins N. Titahong ◽  
Santat Sudrungrot ◽  
...  

Abstract Deep learning (DL) neural networks have only recently been employed to interpret chest radiography (CXR) to screen and triage people for pulmonary tuberculosis (TB). No published studies have compared multiple DL systems and populations. We conducted a retrospective evaluation of three DL systems (CAD4TB, Lunit INSIGHT, and qXR) for detecting TB-associated abnormalities in chest radiographs from outpatients in Nepal and Cameroon. All 1196 individuals received an Xpert MTB/RIF assay and a CXR read by two groups of radiologists and the DL systems. Xpert was used as the reference standard. The areas under the curve of the three systems were similar: Lunit (0.94, 95% CI: 0.93–0.96), qXR (0.94, 95% CI: 0.92–0.97) and CAD4TB (0.92, 95% CI: 0.90–0.95). When matching the sensitivity of the radiologists, the specificities of the DL systems were significantly higher, with one exception. Using DL systems to read CXRs could reduce the number of Xpert MTB/RIF tests needed by 66% while maintaining sensitivity at 95% or better. Using a universal cutoff score resulted in different performance at each site, highlighting the need to select scores based on the population screened. These DL systems should be considered by TB programs where human resources are constrained and automated technology is available.
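The "matched sensitivity" comparison described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the study's code: the scores and labels are synthetic stand-ins, and the helper names are our own.

```python
# Illustrative sketch (not the study's code): pick a per-site cutoff for a
# CAD abnormality score so that sensitivity matches a target, e.g. the
# radiologists' sensitivity, then compare specificity at that cutoff.

def sensitivity_specificity(scores, labels, threshold):
    """Scores >= threshold count as 'abnormal'; labels: 1 = Xpert-positive."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def cutoff_for_sensitivity(scores, labels, target_sensitivity):
    """Highest threshold whose sensitivity still meets the target."""
    for t in sorted(set(scores), reverse=True):
        sensitivity, _ = sensitivity_specificity(scores, labels, t)
        if sensitivity >= target_sensitivity:
            return t
    return min(scores)

# Synthetic example: three TB-positive and three TB-negative scans.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 1, 0, 0, 0]
cutoff = cutoff_for_sensitivity(scores, labels, 0.95)
```

Because the score distribution differs by population, this cutoff selection would be repeated per site, which is exactly why the abstract warns against a universal cutoff.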


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. 9037-9037
Author(s):  
Tao Xu ◽  
Chuoji Huang ◽  
Yaoqi Liu ◽  
Jing Gao ◽  
Huan Chang ◽  
...  

9037 Background: Lung cancer is the most common cancer worldwide. Artificial intelligence (AI) platforms using deep learning algorithms have made remarkable progress in improving the diagnostic accuracy of lung cancer, but AI diagnostic performance in distinguishing benign from malignant pulmonary nodules still needs improvement. We aimed to validate a Pulmonary Nodules Artificial Intelligence Diagnostic System (PNAIDS) by analyzing computed tomography (CT) imaging data. Methods: This real-world, multicentre diagnostic study was done in five hospitals of different tiers in China. CT images of patients who were aged over 18 years and had never received anti-cancer treatment were retrieved from the participating hospitals. 534 eligible patients with pulmonary nodules of 5–30 mm diameter identified by CT were scheduled for histopathological confirmation. The performance of PNAIDS was compared, by area under the curve (AUC), with that of respiratory specialists and radiologists of expert or competent levels of expertise, as well as with Mayo Clinic's model; differences were evaluated by calculating 95% CIs using the Z-test method. At their doctors' suggestion, 11 selected participants were tested for circulating genetically abnormal cells (CACs) before surgery. Results: 611 lung CT images from 534 individuals were used to test PNAIDS. The diagnostic accuracy, valued by AUC, in identifying benign and malignant pulmonary nodules was 0.765 (95% CI: 0.729–0.798). The diagnostic sensitivity of PNAIDS was 0.630 (0.579–0.679) and specificity was 0.753 (0.693–0.807). PNAIDS achieved diagnostic accuracy similar to that of the expert respiratory specialists (AUC difference: 0.0036 [-0.0426 to 0.0497]; p = 0.8801) and superior to that of Mayo Clinic's model (0.120 [0.0649–0.176], p < 0.0001), the expert radiologists (0.0620 [0.0124–0.112], p = 0.0142) and the competent radiologists (0.0751 [0.0248–0.125], p = 0.0034).
The 11 selected participants were rated negative by the AI but positive by the respiratory specialists. Eight of them were malignant on histopathological diagnosis, with more than 3 CACs detected in their blood. Conclusions: PNAIDS achieved high diagnostic accuracy in the differential diagnosis of benign and malignant pulmonary nodules, similar to that of expert respiratory specialists and superior to that of Mayo Clinic's model and the radiologists. CACs may be able to assist CT-based AI in improving its effectiveness, but more data are needed to confirm this. Clinical trial information: ChiCTR1900026233.
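The AUC estimates and Z-test comparisons reported above can be illustrated with a simplified sketch. All numbers and helper names here are our own; a paired comparison on the same patients would normally call for a DeLong-style covariance term, so this unpaired version only conveys the idea.

```python
import math

# Simplified sketch (synthetic data, our own helpers): estimating AUC via the
# Mann-Whitney rank statistic, its Hanley-McNeil standard error, and a
# two-sided Z-test for the difference of two independent AUCs.

def auc(scores, labels):
    """Probability that a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auc_se(a, n_pos, n_neg):
    """Hanley-McNeil standard error of an AUC estimate."""
    q1 = a / (2 - a)
    q2 = 2 * a * a / (1 + a)
    var = (a * (1 - a) + (n_pos - 1) * (q1 - a * a)
           + (n_neg - 1) * (q2 - a * a)) / (n_pos * n_neg)
    return math.sqrt(var)

def z_test_auc(a1, se1, a2, se2):
    """Z statistic and two-sided p-value for an AUC difference."""
    z = (a1 - a2) / math.sqrt(se1 ** 2 + se2 ** 2)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

For example, `z_test_auc(0.94, 0.02, 0.92, 0.02)` yields a small Z and a large p, i.e. no significant difference, mirroring the "similar AUC" comparisons in both abstracts above.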


Information ◽  
2019 ◽  
Vol 10 (2) ◽  
pp. 51 ◽  
Author(s):  
Melanie Mitchell

Today’s AI systems sorely lack the essence of human intelligence: understanding the situations we experience and grasping their meaning. The lack of humanlike understanding in machines is underscored by recent studies demonstrating the lack of robustness of state-of-the-art deep-learning systems. Deeper networks and larger datasets alone are not likely to unlock AI’s “barrier of meaning”; instead, the field will need to embrace its original roots as an interdisciplinary science of intelligence.


2017 ◽  
Vol 40 ◽  
Author(s):  
Pierre-Yves Oudeyer

Abstract Autonomous lifelong development and learning are fundamental capabilities of humans, differentiating them from current deep learning systems. However, other branches of artificial intelligence have designed crucial ingredients towards autonomous learning: curiosity and intrinsic motivation, social learning and natural interaction with peers, and embodiment. These mechanisms guide exploration and autonomous choice of goals, and integrating them with deep learning opens stimulating perspectives.
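The curiosity ingredient can be made concrete with a minimal count-based novelty bonus. This is a toy formulation of our own, not one of the specific intrinsic-motivation models from this literature: the agent's intrinsic reward for a state decays as the state becomes familiar, steering exploration toward what is still unvisited.

```python
from collections import Counter
import math

# Toy count-based intrinsic reward (our illustration, not a model from the
# article above): a novelty bonus of 1/sqrt(visit count) shrinks as a state
# becomes familiar, so under-visited states stay attractive to the agent.

class CuriosityBonus:
    def __init__(self):
        self.visits = Counter()

    def reward(self, state):
        self.visits[state] += 1
        return 1.0 / math.sqrt(self.visits[state])
```

Added to the extrinsic task reward, such a bonus gives an agent a reason to choose its own exploration goals even when the environment provides no feedback.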


Author(s):  
Angelica Martinez Ochoa

This paper explores how the categorization of images and the searching methods in the Adobe Stock database are culturally situated practices; they are a form of politics, filled with questions about who gets to decide what images mean and what kinds of social and political work those representations perform. Understanding the politics behind artificial intelligence, machine learning, and deep learning systems matters now more than ever, as Adobe is already using these technologies across all their products.


Object identification is one of the most essential fields in the development of machine vision, which must be both efficient and accurate. Machine learning and artificial intelligence are both at their peak in today's technology world, and working with them can drive further development; the field has increasingly replaced human effort. With the advent of deep learning techniques, the precision of object identification has increased dramatically. This project aims to implement object identification for a traffic analysis system in real time, using deep learning algorithms, with high accuracy. Objects such as humans, traffic signs, etc. are differentiated and identified. The dataset is designed around specific objects that are recognized by the camera, with results shown within seconds. The project is based purely on deep learning approaches, including YOLO object detection and convolutional neural networks (CNNs). The resulting system is fast and accurate and can therefore be deployed for smart automation at a global scale.
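A YOLO-style detector emits many overlapping candidate boxes per object, which are filtered by non-maximum suppression (NMS). The sketch below is our illustration of that filtering step, not the project's code; boxes and scores are invented.

```python
# Minimal sketch of the box-filtering step used by YOLO-style detectors
# (our illustration): intersection-over-union (IoU) plus greedy non-maximum
# suppression keeps only the highest-scoring box among heavy overlaps.

def iou(a, b):
    """Boxes as (x1, y1, x2, y2) with x2 > x1 and y2 > y1."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Return indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

Real-time pipelines run this per class (pedestrian, traffic sign, etc.) after the network's forward pass, which is why the final output shows a single clean box per detected object.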


Author(s):  
Alex J. DeGrave ◽  
Joseph D. Janizek ◽  
Su-In Lee

Abstract Artificial intelligence (AI) researchers and radiologists have recently reported AI systems that accurately detect COVID-19 in chest radiographs. However, the robustness of these systems remains unclear. Using state-of-the-art techniques in explainable AI, we demonstrate that recent deep learning systems to detect COVID-19 from chest radiographs rely on confounding factors rather than medical pathology, creating an alarming situation in which the systems appear accurate, but fail when tested in new hospitals. We observe that the approach to obtain training data for these AI systems introduces a nearly ideal scenario for AI to learn these spurious “shortcuts.” Because this approach to data collection has also been used to obtain training data for detection of COVID-19 in computed tomography scans and for medical imaging tasks related to other diseases, our study reveals a far-reaching problem in medical imaging AI. In addition, we show that evaluation of a model on external data is insufficient to ensure AI systems rely on medically relevant pathology, since the undesired “shortcuts” learned by AI systems may not impair performance in new hospitals. These findings demonstrate that explainable AI should be seen as a prerequisite to clinical deployment of ML healthcare models.
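One simple way to expose such shortcuts is occlusion sensitivity: mask each image region and measure how much the model's score drops. This toy sketch is our illustration (the paper uses state-of-the-art explainable-AI techniques); the `shortcut_model` deliberately attends only to one corner, standing in for a model that keys on a dataset artifact such as a laterality marker rather than lung pathology.

```python
# Toy occlusion-sensitivity sketch (our illustration, not the paper's method):
# patches whose occlusion most reduces the model's score are the regions the
# model relies on. If that region is a dataset artifact rather than pathology,
# the model has learned a spurious "shortcut".

def occlusion_map(image, model, patch=2):
    """image: 2D list of floats; model: callable image -> score."""
    h, w = len(image), len(image[0])
    base = model(image)
    heat = [[0.0] * w for _ in range(h)]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = [row[:] for row in image]
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    occluded[yy][xx] = 0.0  # zero out this patch
            drop = base - model(occluded)
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    heat[yy][xx] = drop
    return heat

def shortcut_model(image):
    """Toy 'classifier' that only looks at the top-left 2x2 corner."""
    return sum(image[y][x] for y in range(2) for x in range(2)) / 4.0
```

Applied to a radiograph, a heat map concentrated outside the lungs would be exactly the red flag the authors describe, and would be invisible to accuracy metrics alone.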

