Diagnostic effect of artificial intelligence solution for referable thoracic abnormalities on chest radiography: a multicenter respiratory outpatient diagnostic cohort study

Author(s):  
Kwang Nam Jin ◽  
Eun Young Kim ◽  
Young Jae Kim ◽  
Gi Pyo Lee ◽  
Hyungjin Kim ◽  
...  

Abstract Objectives We aimed to evaluate a commercial artificial intelligence (AI) solution on a multicenter cohort of chest radiographs and to compare physicians' ability to detect and localize referable thoracic abnormalities with and without AI assistance. Methods In this retrospective diagnostic cohort study, we investigated 6,006 consecutive patients who underwent both chest radiography and CT. We evaluated a commercially available AI solution intended to facilitate the detection of three chest abnormalities (nodules/masses, consolidation, and pneumothorax) against a reference standard to measure its diagnostic performance. Moreover, twelve physicians, including thoracic radiologists, board-certified radiologists, radiology residents, and pulmonologists, assessed a dataset of 230 randomly sampled chest radiographic images. The images were reviewed twice per physician, with and without AI, with a 4-week washout period. We measured the impact of AI assistance on each observer's AUC, sensitivity, specificity, and area under the alternative free-response ROC curve (AUAFROC). Results In the entire set (n = 6,006), the AI solution showed average sensitivity, specificity, and AUC of 0.885, 0.723, and 0.867, respectively. In the test dataset (n = 230), the average AUC and AUAFROC across observers significantly increased with AI assistance (from 0.861 to 0.886; p = 0.003 and from 0.797 to 0.822; p = 0.003, respectively). Conclusions The diagnostic performance of the AI solution was found to be acceptable for images from respiratory outpatient clinics. The diagnostic performance of physicians marginally improved with the use of the AI solution. Further evaluation of AI assistance for chest radiographs using a prospective design is required to prove the efficacy of AI assistance. Key Points • AI assistance for chest radiographs marginally improved physicians’ performance in detecting and localizing referable thoracic abnormalities on chest radiographs. • The detection or localization of referable thoracic abnormalities by pulmonologists and radiology residents improved with the use of AI assistance.
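As a rough illustration of the metrics reported above, the sketch below computes AUC plus sensitivity and specificity at a fixed operating point from binary reference labels and continuous abnormality scores. The arrays, the 0.5 threshold, and the use of scikit-learn are illustrative assumptions rather than details from the study; the AUAFROC in particular requires a dedicated localization analysis that is not shown here.

```python
# Minimal sketch, assuming binary reference labels and per-image AI scores.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # reference standard (hypothetical)
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])   # AI abnormality scores (hypothetical)

auc = roc_auc_score(y_true, y_score)

y_pred = (y_score >= 0.5).astype(int)                           # illustrative operating point
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f}, sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```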

2021 ◽  
pp. 084653712110495
Author(s):  
Tong Wu ◽  
Wyanne Law ◽  
Nayaar Islam ◽  
Charlotte J. Yong-Hing ◽  
Supriya Kulkarni ◽  
...  

Purpose: To gauge the level of interest in breast imaging (BI) and determine factors impacting trainees’ decisions to pursue this subspecialty. Methods: Canadian radiology residents and medical students were surveyed from November 2020 to February 2021. Training level, actual vs preferred timing of breast rotations, fellowship choices, perceptions of BI, and how artificial intelligence (AI) will impact BI were collected. Chi-square tests, Fisher’s exact tests, and univariate logistic regression were performed to determine the impact of trainees’ perceptions on interest in pursuing BI/women’s imaging (WI) fellowships. Results: 157 responses from 80 radiology residents and 77 medical students were collected. The top 3 fellowship subspecialties desired by residents were BI/WI (36%), abdominal imaging (35%), and interventional radiology (25%). Twenty-five percent of the medical students were unsure due to lack of exposure. The most common reason that trainees found BI unappealing was repetitiveness (20%), which was associated with lack of interest in BI/WI fellowships (OR = 3.9, 95% CI: 1.6-9.5, P = .002). The most common reason residents found BI appealing was procedures (59%), which was associated with interest in BI/WI fellowships (OR = 3.2, 95% CI: 1.2-8.6, P = .02). Forty percent of residents reported that an earlier start of their first breast rotation (PGY1-2) would affect their fellowship choice. Conclusion: This study assessed the current level of Canadian trainees’ interest in BI and identified factors that influenced their decisions to pursue BI. Solutions for increasing interest include earlier exposure to breast radiology and addressing inadequacies in residency training.
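To make the reported association measures concrete, here is a minimal sketch of a univariate logistic regression yielding an odds ratio with a 95% CI, of the kind used to relate a perception (e.g., finding BI repetitive) to lack of interest in a BI/WI fellowship. The data and variable names are fabricated for illustration; they are not the survey responses.

```python
# Minimal sketch, assuming binary predictor and binary outcome vectors.
import numpy as np
import statsmodels.api as sm

finds_repetitive = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0])   # hypothetical predictor
not_interested   = np.array([1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0])   # hypothetical outcome

X = sm.add_constant(finds_repetitive)
model = sm.Logit(not_interested, X).fit(disp=0)

odds_ratio = np.exp(model.params[1])
ci_low, ci_high = np.exp(model.conf_int()[1])
print(f"OR={odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p={model.pvalues[1]:.3f}")
```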


BMC Cancer ◽  
2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Daiju Ueda ◽  
Akira Yamamoto ◽  
Akitoshi Shimazaki ◽  
Shannon Leigh Walston ◽  
Toshimasa Matsumoto ◽  
...  

Abstract Background We investigated the performance improvement of physicians with varying levels of chest radiology experience when using commercially available artificial intelligence (AI)-based computer-assisted detection (CAD) software to detect lung cancer nodules on chest radiographs from multiple vendors. Methods Chest radiographs and their corresponding chest CT were retrospectively collected from one institution between July 2017 and June 2018. Two radiologist authors annotated pathologically proven lung cancer nodules on the chest radiographs while referencing CT. Eighteen readers (nine general physicians and nine radiologists) from nine institutions interpreted the chest radiographs. The readers interpreted the radiographs alone and then reinterpreted them referencing the CAD output. Suspected nodules were enclosed with a bounding box. These bounding boxes were judged correct if there was significant overlap with the ground truth, specifically, if the intersection over union was 0.3 or higher. The sensitivity, specificity, accuracy, PPV, and NPV of the readers’ assessments were calculated. Results In total, 312 chest radiographs were collected as a test dataset, including 59 malignant images (59 lung cancer nodules) and 253 normal images. The model provided a modest boost to readers’ sensitivity, particularly helping general physicians. The performance of general physicians improved from 0.47 to 0.60 for sensitivity, from 0.96 to 0.97 for specificity, from 0.87 to 0.90 for accuracy, from 0.75 to 0.82 for PPV, and from 0.89 to 0.91 for NPV, while the performance of radiologists improved from 0.51 to 0.60 for sensitivity, from 0.96 to 0.96 for specificity, from 0.87 to 0.90 for accuracy, from 0.76 to 0.80 for PPV, and from 0.89 to 0.91 for NPV. With the CAD, the overall improvement ratios for sensitivity, specificity, accuracy, PPV, and NPV were 1.22 (1.14–1.30), 1.00 (1.00–1.01), 1.03 (1.02–1.04), 1.07 (1.03–1.11), and 1.02 (1.01–1.03), respectively. Conclusion The AI-based CAD was able to improve the ability of physicians to detect lung cancer nodules in chest radiographs. The use of a CAD model can indicate regions physicians may have overlooked during their initial assessment.
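The 0.3 intersection-over-union criterion mentioned above can be made concrete with a short sketch. The (x1, y1, x2, y2) box format and the example coordinates are assumptions for illustration; the study's own matching implementation is not described beyond the threshold.

```python
# Minimal sketch of the IoU-based matching rule, assuming (x1, y1, x2, y2) boxes.
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

reader_box = (100, 120, 180, 200)   # hypothetical reader annotation
truth_box  = (110, 130, 190, 210)   # hypothetical ground-truth nodule box
print("correct" if iou(reader_box, truth_box) >= 0.3 else "missed")
```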


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Ilker Ozsahin ◽  
Boran Sekeroglu ◽  
Musa Sani Musa ◽  
Mubarak Taiwo Mustapha ◽  
Dilber Uzun Ozsahin

The COVID-19 diagnostic approach is mainly divided into two broad categories: a laboratory-based approach and a chest radiography approach. The last few months have witnessed a rapid increase in the number of studies using artificial intelligence (AI) techniques to diagnose COVID-19 with chest computed tomography (CT). In this study, we review AI-based approaches to the diagnosis of COVID-19 from chest CT. We searched ArXiv, MedRxiv, and Google Scholar using the terms “deep learning”, “neural networks”, “COVID-19”, and “chest CT”. At the time of writing (August 24, 2020), nearly 100 such studies had been published, of which 30 were selected for this review. We categorized the studies by classification task: COVID-19/normal, COVID-19/non-COVID-19, COVID-19/non-COVID-19 pneumonia, and severity. The reported sensitivity, specificity, precision, accuracy, area under the curve, and F1 score reached values as high as 100%, 100%, 99.62%, 99.87%, 100%, and 99.5%, respectively. However, the presented results should be compared with caution because the different classification tasks differ in difficulty.
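For readers comparing the metrics listed above, the following sketch shows how sensitivity, specificity, precision, accuracy, and F1 score are all derived from a single binary confusion matrix. The counts are hypothetical and not taken from any of the reviewed studies.

```python
# Minimal sketch, assuming hypothetical confusion-matrix counts.
tp, fp, tn, fn = 90, 5, 95, 10                      # hypothetical counts

sensitivity = tp / (tp + fn)                        # recall / true positive rate
specificity = tn / (tn + fp)                        # true negative rate
precision   = tp / (tp + fp)                        # positive predictive value
accuracy    = (tp + tn) / (tp + fp + tn + fn)
f1          = 2 * precision * sensitivity / (precision + sensitivity)

print(f"sens={sensitivity:.3f} spec={specificity:.3f} prec={precision:.3f} "
      f"acc={accuracy:.3f} F1={f1:.3f}")
```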


Author(s):  
Johannes Rueckel ◽  
Christian Huemmer ◽  
Andreas Fieselmann ◽  
Florin-Cristian Ghesu ◽  
Awais Mansoor ◽  
...  

Abstract Objectives The diagnostic accuracy of artificial intelligence (AI) pneumothorax (PTX) detection in chest radiographs (CXR) is limited by the noisy annotation quality of public training data and by confounding thoracic tubes (TT). We hypothesized that in-image annotations of the dehiscent visceral pleura for algorithm training boost the algorithm’s performance and suppress confounders. Methods Our single-center evaluation cohort of 3062 supine CXRs includes 760 PTX-positive cases with radiological annotations of PTX size and inserted TTs. Three step-by-step improved algorithms (differing in algorithm architecture, training data from public datasets/clinical sites, and in-image annotations included in algorithm training) were characterized by the area under the receiver operating characteristic curve (AUROC) in detailed subgroup analyses and referenced to the well-established “CheXNet” algorithm. Results The performance of established algorithms trained exclusively on publicly available data without in-image annotations is limited to AUROCs of 0.778 and strongly biased towards TTs, which can completely eliminate the algorithm’s discriminative power in individual subgroups. In contrast, our final “algorithm 2”, which was trained on fewer images but additionally with in-image annotations of the dehiscent pleura, achieved an overall AUROC of 0.877 for unilateral PTX detection with a significantly reduced TT-related confounding bias. Conclusions We demonstrated strong limitations of an established PTX-detecting AI algorithm that can be significantly reduced by designing an AI system capable of learning to both classify and localize PTX. Our results are aimed at drawing attention to the necessity of high-quality in-image localization in training data to reduce the risks of unintentionally biasing the training process of pathology-detecting AI algorithms. Key Points • Established pneumothorax-detecting artificial intelligence algorithms trained on public training data are strongly limited and biased by confounding thoracic tubes. • We used high-quality in-image annotated training data to effectively boost algorithm performance and suppress the impact of confounding thoracic tubes. • Based on our results, we hypothesize that even hidden confounders might be effectively addressed by in-image annotations of pathology-related image features.
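The confounding analysis described above boils down to comparing AUROC overall and within subgroups (for example, images with and without thoracic tubes). The sketch below illustrates that pattern with fabricated labels, scores, and a subgroup flag; it is not the study's evaluation code.

```python
# Minimal sketch of overall vs. subgroup AUROC, assuming fabricated data.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true   = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0])   # PTX present / absent (hypothetical)
y_score  = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.85, 0.2, 0.75, 0.55])
has_tube = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=bool)  # hypothetical subgroup flag

print("overall AUROC:", round(roc_auc_score(y_true, y_score), 3))
for name, mask in [("with tube", has_tube), ("without tube", ~has_tube)]:
    print(name, "AUROC:", round(roc_auc_score(y_true[mask], y_score[mask]), 3))
```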


Author(s):  
Sergio Cavalieri ◽  
Marco Spinetta ◽  
Domenico Zagaria ◽  
Marta Franchi ◽  
Giulia Lavazza ◽  
...  

Abstract Objectives To assess changes in working patterns and education experienced by radiology residents in Northwest Italy during the COVID-19 pandemic. Methods An online questionnaire was sent to residents of 9 postgraduate schools in Lombardy and Piedmont, investigating demographics, changes in radiological workload, involvement in COVID-19-related activities, research, distance learning, COVID-19 contacts and infection, changes in training profile, and impact on psychological wellbeing. Descriptive and χ2 statistics were used. Results Among 373 residents invited, 300 (80%) participated. Between March and April 2020, 44% (133/300) of respondents dedicated their full time to radiology; 41% (124/300) engaged in COVID-19-related activities, 73% (90/124) of whom working in COVID-19 wards; 40% (121/300) dedicated > 25% of time to distance learning; and 66% (199/300) were more involved in research activities than before the pandemic. Over half of residents (57%, 171/300) had contacts with COVID-19-positive subjects, 5% (14/300) were infected, and 8% (23/300) lost a loved one due to COVID-19. Only 1% (3/300) of residents stated that, given the implications of this pandemic scenario, they would not have chosen radiology as their specialty, whereas 7% (22/300) would change their subspecialty. The most common concerns were spreading the infection to their loved ones (30%, 91/300), and becoming sick (7%, 21/300). Positive changes were also noted, such as being more willing to cooperate with other colleagues (36%, 109/300). Conclusions The COVID-19 pandemic changed radiology residents’ training programmes, with distance learning, engaging in COVID-19-related activities, and a greater involvement in research becoming part of their everyday practice. Key Points • Of 300 participants, 44% were fully dedicated to radiological activity and 41% devoted time to COVID-19-related activities, 73% of whom to COVID-19 wards. • Distance learning was substantial for 40% of residents, and 66% were involved in research activities more than before the COVID-19 pandemic. • Over half of residents were exposed to COVID-19 contacts and less than one in twenty was infected.


Cancers ◽  
2021 ◽  
Vol 13 (21) ◽  
pp. 5494
Author(s):  
Hemant Goyal ◽  
Syed A. A. Sherazi ◽  
Rupinder Mann ◽  
Zainab Gandhi ◽  
Abhilash Perisetti ◽  
...  

Gastrointestinal cancers are among the leading causes of death worldwide, with over 2.8 million deaths annually. Over the last few decades, advancements in artificial intelligence technologies have led to their application in medicine. The use of artificial intelligence in endoscopic procedures is a significant breakthrough in modern medicine. Currently, the diagnosis of various gastrointestinal cancers relies on the manual interpretation of radiographic images by radiologists and of endoscopic images by endoscopists. This can lead to diagnostic variability, as it requires concentration and clinical experience in the field. Artificial intelligence using machine or deep learning algorithms can provide automatic and accurate image analysis and thus assist in diagnosis. In gastroenterology, the applications of artificial intelligence are vast, ranging from diagnosis, prediction of tumor histology, polyp characterization, and metastatic potential to prognosis and treatment response. It can also provide accurate prediction models to determine the need for intervention with computer-aided diagnosis. The number of research studies on artificial intelligence in gastrointestinal cancer has increased rapidly over the last decade owing to immense interest in the field. This review examines the impact, limitations, and future potential of artificial intelligence in screening, diagnosis, tumor staging, treatment modalities, and prediction models for the prognosis of various gastrointestinal cancers.


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0249399
Author(s):  
TaeWoo Kwon ◽  
Sang Pyo Lee ◽  
Dongmin Kim ◽  
Jinseong Jang ◽  
Myungjae Lee ◽  
...  

Objective The chest X-ray (CXR) is the most readily available and common imaging modality for the assessment of pneumonia. However, detecting pneumonia from chest radiography is a challenging task, even for experienced radiologists. An artificial intelligence (AI) model might help to diagnose pneumonia from CXR more quickly and accurately. We aimed to develop an AI model for pneumonia detection from CXR images and to evaluate its diagnostic performance on an external dataset. Methods To train the pneumonia model, a total of 157,016 CXR images from the National Institutes of Health (NIH) and the Korean National Tuberculosis Association (KNTA) were used (normal vs. pneumonia = 120,722 vs. 36,294). An ensemble of two DenseNet-based neural networks classified each CXR image as pneumonia or not. To test the accuracy of the models, a separate external dataset of pneumonia CXR images (n = 212) from a tertiary university hospital (Gachon University Gil Medical Center, GUGMC, Incheon, South Korea) was used; the diagnosis of pneumonia was based on both the chest CT findings and clinical information, and performance was evaluated using the area under the receiver operating characteristic curve (AUC). Moreover, we tested the change of the AI probability score for pneumonia using follow-up CXR images (7 days after the diagnosis of pneumonia, n = 100). Results At a probability threshold of 0.5 for pneumonia, the two models (models 1 and 4) with different histogram-equalization pre-processing parameters showed the best AUCs of 0.973 and 0.960, respectively. As expected, the ensemble of these two models performed better than either classification model alone, with an AUC of 0.983. Furthermore, the change in the AI probability score for pneumonia on the 7-day follow-up CXR differed significantly between improved and aggravated cases (Δ = -0.06 ± 0.14 vs. 0.06 ± 0.09, for 85 improved cases and 15 aggravated cases, respectively, P = 0.001). Conclusions The ensemble model, which combined two different classification models for pneumonia, achieved an AUC of 0.983 on an external test dataset from a completely different data source. Furthermore, AI probability scores showed significant changes between cases with different clinical prognoses, which suggests the possibility of increased efficiency and performance of CXR reading at diagnosis and follow-up evaluation for pneumonia.
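A minimal sketch of the ensembling idea described above: the pneumonia probability scores of two independently trained classifiers are averaged and the combined score is evaluated with AUC. The arrays and the simple mean are illustrative assumptions; the study's actual DenseNet models and ensembling details are not reproduced here.

```python
# Minimal sketch of probability-score averaging, assuming two sets of hypothetical model outputs.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true    = np.array([1, 0, 1, 1, 0, 0, 1, 0])                           # hypothetical labels
p_model_1 = np.array([0.92, 0.15, 0.70, 0.35, 0.20, 0.45, 0.85, 0.10])   # hypothetical scores
p_model_4 = np.array([0.60, 0.25, 0.65, 0.80, 0.70, 0.35, 0.90, 0.20])   # hypothetical scores

p_ensemble = (p_model_1 + p_model_4) / 2          # simple average of the two probability scores

for name, p in [("model 1", p_model_1), ("model 4", p_model_4), ("ensemble", p_ensemble)]:
    print(name, "AUC:", round(roc_auc_score(y_true, p), 3))
```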


2021 ◽  
Vol 3 (1) ◽  
pp. 56-79
Author(s):  
Oguljan Berdiyeva ◽  
Muhammad Umar Islam ◽  
Mitra Saeedi

The use of traditional systems has declined greatly, and the modernization of accounting and finance processes has brought a great deal of change; these improvements are beneficial to the accounting and finance industry. Adopting artificial intelligence applications, such as expert systems for audit and tax, intelligent agents for customer service, and machine learning for decision making, can bring great benefits by reducing errors and increasing the efficiency of accounting and finance processes. To ensure a transparent and replicable process, we conducted a meta-analysis. The database search covered the years 1989-2020, and 150 research papers were reviewed. As the meta-analysis results show, the majority of studies illustrate a positive impact of AI systems on the accounting and finance process. Key points: • Meta-analysis was applied to assess the impact of artificial intelligence systems on the accounting and finance process, with predominantly positive results. • Implementing artificial intelligence systems in the accounting and finance process can increase the efficiency of the process. • Artificial intelligence technology has been influential in all areas of accounting, especially those concerned with knowledge.


2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Min Liu ◽  
Shimin Wang ◽  
Hu Chen ◽  
Yunsong Liu

Abstract Background Recently, there has been considerable innovation in artificial intelligence (AI) for healthcare. Convolutional neural networks (CNNs) show excellent object detection and classification performance. This study assessed the accuracy of an AI application for the detection of marginal bone loss on periapical radiographs. Methods A Faster region-based convolutional neural network (Faster R-CNN) was trained. Overall, 1670 periapical radiographic images were divided into training (n = 1370), validation (n = 150), and test (n = 150) datasets. The system was evaluated in terms of sensitivity, specificity, the mistaken diagnosis rate, the missed diagnosis rate, and the positive predictive value. Kappa (κ) statistics were compared between the system and dental clinicians. Results The evaluation metrics of the AI system were comparable to those of resident dentists. The agreement between the AI system and the expert was moderate to substantial (κ = 0.547 and 0.568 for bone loss sites and bone loss implants, respectively) for detecting marginal bone loss around dental implants. Conclusions This AI system, based on Faster R-CNN analysis of periapical radiographs, is a highly promising auxiliary diagnostic tool for peri-implant bone loss detection.
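Agreement in this study is summarized with Cohen's kappa; the sketch below shows how such a statistic could be computed from paired binary calls. The labels are fabricated and the use of scikit-learn is an assumption, not the authors' implementation.

```python
# Minimal sketch of Cohen's kappa between two raters, assuming fabricated binary calls.
from sklearn.metrics import cohen_kappa_score

ai_calls     = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # AI: bone loss present / absent (hypothetical)
expert_calls = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]   # expert reading of the same sites (hypothetical)

kappa = cohen_kappa_score(ai_calls, expert_calls)
print(f"kappa = {kappa:.3f}")   # by convention, ~0.41-0.60 is moderate and ~0.61-0.80 substantial agreement
```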


Author(s):  
Daiju Ueda ◽  
Akira Yamamoto ◽  
Shoichi Ehara ◽  
Shinichi Iwata ◽  
Koji Abo ◽  
...  

Abstract Aims We aimed to develop models to detect aortic stenosis (AS) from chest radiographs—one of the most basic imaging tests—with artificial intelligence. Methods and Results We used 10,433 retrospectively collected digital chest radiographs from 5,638 patients to train, validate, and test three deep learning models. Chest radiographs were collected from patients who had also undergone echocardiography at a single institution between July 2016 and May 2019. These were labelled from the corresponding echocardiography assessments as AS-positive or AS-negative. The radiographs were separated on a patient basis into training (8,327 images from 4,512 patients, mean age 65 ± [SD] 15 years), validation (1,041 images from 563 patients, mean age 65 ± 14 years), and test (1,065 images from 563 patients, mean age 65 ± 14 years) datasets. The soft voting-based ensemble of the three developed models had the best overall performance for predicting AS, with an AUC, sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of 0.83 (95% CI 0.77–0.88), 0.78 (0.67–0.86), 0.71 (0.68–0.73), 0.71 (0.68–0.74), 0.18 (0.14–0.23), and 0.97 (0.96–0.98), respectively, in the validation dataset and 0.83 (0.78–0.88), 0.83 (0.74–0.90), 0.69 (0.66–0.72), 0.71 (0.68–0.73), 0.23 (0.19–0.28), and 0.97 (0.96–0.98), respectively, in the test dataset. Conclusion Deep learning models using chest radiographs have the potential to differentiate between radiographs of patients with and without AS. Lay summary We created AI models using deep learning to identify aortic stenosis from chest radiographs. Three AI models were developed and evaluated with 10,433 retrospectively collected radiographs labelled from echocardiography reports. The ensemble AI model could detect aortic stenosis in a test dataset with an AUC of 0.83 (95% CI 0.78–0.88). Since chest radiography is a cost-effective and widely available imaging test, our model can provide an additional resource for the detection of aortic stenosis.
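The patient-level separation mentioned above, in which all radiographs of a patient stay in the same partition to avoid leakage, can be expressed as a grouped split. This sketch uses scikit-learn's GroupShuffleSplit with a fabricated DataFrame; the column names, split fractions, and random seeds are assumptions rather than the authors' code.

```python
# Minimal sketch of a patient-level train/validation/test split, assuming a fabricated table.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.DataFrame({
    "image_path": [f"cxr_{i}.png" for i in range(10)],      # hypothetical file names
    "patient_id": [0, 0, 1, 2, 2, 3, 4, 4, 5, 6],           # hypothetical patient identifiers
    "as_label":   [1, 1, 0, 0, 0, 1, 0, 0, 1, 0],           # hypothetical AS labels
})

# Hold out a fraction of patients (not images) for the test set, then a further
# fraction of the remaining patients for validation, so no patient spans two sets.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
trainval_idx, test_idx = next(gss.split(df, groups=df["patient_id"]))
trainval, test = df.iloc[trainval_idx], df.iloc[test_idx]

gss_val = GroupShuffleSplit(n_splits=1, test_size=0.15, random_state=0)
train_idx, val_idx = next(gss_val.split(trainval, groups=trainval["patient_id"]))
train, val = trainval.iloc[train_idx], trainval.iloc[val_idx]

assert set(train["patient_id"]).isdisjoint(test["patient_id"])   # no patient overlap
```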

