Artificial intelligence-based diagnostic system classifying gastric cancers and ulcers: comparison between the original and newly developed systems

Endoscopy ◽  
2020 ◽  
Vol 52 (12) ◽  
pp. 1077-1083 ◽  
Author(s):  
Ken Namikawa ◽  
Toshiaki Hirasawa ◽  
Kaoru Nakano ◽  
Yohei Ikenoyama ◽  
Mitsuaki Ishioka ◽  
...  

Abstract Background We previously reported for the first time the usefulness of artificial intelligence (AI) systems in detecting gastric cancers. However, the “original convolutional neural network (O-CNN)” employed in the previous study had a relatively low positive predictive value (PPV). Therefore, we aimed to develop an advanced AI-based diagnostic system and evaluate its applicability for the classification of gastric cancers and gastric ulcers. Methods We constructed an “advanced CNN” (A-CNN) by adding a new training dataset (4453 gastric ulcer images from 1172 lesions) to the O-CNN, which had been trained using 13 584 gastric cancer and 373 gastric ulcer images. The diagnostic performance of the A-CNN in terms of classifying gastric cancers and ulcers was retrospectively evaluated using an independent validation dataset (739 images from 100 early gastric cancers and 720 images from 120 gastric ulcers) and compared with that of the O-CNN by estimating the overall classification accuracy. Results The sensitivity, specificity, and PPV of the A-CNN in classifying gastric cancer at the lesion level were 99.0 % (95 % confidence interval [CI] 94.6 %−100 %), 93.3 % (95 %CI 87.3 %−97.1 %), and 92.5 % (95 %CI 85.8 %−96.7 %), respectively, and for classifying gastric ulcers were 93.3 % (95 %CI 87.3 %−97.1 %), 99.0 % (95 %CI 94.6 %−100 %), and 99.1 % (95 %CI 95.2 %−100 %), respectively. At the lesion level, the overall accuracies of the O- and A-CNN for classifying gastric cancers and gastric ulcers were 45.9 % (gastric cancers 100 %, gastric ulcers 0.8 %) and 95.9 % (gastric cancers 99.0 %, gastric ulcers 93.3 %), respectively. Conclusion The newly developed AI-based diagnostic system can effectively classify gastric cancers and gastric ulcers.
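The lesion-level figures above follow directly from a two-class confusion matrix (cancer = positive, ulcer = negative). As a hedged illustration, the sketch below defines the three metrics; the counts are chosen to reproduce the reported percentages (99/100 cancers and 112/120 ulcers classified correctly) and are not the study's data.

```python
# Sensitivity, specificity, and PPV from confusion-matrix counts.
# Counts are illustrative, picked to mirror the A-CNN's reported results.

def binary_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Compute lesion-level classification metrics."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
    }

# 100 early gastric cancers, 120 gastric ulcers:
m = binary_metrics(tp=99, fn=1, tn=112, fp=8)
print({k: round(v, 3) for k, v in m.items()})
# → {'sensitivity': 0.99, 'specificity': 0.933, 'ppv': 0.925}
```

With these counts the formulas recover the abstract's 99.0 % / 93.3 % / 92.5 %, which is why adding ulcer training images (reducing false positives) lifts PPV without hurting sensitivity.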

2019 ◽  
Vol 89 (6) ◽  
pp. AB74
Author(s):  
Ken Namikawa ◽  
Toshiaki Hirasawa ◽  
Yohei Ikenoyama ◽  
Mitsuaki Ishioka ◽  
Atsuko Tamashiro ◽  
...  

Diagnostics ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 530
Author(s):  
Christian Salvatore ◽  
Matteo Interlenghi ◽  
Caterina B. Monti ◽  
Davide Ippolito ◽  
Davide Capra ◽  
...  

We assessed the role of artificial intelligence applied to chest X-rays (CXRs) in supporting the diagnosis of COVID-19. We trained and cross-validated a model with an ensemble of 10 convolutional neural networks with CXRs of 98 COVID-19 patients, 88 community-acquired pneumonia (CAP) patients, and 98 subjects without either COVID-19 or CAP, collected in two Italian hospitals. The system was tested on two independent cohorts, namely, 148 patients (COVID-19, CAP, or negative) collected by one of the two hospitals (independent testing I) and 820 COVID-19 patients collected by a multicenter study (independent testing II). On the training and cross-validation dataset, sensitivity, specificity, and area under the curve (AUC) were 0.91, 0.87, and 0.93 for COVID-19 versus negative subjects, and 0.85, 0.82, and 0.94 for COVID-19 versus CAP. On independent testing I, sensitivity, specificity, and AUC were 0.98, 0.88, and 0.98 for COVID-19 versus negative subjects, and 0.97, 0.96, and 0.98 for COVID-19 versus CAP. On independent testing II, the system correctly diagnosed 652 COVID-19 patients versus negative subjects (0.80 sensitivity) and correctly differentiated 674 COVID-19 versus CAP patients (0.82 sensitivity). This system appears promising for the diagnosis and differential diagnosis of COVID-19, showing its potential as a second-opinion tool under conditions of variable prevalence of different types of infectious pneumonia.
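The abstract does not state how the 10 networks' outputs are fused; probability averaging is one common choice. The sketch below is an assumption-laden illustration of that rule, not the paper's actual fusion method, and the class probabilities are invented.

```python
# Ensemble prediction by averaging per-class probabilities across members,
# then taking the class with the highest mean. Fusion rule is an assumption.

def ensemble_predict(prob_lists, labels=("COVID-19", "CAP", "negative")):
    """Average each class's probability over all ensemble members and
    return (winning label, per-class means)."""
    n_models = len(prob_lists)
    means = [sum(p[i] for p in prob_lists) / n_models
             for i in range(len(labels))]
    best = max(range(len(labels)), key=means.__getitem__)
    return labels[best], means

# Three hypothetical members scoring one CXR:
label, means = ensemble_predict([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.5, 0.1, 0.4],
])
print(label)  # class with the highest averaged probability
```

Averaging tends to smooth out individual networks' errors, which is one reason ensembles are popular for small medical-imaging cohorts like this one.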


2020 ◽  
Author(s):  
Changli Tu ◽  
Guojie Wang ◽  
Cuiyan Tan ◽  
Meizhu Chen ◽  
Zijun Xiang ◽  
...  

Abstract Background Coronavirus disease 2019 (COVID-19) is a worldwide public health pandemic with a high mortality rate among severe cases. The disease is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Early detection of the virus is important to curb progression to severe COVID-19. This study aimed to establish a clinical-nomogram model to predict progression to severe COVID-19 in a timely, efficient manner. Methods This retrospective study included 202 patients with COVID-19 who were admitted to the Fifth Affiliated Hospital of Sun Yat-sen University and Shiyan Taihe Hospital from January 17 to April 30, 2020. The patients were randomly assigned to the training dataset (n = 163, with 43 progressing to severe COVID-19) or the validation dataset (n = 39, with 10 progressing to severe COVID-19) at a ratio of 8:2. The best-subset algorithm was applied to filter for the clinical factors most relevant to disease progression. Based on these factors, a logistic regression model was fit to distinguish severe (including severe and critical cases) from non-severe (including mild and moderate cases) COVID-19. Sensitivity, specificity, and area under the curve (AUC) were calculated using the R software package to evaluate prediction performance. A clinical nomogram was established and its performance assessed with the discrimination curve. Results Risk factors, including demographic data, symptoms, and laboratory and imaging findings, were recorded for the 202 patients. Eight of the 52 variables entered into the selection process were selected via the best-subset algorithm to establish the predictive model; they included gender, age, BMI, CRP, D-dimer, TP, ALB, and involved-lobe. Sensitivity, specificity, and AUC were 0.91, 0.84, and 0.86 for the training dataset, and 0.87, 0.66, and 0.80 for the validation dataset. 
Conclusions We established an efficient and reliable clinical nomogram model which showed that gender, age, and initial indexes including BMI, CRP, D-dimer, involved-lobe, TP, and ALB could predict the risk of progression to severe COVID-19.
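A nomogram is a graphical rendering of a fitted logistic model: each predictor contributes points to a total score that maps to a risk. The sketch below shows that arithmetic with the study's eight predictors; every coefficient, the intercept, and the example patient are placeholders I invented for illustration, not the study's fitted values.

```python
import math

# Hypothetical log-odds weights for the eight selected predictors.
# These are placeholders, NOT the study's fitted coefficients.
COEFS = {
    "male": 0.5, "age": 0.04, "bmi": 0.06, "crp": 0.02,
    "d_dimer": 0.8, "tp": -0.05, "alb": -0.09, "involved_lobe": 0.3,
}
INTERCEPT = -4.0  # also hypothetical

def severe_risk(features: dict) -> float:
    """Predicted probability of progression to severe COVID-19
    under a logistic model: sigmoid(intercept + sum of weight*value)."""
    logit = INTERCEPT + sum(COEFS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-logit))

patient = {"male": 1, "age": 65, "bmi": 27, "crp": 40,
           "d_dimer": 1.2, "tp": 65, "alb": 35, "involved_lobe": 3}
print(round(severe_risk(patient), 3))  # a probability in (0, 1)
```

Note the signs mirror the clinical direction one would expect (higher CRP and D-dimer raise risk; higher TP and ALB lower it), which is also how a reader interprets the axes of the published nomogram.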


2021 ◽  
Author(s):  
Ying-Shi Sun ◽  
Yu-Hong Qu ◽  
Dong Wang ◽  
Yi Li ◽  
Lin Ye ◽  
...  

Abstract Background: Computer-aided diagnosis using deep learning algorithms was initially applied in the field of mammography, but there is no large-scale clinical application. Methods: This study proposed to develop and verify an artificial intelligence model based on mammography. Firstly, retrospectively collected mammograms from six centers were randomized to a training dataset and a validation dataset for establishing the model. Secondly, the model was tested by comparing 12 radiologists' performance with and without it. Finally, prospective multicenter mammograms were diagnosed by radiologists with the model. The detection and diagnostic capabilities were evaluated using the free-response receiver operating characteristic (FROC) curve and ROC curve. Results: The sensitivity of the model for detecting lesions after matching was 0.908 at a false positive rate of 0.25 in unilateral images. The area under the ROC curve (AUC) for distinguishing benign from malignant lesions was 0.855 (95% CI: 0.830, 0.880). The performance of the 12 radiologists with the model was higher than that of the radiologists alone (AUC: 0.852 vs. 0.808, P = 0.005). The mean reading time decreased with the model (from 80.18 s to 62.28 s, P = 0.03). In prospective application, the sensitivity of detection reached 0.887 at a false positive rate of 0.25; the AUC of radiologists with the model was 0.983 (95% CI: 0.978, 0.988), with sensitivity, specificity, PPV, and NPV of 94.36%, 98.07%, 87.76%, and 99.09%, respectively. Conclusions: The artificial intelligence model exhibits high accuracy for detecting and diagnosing breast lesions, improves diagnostic accuracy, and saves time. Trial registration: NCT03708978. Registered 17 April 2018, https://register.clinicaltrials.gov/prs/app/NCT03708978
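The AUC quoted throughout these abstracts has a simple rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch, with invented scores:

```python
# AUC via the Mann-Whitney (rank) formulation: the fraction of
# positive/negative pairs ranked correctly, counting ties as half.

def auc(pos_scores, neg_scores):
    """Probability a random positive outranks a random negative."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores
               for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative malignancy scores (not study data):
print(auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2]))  # 11 of 12 pairs correct
```

This pairwise definition is exactly equivalent to the area under the ROC curve, which is why AUC is threshold-free: it summarizes ranking quality across all operating points at once.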


1976 ◽  
Vol 62 (1) ◽  
pp. 39-46 ◽  
Author(s):  
Antonio Russo ◽  
Giuseppe Grasso ◽  
Giuseppe Sanfilippo ◽  
Giorgio Giannone

A histological examination of samples from 131 chronic gastric ulcers, 9 polyps, and 12 cases of mucosal atrophy, taken by means of an endoscope, showed 3 borderline lesions and 4 early gastric cancers. The histological patterns of these lesions are described, and the difficulty of histological diagnosis in early malignancy is emphasized.


Author(s):  
James P. Howard ◽  
Catherine C. Stowell ◽  
Graham D. Cole ◽  
Kajaluxy Ananthan ◽  
Camelia D. Demetrescu ◽  
...  

Background: Artificial intelligence (AI) for echocardiography requires training and validation to standards expected of humans. We developed an online platform and established the Unity Collaborative to build a dataset of expertise from 17 hospitals for training, validation, and standardization of such techniques. Methods: The training dataset consisted of 2056 individual frames drawn at random from 1265 parasternal long-axis video-loops of patients undergoing clinical echocardiography in 2015 to 2016. Nine experts labeled these images using our online platform. From this, we trained a convolutional neural network to identify keypoints. Subsequently, 13 experts labeled a validation dataset of the end-systolic and end-diastolic frame from 100 new video-loops, twice each. The 26-opinion consensus was used as the reference standard. The primary outcome was precision SD, the SD of the differences between AI measurement and expert consensus. Results: In the validation dataset, the AI’s precision SD for left ventricular internal dimension was 3.5 mm. For context, precision SD of individual expert measurements against the expert consensus was 4.4 mm. Intraclass correlation coefficient between AI and expert consensus was 0.926 (95% CI, 0.904–0.944), compared with 0.817 (0.778–0.954) between individual experts and expert consensus. For interventricular septum thickness, precision SD was 1.8 mm for AI (intraclass correlation coefficient, 0.809; 0.729–0.967), versus 2.0 mm for individuals (intraclass correlation coefficient, 0.641; 0.568–0.716). For posterior wall thickness, precision SD was 1.4 mm for AI (intraclass correlation coefficient, 0.535 [95% CI, 0.379–0.661]), versus 2.2 mm for individuals (0.366 [0.288–0.462]). We present all images and annotations. This highlights challenging cases, including poor image quality and tapered ventricles. Conclusions: Experts at multiple institutions successfully cooperated to build a collaborative AI. 
This performed as well as individual experts. Future echocardiographic AI research should use a consensus of experts as a reference. Our collaborative welcomes new partners who share our commitment to publish all methods, code, annotations, and results openly.
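The primary outcome above, "precision SD," is the standard deviation of the paired differences between the AI's measurement and the expert consensus. A short sketch of that statistic, assuming illustrative millimetre values rather than study data:

```python
import statistics

# Precision SD: sample SD of (AI measurement - expert consensus) pairs.
# The millimetre values below are invented for illustration.

def precision_sd(ai_mm, consensus_mm):
    """SD of AI-minus-consensus differences for paired measurements."""
    diffs = [a - c for a, c in zip(ai_mm, consensus_mm)]
    return statistics.stdev(diffs)

# Four hypothetical left-ventricular internal dimension measurements:
print(round(precision_sd([45.0, 50.2, 38.5, 41.0],
                         [44.0, 51.0, 40.0, 40.5]), 2))
```

Because the SD is taken over differences, a systematic bias (every AI value a fixed amount high) does not inflate it; precision SD isolates scatter, which is why it pairs naturally with the intraclass correlation coefficient reported alongside it.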


2021 ◽  
Author(s):  
Ionut Cosmin Sandric ◽  
Viorel Ilinca ◽  
Radu Irimia ◽  
Zenaida Chitu ◽  
Marta Jurchescu ◽  
...  

Rapid mapping of landslides plays an important role in both the science and emergency-management communities: it helps decision-makers act in quasi-real time and diminish losses. With increasing access to high-resolution satellite and aerial imagery, landslide mapping has also gained spatial accuracy, yielding increasingly accurate maps of landslide locations. In line with the latest developments in unmanned aerial vehicles and artificial intelligence, the current study provides an insight into the process of mapping landslides from full-motion videos by means of artificial intelligence. To achieve this goal, several drone flights were performed over areas located in the Romanian Subcarpathians, using quadcopters (DJI Phantom 4 and DJI Mavic 2 Enterprise) equipped with a 12 MP RGB camera. The flights were planned and executed to obtain an optimal number of pictures and videos, taken from various angles and heights over the study areas. Using structure-from-motion techniques, each dataset was processed and orthorectified. Similarly, each video was processed into a full-motion video with coordinates allocated to each frame. Samples of specific landslide features were collected by hand from the pictures and video frames and used to create the database needed to train a Mask R-CNN model. The samples were divided into two datasets: 80% were used for training and the remaining 20% for validation. The model was trained over 50 epochs and reached an accuracy of approximately 86% on the training dataset and about 82% on the validation dataset. The study is part of an ongoing project, SlideMap 416PED, financed by UEFISCDI, Romania. More details about the project can be found at https://slidemap.geo-spatial.ro.
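The 80/20 split described above can be done with a plain seeded shuffle. A minimal sketch (the shuffle-based strategy and seed are assumptions; the study does not say how the split was randomized):

```python
import random

# Reproducible 80/20 train/validation split of hand-collected samples.

def split_samples(samples, train_frac=0.8, seed=42):
    """Shuffle with a fixed seed, then cut into (training, validation)."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # seeded for reproducibility
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

train, val = split_samples(range(100))
print(len(train), len(val))  # 80 20
```

Fixing the seed matters for studies like this one: it lets the reported training/validation accuracies (86% / 82%) be reproduced on the same partition rather than drifting with each rerun.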


Endoscopy ◽  
2021 ◽  
Author(s):  
Yohei Ikenoyama ◽  
Toshiyuki Yoshio ◽  
Junki Tokura ◽  
Sakiko Naito ◽  
Ken Namikawa ◽  
...  

Abstract Background It is known that an esophagus with multiple Lugol-voiding lesions (LVLs) after iodine staining is high risk for esophageal cancer; however, it is preferable to identify high-risk cases without staining because iodine causes discomfort and prolongs examination times. This study assessed the capability of an artificial intelligence (AI) system to predict multiple LVLs from images that had not been stained with iodine as well as patients at high risk for esophageal cancer. Methods We constructed the AI system by preparing a training set of 6634 images from white-light and narrow-band imaging in 595 patients before they underwent endoscopic examination with iodine staining. Diagnostic performance was evaluated on an independent validation dataset (667 images from 72 patients) and compared with that of 10 experienced endoscopists. Results The sensitivity, specificity, and accuracy of the AI system to predict multiple LVLs were 84.4 %, 70.0 %, and 76.4 %, respectively, compared with 46.9 %, 77.5 %, and 63.9 %, respectively, for the endoscopists. The AI system had significantly higher sensitivity than 9/10 experienced endoscopists. We also identified six endoscopic findings that were significantly more frequent in patients with multiple LVLs; however, the AI system had greater sensitivity than these findings for the prediction of multiple LVLs. Moreover, patients with AI-predicted multiple LVLs had significantly more cancers in the esophagus and head and neck than patients without predicted multiple LVLs. Conclusion The AI system could predict multiple LVLs with high sensitivity from images without iodine staining. The system could enable endoscopists to apply iodine staining more judiciously.


Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
Vitor Mendes Pereira ◽  
Yoni Donner ◽  
Gil Levi ◽  
Nicole Cancelliere ◽  
Erez Wasserman ◽  
...  

Cerebral aneurysms (CAs) may occur in 5-10% of the population. They are often missed because detection requires a very methodical diagnostic approach. We developed an algorithm using artificial intelligence to assist in and supervise the detection of CAs. Methods: We developed an automated algorithm to detect CAs, based on a 3D convolutional neural network modeled as a U-Net. We included all saccular CAs from 2014 to 2016 from a single center. Normal and pathological datasets were prepared and annotated in 3D using an in-house developed platform. To assess accuracy and optimize the model, we evaluated preliminary results on a validation dataset. After the algorithm was trained, a dataset was used to evaluate final CA detection and aneurysm measurements. The accuracy of the algorithm was derived using ROC curves and Pearson correlation tests. Results: We used 528 CTAs with 674 aneurysms at the following locations: ACA (3%), ACA/ACOM (26.1%), ICA/MCA (26.3%), MCA (29.4%), PCA/PCOM (2.3%), basilar (6.6%), vertebral (2.3%), and other (3.7%). Training datasets consisted of 189 CA scans. We plotted ROC curves and achieved an AUC of 0.85 for unruptured and 0.88 for ruptured CAs. We improved model performance by enlarging the training dataset, employing various methods of data augmentation to leverage the data to its fullest. The final model was tested on 528 CTAs using 5-fold cross-validation and an additional set of 2400 normal CTAs. There was a significant improvement over the initial assessment, with an AUC of 0.93 for unruptured and 0.94 for ruptured CAs. The algorithm detected larger aneurysms more accurately, reaching an AUC of 0.97 and 91.5% specificity at 90% sensitivity for aneurysms larger than 7 mm. The algorithm also accurately detected CAs at the following locations: basilar (AUC 0.97) and MCA/ACOM (AUC 0.94). The volume measurement (mm3) by the model, compared to the annotated volume, achieved a Pearson correlation of 99.36. 
Conclusion: The Viz.ai aneurysm algorithm was able to detect and measure ruptured and unruptured CAs in consecutive CTAs. The model has demonstrated that a deep learning AI algorithm can achieve clinically useful levels of accuracy for clinical decision support.
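The "91.5% specificity at 90% sensitivity" operating point quoted above is read off the ROC curve: lower the detection threshold until at least 90% of aneurysms are caught, then measure specificity there. A hedged sketch of that procedure, with invented scores:

```python
# Pick the highest threshold that still reaches the target sensitivity,
# then report specificity at that threshold. Scores are illustrative.

def specificity_at_sensitivity(pos_scores, neg_scores, target_sens=0.9):
    """Specificity at the loosest threshold meeting target sensitivity."""
    # Candidate thresholds are the positive scores, strictest first.
    for thr in sorted(pos_scores, reverse=True):
        sens = sum(p >= thr for p in pos_scores) / len(pos_scores)
        if sens >= target_sens:
            return sum(n < thr for n in neg_scores) / len(neg_scores)
    return 0.0  # even the loosest threshold misses the target

spec = specificity_at_sensitivity(
    pos_scores=[0.95, 0.9, 0.85, 0.8, 0.2],   # aneurysm scans
    neg_scores=[0.85, 0.5, 0.4, 0.3, 0.1],    # normal scans
    target_sens=0.8,
)
print(spec)
```

Reporting a fixed-sensitivity operating point, rather than AUC alone, reflects the clinical priority here: a screening tool must first keep missed aneurysms rare, and only then minimize false alarms.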

