Implementation and Practice of Deep Learning-Based Instance Segmentation Algorithm for Quantification of Hepatic Fibrosis at Whole Slide Level in Sprague-Dawley Rats

2021 ◽  
pp. 019262332110571
Author(s):  
Ji-Hee Hwang ◽  
Hyun-Ji Kim ◽  
Heejin Park ◽  
Byoung-Seok Lee ◽  
Hwa-Young Son ◽  
...  

Rapid advances in artificial intelligence and deep learning technology have prompted more attempts to systematically determine pathological diagnoses from whole slide images (WSIs) in clinical and nonclinical studies. In this study, we applied Mask Region-based Convolutional Neural Network (Mask R-CNN), a deep learning model that performs instance segmentation, to detect hepatic fibrosis induced by N-nitrosodimethylamine (NDMA) in Sprague-Dawley rats. From 51 WSIs, we collected 2011 cropped images with hepatic fibrosis annotations. Training and detection of hepatic fibrosis were performed using TensorFlow 2.1.0 on an NVIDIA 2080 Ti GPU. In the test process using tile images, a model accuracy of 95% was verified. In addition, we validated whether the predictions of the trained model reflect the pathologists' scoring system at the WSI level. The validation compared the model predictions on 18 WSIs at 20× and 10× magnifications with ground-truth annotations and the rankings of board-certified pathologists. Predictions at 20× showed a high correlation with ground truth (R² = 0.9660) and a good correlation with the average fibrosis rank assigned by pathologists (R² = 0.8887). Therefore, the Mask R-CNN algorithm is a useful tool for detecting and quantifying pathological findings in nonclinical studies.
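The WSI-level quantification above reduces to two steps: merging the predicted instance masks into a fibrosis area fraction, and correlating that fraction with ground truth by linear regression (the reported R² values). A minimal sketch with NumPy, using hypothetical mask arrays rather than actual Mask R-CNN output:

```python
import numpy as np

def fibrosis_area_fraction(instance_masks, tissue_mask):
    """Fraction of tissue area covered by predicted fibrosis instances.

    instance_masks: list of boolean arrays, one per detected instance
    tissue_mask:    boolean array marking tissue (non-background) pixels
    """
    if not instance_masks:
        return 0.0
    fibrosis = np.zeros_like(tissue_mask, dtype=bool)
    for m in instance_masks:
        fibrosis |= m  # union of all instance masks (overlaps counted once)
    return (fibrosis & tissue_mask).sum() / tissue_mask.sum()

def r_squared(x, y):
    """Coefficient of determination of a least-squares line fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = (residuals ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```

With per-slide fibrosis fractions from the model as `x` and ground-truth fractions as `y`, `r_squared(x, y)` reproduces the kind of correlation statistic reported above.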

2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii359-iii359
Author(s):  
Lydia Tam ◽  
Edward Lee ◽  
Michelle Han ◽  
Jason Wright ◽  
Leo Chen ◽  
...  

Abstract BACKGROUND Brain tumors are the most common solid malignancies in childhood, many of which develop in the posterior fossa (PF). Manual tumor measurements are frequently required to optimize registration into surgical navigation systems or for surveillance of nonresectable tumors after therapy. With recent advances in artificial intelligence (AI), automated MRI-based tumor segmentation is now feasible without requiring manual measurements. Our goal was to create a deep learning model for automated PF tumor segmentation that can register into navigation systems and provide volume output. METHODS 720 pre-surgical MRI scans from five pediatric centers were divided into training, validation, and testing datasets. The study cohort comprised four PF tumor types: medulloblastoma, diffuse midline glioma, ependymoma, and brainstem or cerebellar pilocytic astrocytoma. Manual segmentation of the tumors by an attending neuroradiologist served as “ground truth” labels for model training and evaluation. We used a 2D U-Net, an encoder-decoder convolutional neural network architecture, with a pre-trained ResNet50 encoder. We assessed tumor segmentation accuracy on a held-out test set using the Dice similarity coefficient (0–1) and compared tumor volume calculation between manual and model-derived segmentations using linear regression. RESULTS Compared to the ground-truth expert human segmentation, the overall Dice score for model accuracy was 0.83 for automatic delineation of the 4 tumor types. CONCLUSIONS In this multi-institutional study, we present a deep learning algorithm that automatically delineates PF tumors and outputs volumetric information. Our results demonstrate applied AI that is clinically applicable, potentially augmenting radiologists, neuro-oncologists, and neurosurgeons for tumor evaluation, surveillance, and surgical planning.
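The Dice similarity coefficient used for evaluation measures the overlap between the model's mask and the expert's mask, from 0 (no overlap) to 1 (perfect agreement). A minimal NumPy implementation:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    Defined as 2*|A ∩ B| / (|A| + |B|); eps guards against
    division by zero when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```

Averaging this score over the held-out test set yields the kind of overall Dice value (0.83) reported above.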


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 8536-8536
Author(s):  
Gouji Toyokawa ◽  
Fahdi Kanavati ◽  
Seiya Momosaki ◽  
Kengo Tateishi ◽  
Hiroaki Takeoka ◽  
...  

8536 Background: Lung cancer is the leading cause of cancer-related death in many countries, and its prognosis remains unsatisfactory. Since treatment approaches differ substantially based on the subtype, such as adenocarcinoma (ADC), squamous cell carcinoma (SCC) and small cell lung cancer (SCLC), an accurate histopathological diagnosis is of great importance. However, if the specimen is solely composed of poorly differentiated cancer cells, distinguishing between histological subtypes can be difficult. The present study developed a deep learning model to classify lung cancer subtypes from whole slide images (WSIs) of transbronchial lung biopsy (TBLB) specimens, in particular with the aim of using this model to evaluate a challenging test set of indeterminate cases. Methods: Our deep learning model consisted of two separately trained components: a convolutional neural network tile classifier and a recurrent neural network tile aggregator for the WSI diagnosis. We used a training set consisting of 638 WSIs of TBLB specimens to train a deep learning model to classify lung cancer subtypes (ADC, SCC and SCLC) and non-neoplastic lesions. The training set consisted of 593 WSIs for which the diagnosis had been determined by pathologists based on the visual inspection of Hematoxylin-Eosin (HE) slides and of 45 WSIs of indeterminate cases (64 ADCs and 19 SCCs). We then evaluated the models using five independent test sets. For each test set, we computed the area under the receiver operating characteristic curve (ROC AUC). Results: We applied the model to an indeterminate test set of WSIs obtained from TBLB specimens that pathologists had not been able to conclusively diagnose by examining the HE-stained specimens alone. Overall, the model achieved ROC AUCs of 0.993 (confidence interval [CI] 0.971-1.0) and 0.996 (0.981-1.0) for ADC and SCC, respectively.
We further evaluated the model using five independent test sets consisting of both TBLB and surgically resected lung specimens (a combined total of 2490 WSIs) and obtained highly promising results, with ROC AUCs ranging from 0.94 to 0.99. Conclusions: In this study, we demonstrated that a deep learning model can be trained to predict lung cancer subtypes in indeterminate TBLB specimens. These results suggest that, if deployed in clinical practice, a deep learning model capable of aiding pathologists in diagnosing indeterminate cases would be highly beneficial, allowing a diagnosis to be reached sooner and reducing the costs of further investigations.
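The slide-level diagnosis comes from aggregating per-tile classifier outputs. The paper trains a recurrent neural network for this aggregation; as a simplified illustration of the tile-to-slide step, the sketch below substitutes a mean over tile softmax probabilities (the class list and array shapes are assumptions for the example, not the paper's implementation):

```python
import numpy as np

# Hypothetical class order for the example
CLASSES = ["ADC", "SCC", "SCLC", "non-neoplastic"]

def aggregate_slide(tile_probs):
    """Slide-level prediction from per-tile class probabilities.

    tile_probs: array of shape (n_tiles, n_classes), each row a
    softmax output from the tile classifier. The paper aggregates
    tiles with a trained RNN; averaging tile probabilities is a
    simpler stand-in used here for illustration only.
    """
    slide_probs = np.asarray(tile_probs).mean(axis=0)
    return CLASSES[int(slide_probs.argmax())], slide_probs
```

Sweeping a decision threshold over the slide-level probability for each class is what produces the ROC curves whose AUCs are reported above.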


2017 ◽  
Vol 107 ◽  
pp. 98-99 ◽  
Author(s):  
Jing Zhang ◽  
Yanlin Song ◽  
Fan Xia ◽  
Chenjing Zhu ◽  
Yingying Zhang ◽  
...  

2019 ◽  
pp. 129-141 ◽  
Author(s):  
Hui Xian Chia

This article examines the use of artificial intelligence (AI), and deep learning specifically, to create financial robo-advisers. These machines have the potential to be perfectly honest fiduciaries, acting in their clients' best interests without the conflicting self-interest or greed of their human counterparts. However, the application of AI technology to create financial robo-advisers is not without risk. This article focuses on the unique risks posed by deep learning technology. One of the main fears regarding deep learning is that it is a “black box”: its decision-making process is opaque and not open to scrutiny even by the people who developed it. This poses a significant challenge to financial regulators, who would not be able to examine the underlying rationale and rules of the robo-adviser to determine its safety for public use. The rise of deep learning has been met with calls for ‘explainability’ of how deep learning agents make their decisions. This paper argues that greater explainability can be achieved by describing the ‘personality’ of deep learning robo-advisers, and further proposes a framework for describing the parameters of the deep learning model using concepts that can be readily understood by people without technical expertise, namely whether the robo-adviser is ‘greedy’, ‘selfish’ or ‘prudent’. Greater understanding will enable regulators and consumers to better judge the safety and suitability of deep learning financial robo-advisers.


2021 ◽  
Author(s):  
Yew Kee Wong

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns through many layers of processing. This paper illustrates some of the deep learning algorithms and methods that can be applied to artificial intelligence analysis, as well as the opportunities offered by their application in various decision-making domains.
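The "many layers of processing" can be made concrete with a minimal forward pass through a stack of fully connected layers. This toy sketch (hand-set weights, no training loop) only illustrates the layered structure, not any particular algorithm from the paper:

```python
import numpy as np

def relu(x):
    """Standard rectified-linear activation."""
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a stack of (weights, bias) layers.

    ReLU is applied between layers; the final layer is left
    linear, as is common for regression-style outputs.
    """
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:  # no activation on the output layer
            x = relu(x)
    return x
```

Training would adjust each `W` and `b` by gradient descent so that the stacked transformations come to encode patterns in the data.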


Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
Yannan Yu ◽  
Soren Christensen ◽  
Yuan Xie ◽  
Enhao Gong ◽  
Maarten G Lansberg ◽  
...  

Objective: Ischemic core prediction from CT perfusion (CTP) remains inaccurate compared with the gold standard, diffusion-weighted imaging (DWI). We evaluated whether a deep learning model trained to predict the DWI lesion from MR perfusion (MRP) could facilitate ischemic core prediction on CTP. Method: Using the multi-center CRISP cohort of acute ischemic stroke patients with CTP before thrombectomy, we included patients with major reperfusion (TICI score ≥2b), adequate image quality, and follow-up MRI at 3-7 days. Perfusion parameters including Tmax, mean transit time, cerebral blood flow (CBF), and cerebral blood volume were reconstructed by RAPID software. Core lab experts outlined the stroke lesion on the follow-up MRI. A model previously trained on a separate group of patients, which used MRP parameters as input and the RAPID ischemic core on DWI as ground truth, served as a starting point. We fine-tuned this model using CTP parameters as input and follow-up MRI as ground truth. Another model was trained from scratch with only CTP data. Five-fold cross-validation was used. Performance of the models was compared with the ischemic core (rCBF ≤30%) from RAPID software for identifying the presence of a large infarct (volume >70 or >100 ml). Results: 94 patients in the CRISP trial met the inclusion criteria (mean age 67±15 years, 52% male, median baseline NIHSS 18, median 90-day mRS 2). Without fine-tuning, the MRI model had an agreement of 73% for infarcts >70 ml and 69% for >100 ml; the MRI model fine-tuned on CT improved the agreement to 77% and 73%; the CT model trained from scratch had agreements of 73% and 71%. All of the deep learning models outperformed the rCBF segmentation from RAPID, which had agreements of 51% and 64%. See Table and figure. Conclusions: It is feasible to apply an MRP-based deep learning model to CT, and fine-tuning with CTP data further improves the predictions.
All deep learning models predict the stroke lesion after major recanalization better than thresholding approaches based on rCBF.
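The agreement statistic reported above is the fraction of patients for whom the model and the follow-up MRI concur on whether the infarct exceeds a volume threshold. A minimal sketch with hypothetical volume lists:

```python
import numpy as np

def large_infarct_agreement(pred_volumes, true_volumes, threshold_ml=70.0):
    """Fraction of patients where prediction and follow-up MRI agree
    on whether the infarct exceeds the volume threshold (70 or 100 ml).
    """
    pred_large = np.asarray(pred_volumes) > threshold_ml
    true_large = np.asarray(true_volumes) > threshold_ml
    return (pred_large == true_large).mean()
```

Computing this at both thresholds for each model (and for the rCBF ≤30% segmentation) yields the paired agreement percentages reported in the results.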


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Sebastian Klein ◽  
Jacob Gildenblat ◽  
Michaele Angelika Ihle ◽  
Sabine Merkelbach-Bruse ◽  
Ka-Won Noh ◽  
...  

Abstract Background Helicobacter pylori, a 2 × 1 μm spiral-shaped bacterium, is the most common risk factor for gastric cancer worldwide. Clinically, patients presenting with symptoms of gastritis routinely undergo gastric biopsies. The subsequent histo-morphological evaluation dictates therapeutic decisions, where antibiotics are used for H. pylori eradication. There is a strong rationale for accelerating the detection of H. pylori on histological specimens using novel technologies such as deep learning. Methods We designed a deep-learning-based decision support algorithm that can be applied to regular whole slide images of gastric biopsies. In detail, we can detect H. pylori on both Giemsa- and regular H&E-stained whole slide images. Results With the help of our decision support algorithm, we show increased sensitivity in a subset of 87 cases that underwent additional PCR and immunohistochemical testing to define a sensitive ground truth of H. pylori presence. For Giemsa-stained sections, the decision support algorithm achieved a sensitivity of 100% compared to 68.4% (microscopic diagnosis), with a tolerable specificity of 66.2% for the decision support algorithm compared to 92.6% (microscopic diagnosis). Conclusion Together, we provide the first evidence of a decision support algorithm serving as a sensitive screening option for H. pylori that can potentially aid pathologists in accurately diagnosing H. pylori presence on gastric biopsies.
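The sensitivity and specificity figures above follow directly from the confusion matrix of algorithm calls against the PCR/IHC-backed ground truth. A minimal sketch with hypothetical paired binary calls:

```python
def sensitivity_specificity(predictions, truths):
    """Sensitivity and specificity from paired binary calls
    (True = H. pylori present).

    Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    """
    tp = sum(p and t for p, t in zip(predictions, truths))
    tn = sum(not p and not t for p, t in zip(predictions, truths))
    fn = sum(not p and t for p, t in zip(predictions, truths))
    fp = sum(p and not t for p, t in zip(predictions, truths))
    return tp / (tp + fn), tn / (tn + fp)
```

The trade-off reported above (100% sensitivity at 66.2% specificity) is typical for a screening tool tuned to miss no positive cases at the cost of more false alarms.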


2020 ◽  
Vol 39 (10) ◽  
pp. 734-741
Author(s):  
Sébastien Guillon ◽  
Frédéric Joncour ◽  
Pierre-Emmanuel Barrallon ◽  
Laurent Castanié

We propose new metrics to measure the performance of a deep learning model applied to seismic interpretation tasks such as fault and horizon extraction. Faults and horizons are thin geologic boundaries (1 pixel thick on the image) for which a small prediction error could lead to inappropriately large variations in common metrics (precision, recall, and intersection over union). Through two examples, we show how classical metrics could fail to indicate the true quality of fault or horizon extraction. Measuring the accuracy of reconstruction of thin objects or boundaries requires introducing a tolerance distance between ground truth and prediction images to manage the uncertainties inherent in their delineation. We therefore adapt our metrics by introducing a tolerance function and illustrate their ability to manage uncertainties in seismic interpretation. We compare classical and new metrics through different examples and demonstrate the robustness of our metrics. Finally, we show on a 3D West African data set how our metrics are used to tune an optimal deep learning model.
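The tolerance function can be illustrated as a relaxed precision/recall in which a boundary pixel counts as correct if a matching pixel lies within a given distance. The sketch below uses a square dilation neighborhood as the tolerance region (the paper's actual tolerance function may differ); note that `np.roll` wraps at image borders, which is acceptable for an illustration away from the edges:

```python
import numpy as np

def tolerant_precision_recall(pred, truth, tol=2):
    """Precision/recall for 1-pixel-thick boundaries with pixel tolerance.

    A predicted pixel is correct if a ground-truth pixel lies within a
    (2*tol+1)^2 neighborhood, and symmetrically for recall.
    """
    def dilate(mask, r):
        # Union of all shifts of the mask within radius r (square window)
        out = np.zeros_like(mask)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
        return out

    pred = pred.astype(bool)
    truth = truth.astype(bool)
    precision = (pred & dilate(truth, tol)).sum() / max(pred.sum(), 1)
    recall = (truth & dilate(pred, tol)).sum() / max(truth.sum(), 1)
    return precision, recall
```

With `tol=0` this reduces to the classical pixel-exact metrics, which, as the article argues, unfairly penalize a horizon drawn one pixel away from the ground truth.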


2021 ◽  
pp. 26-34
Author(s):  
Yuqian Li ◽  
Weiguo Xu

Abstract Architects usually develop ideation and conception by hand-sketching. Sketching is a direct expression of the architect's creativity, but 2D sketches are often vague, intentional and even ambiguous. In research on sketch-based modeling, making the computer recognize the sketches is the most difficult part. With the development of artificial intelligence, especially deep learning technology, Convolutional Neural Networks (CNNs) have shown obvious advantages in feature extraction and matching, and Generative Adversarial Networks (GANs) have made great breakthroughs in architectural generation, making image-to-image translation more and more popular. As building images gradually develop from original sketches, in this research we try to develop a system that translates sketches into images of buildings using the CycleGAN algorithm. The experiment demonstrates that this method can achieve the mapping process from sketches to images, and the results show that the sketches' features can be recognised in the process. Through the learning and training of the sketches' reconstruction, the features of the images are also mapped to the sketches, which strengthens the architectural relationships in the sketch, so that the original sketch can gradually approach the building images, making sketch-based modeling technology achievable.
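CycleGAN's key training signal is the cycle-consistency loss: a sketch mapped to a building image and back should reconstruct the original sketch. A minimal sketch of that loss, with stand-in callables instead of trained generator networks:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle-consistency loss used by CycleGAN: F(G(x)) ≈ x.

    G maps sketches to building images and F maps images back to
    sketches; both are stand-in callables here, not trained generators.
    """
    return np.abs(F(G(x)) - x).mean()
```

During training this term is added (for both cycle directions) to the adversarial losses, which is what forces the features of the images to be mapped back onto the sketches as described above.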

