A deep learning model for gastric diffuse-type adenocarcinoma classification in whole slide images

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Fahdi Kanavati ◽  
Masayuki Tsuneki

Abstract: Gastric diffuse-type adenocarcinoma represents a disproportionately high percentage of gastric cancers occurring in the young, and its relative incidence seems to be on the rise. It usually affects the body of the stomach, and it presents with a shorter duration and a worse prognosis compared with the differentiated (intestinal) type adenocarcinoma. The main difficulty encountered in the differential diagnosis of gastric adenocarcinomas occurs with the diffuse type. As the cancer cells of diffuse-type adenocarcinoma are often single and inconspicuous in a background of desmoplasia and inflammation, they can often be mistaken for a wide variety of non-neoplastic lesions, including gastritis or reactive endothelial cells seen in granulation tissue. In this study we trained deep learning models to classify gastric diffuse-type adenocarcinoma from whole slide images (WSIs). We evaluated the models on five test sets obtained from distinct sources, achieving receiver operating characteristic (ROC) areas under the curve (AUCs) in the range of 0.95–0.99. The highly promising results demonstrate the potential of AI-based computational pathology for aiding pathologists in their diagnostic workflow.
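
For context, a minimal sketch of how slide-level ROC AUC figures such as the 0.95–0.99 range above are typically computed is shown below; the probabilities and labels are hypothetical placeholders, not data from the study.

```python
# Sketch: slide-level ROC AUC evaluation, as typically computed for WSI
# classifiers like the one described above. Scores and labels are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# One probability per WSI (e.g. the most suspicious tile's carcinoma probability)
slide_probs = np.array([0.97, 0.12, 0.85, 0.03, 0.64, 0.08])
slide_labels = np.array([1, 0, 1, 0, 1, 0])  # 1 = diffuse-type adenocarcinoma

auc = roc_auc_score(slide_labels, slide_probs)
fpr, tpr, thresholds = roc_curve(slide_labels, slide_probs)
print(f"ROC AUC: {auc:.3f}")
```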

2022 ◽  
Author(s):  
Fahdi Kanavati ◽  
Shin Ichihara ◽  
Masayuki Tsuneki

The pathological differential diagnosis between breast ductal carcinoma in situ (DCIS) and invasive ductal carcinoma (IDC) is of pivotal importance for determining the optimum cancer treatment(s) and clinical outcomes. Since conventional diagnosis by pathologists using microscopes is limited in terms of human resources, it is necessary to develop new techniques that can rapidly and accurately diagnose large numbers of histopathological specimens. Computational pathology tools that can assist pathologists in detecting and classifying DCIS and IDC from whole slide images (WSIs) would be of great benefit for routine pathological diagnosis. In this paper, we trained deep learning models capable of classifying biopsy and surgical histopathological WSIs into DCIS, IDC, and benign. We evaluated the models on two independent test sets (n=1,382, n=548), achieving receiver operating characteristic (ROC) areas under the curve (AUCs) of up to 0.960 and 0.977 for DCIS and IDC, respectively.
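
The abstract does not state how tile-level outputs are turned into a slide-level DCIS/IDC/benign call; a common choice is max-pooling over tiles, sketched below with hypothetical numbers and an assumed 0.5 decision threshold.

```python
# Sketch: aggregating tile-level softmax outputs into a slide-level call for
# three classes (benign, DCIS, IDC). The paper's exact aggregation rule is not
# given in the abstract; max-pooling over tiles and the 0.5 cut-off are assumptions.
import numpy as np

CLASSES = ["benign", "DCIS", "IDC"]

def slide_prediction(tile_probs: np.ndarray) -> dict:
    """tile_probs: (n_tiles, 3) array of per-tile softmax probabilities."""
    # A slide is as suspicious as its most suspicious tile for each cancer class.
    slide_scores = tile_probs.max(axis=0)
    if slide_scores[1:].max() > 0.5:
        label = CLASSES[int(slide_scores[1:].argmax()) + 1]
    else:
        label = "benign"
    return {"scores": dict(zip(CLASSES, slide_scores.round(3))), "label": label}

# Hypothetical tiles from one biopsy WSI
tiles = np.array([[0.9, 0.05, 0.05],
                  [0.2, 0.75, 0.05],
                  [0.6, 0.30, 0.10]])
print(slide_prediction(tiles))
```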


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 8536-8536
Author(s):  
Gouji Toyokawa ◽  
Fahdi Kanavati ◽  
Seiya Momosaki ◽  
Kengo Tateishi ◽  
Hiroaki Takeoka ◽  
...  

Background: Lung cancer is the leading cause of cancer-related death in many countries, and its prognosis remains unsatisfactory. Since treatment approaches differ substantially based on the subtype, such as adenocarcinoma (ADC), squamous cell carcinoma (SCC) and small cell lung cancer (SCLC), an accurate histopathological diagnosis is of great importance. However, if the specimen is solely composed of poorly differentiated cancer cells, distinguishing between histological subtypes can be difficult. The present study developed a deep learning model to classify lung cancer subtypes from whole slide images (WSIs) of transbronchial lung biopsy (TBLB) specimens, in particular with the aim of using this model to evaluate a challenging test set of indeterminate cases. Methods: Our deep learning model consisted of two separately trained components: a convolutional neural network tile classifier and a recurrent neural network tile aggregator for the WSI diagnosis. We used a training set consisting of 638 WSIs of TBLB specimens to train a deep learning model to classify lung cancer subtypes (ADC, SCC and SCLC) and non-neoplastic lesions. The training set consisted of 593 WSIs for which the diagnosis had been determined by pathologists based on the visual inspection of Hematoxylin-Eosin (HE) slides and of 45 WSIs of indeterminate cases (64 ADCs and 19 SCCs). We then evaluated the models using five independent test sets. For each test set, we computed the receiver operating characteristic (ROC) area under the curve (AUC). Results: We applied the model to an indeterminate test set of WSIs obtained from TBLB specimens that pathologists had not been able to conclusively diagnose by examining the HE-stained specimens alone. Overall, the model achieved ROC AUCs of 0.993 (confidence interval [CI] 0.971-1.0) and 0.996 (0.981-1.0) for ADC and SCC, respectively. We further evaluated the model using five independent test sets consisting of both TBLB and surgically resected lung specimens (a combined total of 2490 WSIs) and obtained highly promising results, with ROC AUCs ranging from 0.94 to 0.99. Conclusions: In this study, we demonstrated that a deep learning model could be trained to predict lung cancer subtypes in indeterminate TBLB specimens. The extremely promising results show that, if deployed in clinical practice, a deep learning model capable of aiding pathologists in diagnosing indeterminate cases would be highly beneficial, as it would allow a diagnosis to be obtained sooner and reduce the costs that would result from further investigations.
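
A minimal PyTorch sketch of the two-stage design described in the Methods (a CNN tile classifier followed by an RNN tile aggregator for the WSI diagnosis) is given below; the backbone, feature dimension, and tile count are assumptions, not details from the study.

```python
# Minimal sketch of a CNN tile classifier plus an RNN aggregator producing one
# WSI-level diagnosis. ResNet-18, the 512-d features, and 224x224 tiles are
# assumed; the abstract does not specify them.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # ADC, SCC, SCLC, non-neoplastic

class TileClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # assumed backbone
        backbone.fc = nn.Identity()                # expose 512-d tile features
        self.backbone = backbone
        self.head = nn.Linear(512, NUM_CLASSES)    # per-tile prediction

    def forward(self, tiles):                      # tiles: (n_tiles, 3, 224, 224)
        feats = self.backbone(tiles)               # (n_tiles, 512)
        return feats, self.head(feats)

class SlideAggregator(nn.Module):
    """RNN that consumes the sequence of tile features for one slide."""
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, NUM_CLASSES)

    def forward(self, tile_feats):                 # (1, n_tiles, 512)
        _, h = self.rnn(tile_feats)
        return self.fc(h[-1])                      # (1, NUM_CLASSES) slide logits

tile_model, slide_model = TileClassifier(), SlideAggregator()
tiles = torch.randn(8, 3, 224, 224)               # 8 hypothetical tiles from one WSI
feats, _ = tile_model(tiles)
slide_logits = slide_model(feats.unsqueeze(0))
print(slide_logits.shape)                          # torch.Size([1, 4])
```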


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262349
Author(s):  
Esraa A. Mohamed ◽  
Essam A. Rashed ◽  
Tarek Gaber ◽  
Omar Karam

Breast cancer is one of the most common diseases among women worldwide and one of the leading causes of death among women. Therefore, early detection is necessary to save lives. Thermography is an effective diagnostic technique that uses infrared imaging for breast cancer detection. In this paper, we propose a fully automatic breast cancer detection system. First, a U-Net network automatically extracts and isolates the breast area from the rest of the body, which would otherwise act as noise for the breast cancer detection model. Second, we propose a two-class deep learning model, trained from scratch, for classifying normal and abnormal breast tissue from thermal images; it also extracts additional features from the dataset, which helps train the network and improves the efficiency of the classification process. The proposed system was evaluated on real data from a benchmark database (DMR-IR) and achieved an accuracy of 99.33%, a sensitivity of 100%, and a specificity of 98.67%. The proposed system is expected to be a helpful tool for physicians in clinical use.
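
A compact sketch of the two-stage pipeline described above (U-Net segmentation of the breast region followed by a two-class classifier on the masked thermal image) is shown below; the layer sizes and the toy classifier are illustrative assumptions rather than the authors' architecture.

```python
# Sketch: a small U-Net-style network isolates the breast region of a thermal
# image, and a separate two-class CNN classifies the masked image as normal or
# abnormal. All layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):                                     # x: (B, 1, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return torch.sigmoid(self.out(d1))                    # breast mask in [0, 1]

class TwoClassCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(conv_block(1, 16), nn.MaxPool2d(4),
                                      conv_block(16, 32), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, 2)                            # normal vs abnormal

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

unet, classifier = TinyUNet(), TwoClassCNN()
thermal = torch.randn(1, 1, 128, 128)                         # placeholder thermal image
masked = thermal * unet(thermal)                              # suppress non-breast pixels
print(classifier(masked).shape)                               # torch.Size([1, 2])
```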


2017 ◽  
Vol 114 (47) ◽  
pp. 12590-12595 ◽  
Author(s):  
Maridel A. Fredericksen ◽  
Yizhe Zhang ◽  
Missy L. Hazen ◽  
Raquel G. Loreto ◽  
Colleen A. Mangold ◽  
...  

Some microbes possess the ability to adaptively manipulate host behavior. To better understand how such microbial parasites control animal behavior, we examine the cell-level interactions between the species-specific fungal parasite Ophiocordyceps unilateralis sensu lato and its carpenter ant host (Camponotus castaneus) at a crucial moment in the parasite’s lifecycle: when the manipulated host fixes itself permanently to a substrate by its mandibles. The fungus is known to secrete tissue-specific metabolites and cause changes in host gene expression as well as atrophy in the mandible muscles of its ant host, but it is unknown how the fungus coordinates these effects to manipulate its host’s behavior. In this study, we combine techniques in serial block-face scanning-electron microscopy and deep-learning–based image segmentation algorithms to visualize the distribution, abundance, and interactions of this fungus inside the body of its manipulated host. Fungal cells were found throughout the host body but not in the brain, implying that behavioral control of the animal body by this microbe occurs peripherally. Additionally, fungal cells invaded host muscle fibers and joined together to form networks that encircled the muscles. These networks may represent a collective foraging behavior of this parasite, which may in turn facilitate host manipulation.


2019 ◽  
Vol 25 (8) ◽  
pp. 1301-1309 ◽  
Author(s):  
Gabriele Campanella ◽  
Matthew G. Hanna ◽  
Luke Geneslaw ◽  
Allen Miraflor ◽  
Vitor Werneck Krauss Silva ◽  
...  

2020 ◽  
Vol 11 (3) ◽  
pp. 72-88
Author(s):  
Nassima Dif ◽  
Zakaria Elberrichi

Deep learning is one of the most commonly used techniques in computer-aided diagnosis systems. Its exploitation for histopathological image analysis is important because of the complex morphology of whole slide images. However, the main limitation of these methods is the restricted number of available medical images, which can lead to overfitting. Many studies have suggested the use of static ensemble learning methods to address this issue. This article proposes a new dynamic ensemble deep learning method. First, it generates a set of models based on a transfer learning strategy from deep neural networks. Then, the relevant subset of models is selected by the particle swarm optimization algorithm and combined by voting or averaging methods. The proposed approach was tested on a histopathological dataset for colorectal cancer classification, based on seven types of CNNs. The method achieved accurate results (94.52%) with the Resnet121 model and the voting strategy, which provides important insights into the efficiency of dynamic ensembling in deep learning.
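
A minimal sketch of the core idea (binary particle swarm optimization selecting the subset of CNN predictions whose majority vote maximizes validation accuracy) is given below; the candidate predictions are random placeholders standing in for the seven transfer-learned CNNs, and the PSO hyperparameters are assumptions.

```python
# Sketch: PSO searches for the subset of pre-trained model predictions whose
# majority vote maximizes validation accuracy. Predictions are placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_MODELS, N_SAMPLES, N_CLASSES = 7, 200, 8
y_val = rng.integers(0, N_CLASSES, N_SAMPLES)
# preds[m, i] = class predicted by model m on validation sample i (placeholder)
preds = np.where(rng.random((N_MODELS, N_SAMPLES)) < 0.7,
                 y_val, rng.integers(0, N_CLASSES, (N_MODELS, N_SAMPLES)))

def vote_accuracy(mask):
    """Majority-vote accuracy of the model subset selected by a boolean mask."""
    if mask.sum() == 0:
        return 0.0
    votes = preds[mask]
    majority = np.array([np.bincount(col, minlength=N_CLASSES).argmax()
                         for col in votes.T])
    return float((majority == y_val).mean())

def to_mask(position):
    """Threshold continuous particle positions through a sigmoid."""
    return 1.0 / (1.0 + np.exp(-position)) > 0.5

n_particles, iters = 10, 30
pos = rng.normal(size=(n_particles, N_MODELS))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([vote_accuracy(to_mask(p)) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    fits = np.array([vote_accuracy(to_mask(p)) for p in pos])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

best = to_mask(gbest)
print("selected models:", np.flatnonzero(best), "val accuracy:", vote_accuracy(best))
```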


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Zohaib Iqbal ◽  
Dan Nguyen ◽  
Michael Albert Thomas ◽  
Steve Jiang

Abstract: Magnetic resonance spectroscopy (MRS) allows for the determination of atomic structures and concentrations of different chemicals in a biochemical sample of interest. MRS is used in vivo clinically to aid in the diagnosis of several pathologies that affect metabolic pathways in the body. Typically, this experiment produces a one-dimensional (1D) 1H spectrum containing several peaks that are well associated with biochemicals, or metabolites. However, since many of these peaks overlap, distinguishing chemicals with similar atomic structures becomes much more challenging. One technique capable of overcoming this issue is the localized correlated spectroscopy (L-COSY) experiment, which acquires a second spectral dimension and spreads overlapping signal across this second dimension. Unfortunately, the acquisition of a two-dimensional (2D) spectroscopy experiment is extremely time consuming. Furthermore, quantitation of a 2D spectrum is more complex. Recently, artificial intelligence has emerged in the field of medicine as a powerful force capable of diagnosing disease, aiding in treatment, and even predicting treatment outcome. In this study, we utilize deep learning to (1) accelerate the L-COSY experiment and (2) quantify L-COSY spectra. All training and testing samples were produced using simulated metabolite spectra for chemicals found in the human body. We demonstrate that our deep learning model greatly outperforms compressed sensing based reconstruction of L-COSY spectra at higher acceleration factors. Specifically, at four-fold acceleration, our method has less than 5% normalized mean squared error, whereas compressed sensing yields 20% normalized mean squared error. We also show that at low SNR (25% noise compared to maximum signal), our deep learning model has less than 8% normalized mean squared error for quantitation of L-COSY spectra. These pilot simulation results appear promising and may help improve the efficiency and accuracy of L-COSY experiments in the future.
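
The normalized mean squared error figures quoted above (e.g., under 5% versus 20% at four-fold acceleration) can be computed as sketched below; the exact NMSE definition used by the authors is an assumption, and the spectra here are synthetic placeholders.

```python
# Sketch: normalized mean squared error (NMSE) between a reference 2D L-COSY
# spectrum and a reconstruction, reported as a percentage. Arrays are placeholders.
import numpy as np

def nmse(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Assumed definition: ||estimate - reference||^2 / ||reference||^2, in percent."""
    err = np.sum(np.abs(estimate - reference) ** 2)
    return 100.0 * err / np.sum(np.abs(reference) ** 2)

rng = np.random.default_rng(1)
truth = rng.random((64, 64))                              # placeholder 2D spectrum
recon_dl = truth + 0.02 * rng.normal(size=truth.shape)    # small residual error
recon_cs = truth + 0.10 * rng.normal(size=truth.shape)    # larger residual error
print(f"deep learning NMSE: {nmse(truth, recon_dl):.1f}%")
print(f"compressed sensing NMSE: {nmse(truth, recon_cs):.1f}%")
```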


2021 ◽  
Author(s):  
Wen-Yu Chuang ◽  
Chi-Chung Chen ◽  
Wei-Hsiang Yu ◽  
Chi-Ju Yeh ◽  
Shang-Hung Chang ◽  
...  

Abstract: Detection of nodal micrometastasis (tumor size: 0.2–2.0 mm) is challenging for pathologists due to the small size of metastatic foci. Since lymph nodes with micrometastasis are counted as positive nodes, detecting micrometastasis is crucial for accurate pathologic staging of colorectal cancer. Previously, deep learning algorithms developed with manually annotated images performed well in identifying micrometastasis of breast cancer in sentinel lymph nodes. However, the process of manual annotation is labor intensive and time consuming. Multiple instance learning was later used to identify metastatic breast cancer without manual annotation, but its performance appears worse in detecting micrometastasis. Here, we developed a deep learning model using whole-slide images of regional lymph nodes of colorectal cancer with only a slide-level label (either a positive or negative slide). The training, validation, and testing sets included 1963, 219, and 1000 slides, respectively. The TAIWANIA 2 supercomputer was used to train a deep learning model to identify metastasis. At the slide level, our algorithm performed well in identifying both macrometastasis (tumor size > 2.0 mm) and micrometastasis, with areas under the receiver operating characteristic curve (AUC) of 0.9993 and 0.9956, respectively. Since most of our slides had more than one lymph node, we then tested the performance of our algorithm on 538 single-lymph-node images randomly cropped from the testing set. At the single-lymph-node level, our algorithm maintained good performance in identifying macrometastasis and micrometastasis, with AUCs of 0.9944 and 0.9476, respectively. Visualization using class activation mapping confirmed that our model identified nodal metastasis based on areas of tumor cells. Our results demonstrate for the first time that micrometastasis can be detected by deep learning on whole-slide images without manual annotation.
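
A minimal sketch of training from slide-level labels alone, in the spirit of the approach described above, is given below: tile scores from a CNN are max-pooled into a slide score and compared against the positive/negative slide label. The backbone and the max-pooling rule are assumptions; the abstract does not detail the exact weak-label scheme.

```python
# Sketch: weakly supervised training with only a slide-level label. The slide
# score is the maximum tile score (a slide is positive if any tile is positive).
# Backbone and tile count are assumptions, not details from the study.
import torch
import torch.nn as nn
from torchvision import models

tile_cnn = models.resnet18(weights=None)
tile_cnn.fc = nn.Linear(512, 1)                      # one metastasis score per tile
optimizer = torch.optim.Adam(tile_cnn.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

def train_step(slide_tiles: torch.Tensor, slide_label: float) -> float:
    """slide_tiles: (n_tiles, 3, 224, 224); slide_label: 1.0 positive, 0.0 negative."""
    tile_logits = tile_cnn(slide_tiles).squeeze(1)   # (n_tiles,)
    slide_logit = tile_logits.max()                  # max-pool tiles into a slide score
    loss = criterion(slide_logit.unsqueeze(0), torch.tensor([slide_label]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One hypothetical positive slide with 16 tiles
print(train_step(torch.randn(16, 3, 224, 224), 1.0))
```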


Mekatronika ◽  
2020 ◽  
Vol 2 (1) ◽  
pp. 68-72
Author(s):  
Abdulaziz Abdo Salman ◽  
Ismail Mohd Khairuddin ◽  
Anwar P.P. Abdul Majeed ◽  
Mohd Azraai Mohd Razman

Diabetes is a global disease that occurs when the pancreas is unable to secrete enough insulin to convert the sugar in the blood into energy. As a result, high blood sugar damages the tiny blood vessels in parts of the body such as the eyes, blocking blood flow in the vessels; this condition is called diabetic retinopathy (DR). The disease may lead to permanent blindness due to the growth of new vessels at the back of the retina, causing it to detach from the eye. In 2016, 387 million people were diagnosed with diabetic retinopathy, the number is growing yearly, and the traditional detection approach cannot keep up. Therefore, the purpose of this paper is to automate the detection of the different DR severity classes (0-4) from fundus images. The method constructs fine-tuned deep learning models based on transfer learning with additional dense layers. The models used here are InceptionV3, VGG16, and ResNet50, combined with a sharpening filter. InceptionV3 achieved the highest accuracy among these models, at 94%.
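
A hedged Keras sketch of the described transfer-learning setup (an ImageNet-pretrained InceptionV3 base with added dense layers for the five DR grades, plus a simple sharpening filter) follows; the dense-layer sizes and the sharpening kernel are illustrative assumptions.

```python
# Sketch: InceptionV3 transfer learning with new dense layers for DR grades 0-4,
# and a basic 3x3 sharpening filter on the fundus image. Layer sizes and the
# kernel are assumptions, not the authors' exact configuration.
import numpy as np
import tensorflow as tf
from scipy.ndimage import convolve

def sharpen(image: np.ndarray) -> np.ndarray:
    """Apply a simple 3x3 sharpening kernel to each channel."""
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    return np.stack([convolve(image[..., c], kernel) for c in range(3)], axis=-1)

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False                       # freeze pretrained features; fine-tune later
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(5, activation="softmax"),  # DR severity grades 0-4
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

fundus = sharpen(np.random.rand(299, 299, 3).astype(np.float32))  # placeholder image
probs = model.predict(fundus[np.newaxis])
print(probs.shape)                           # (1, 5)
```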


2021 ◽  
pp. 019262332110571
Author(s):  
Ji-Hee Hwang ◽  
Hyun-Ji Kim ◽  
Heejin Park ◽  
Byoung-Seok Lee ◽  
Hwa-Young Son ◽  
...  

Exponential development in artificial intelligence and deep learning technology has resulted in more trials that systematically determine pathological diagnoses using whole slide images (WSIs) in clinical and nonclinical studies. In this study, we applied Mask Region-based Convolutional Neural Network (Mask R-CNN), a deep learning model that performs instance segmentation, to detect hepatic fibrosis induced by N-nitrosodimethylamine (NDMA) in Sprague-Dawley rats. From 51 WSIs, we collected 2011 cropped images with hepatic fibrosis annotations. Training and detection of hepatic fibrosis via artificial intelligence methods was performed using TensorFlow 2.1.0, powered by an NVIDIA 2080 Ti GPU. In the test process using tile images, a model accuracy of 95% was verified. In addition, we validated the model to determine whether the predictions of the trained model could reflect the scoring system used by the pathologists at the WSI level. The validation was conducted by comparing the model predictions on 18 WSIs at 20× and 10× magnifications with ground truth annotations and assessments by board-certified pathologists. Predictions at 20× showed a high correlation with the ground truth (R² = 0.9660) and a good correlation with the average fibrosis rank given by the pathologists (R² = 0.8887). Therefore, the Mask R-CNN algorithm is a useful tool for detecting and quantifying pathological findings in nonclinical studies.
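
For illustration, a minimal Mask R-CNN setup for instance segmentation of fibrotic regions is sketched below. The study trained with TensorFlow 2.1.0; this sketch uses the torchvision implementation for brevity, with two classes (background, fibrosis) and placeholder inputs.

```python
# Sketch: instance segmentation with Mask R-CNN, two classes (background, fibrosis).
# Image size, boxes, and masks are placeholders; not the study's TensorFlow setup.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)

# Training mode expects images plus instance annotations (boxes, labels, masks).
images = [torch.rand(3, 512, 512)]
targets = [{
    "boxes": torch.tensor([[100.0, 120.0, 300.0, 340.0]]),  # one fibrosis instance
    "labels": torch.tensor([1]),
    "masks": torch.zeros(1, 512, 512, dtype=torch.uint8),
}]
targets[0]["masks"][0, 120:340, 100:300] = 1                 # placeholder instance mask

model.train()
loss_dict = model(images, targets)            # classification, box, and mask losses
print({k: round(v.item(), 3) for k, v in loss_dict.items()})

model.eval()
with torch.no_grad():
    pred = model(images)[0]                   # boxes, labels, scores, masks
print(pred["masks"].shape)                    # (n_detections, 1, 512, 512)
```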

