Classification of glomerular pathological findings using deep learning and nephrologist–AI collective intelligence approach

Author(s):  
Eiichiro Uchino ◽  
Kanata Suzuki ◽  
Noriaki Sato ◽  
Ryosuke Kojima ◽  
Yoshinori Tamada ◽  
...  

Abstract
Background: Automated classification of glomerular pathological findings is potentially beneficial for establishing an efficient and objective diagnosis in renal pathology. While previous studies have validated artificial intelligence (AI) models for the classification of global sclerosis and glomerular cell proliferation, several other glomerular pathological findings are required for diagnosis, and comprehensive models for classifying these major findings have not yet been reported. Whether cooperation between such AI models and clinicians improves diagnostic performance also remains unknown. Here, we developed AI models to classify glomerular images for the major findings required for pathological diagnosis and investigated whether those models could improve the diagnostic performance of nephrologists.
Methods: We used a dataset of 283 kidney biopsy cases comprising 15,888 glomerular images annotated by a total of 25 nephrologists. AI models for seven pathological findings (global sclerosis, segmental sclerosis, endocapillary proliferation, mesangial matrix accumulation, mesangial cell proliferation, crescent, and basement membrane structural changes) were constructed by deep learning, fine-tuning an InceptionV3 convolutional neural network. Subsequently, we compared agreement with truth labels between the majority decision among nephrologists with and without the AI model as a voter.
Results: Our model for global sclerosis showed high performance (area under the curve: periodic acid-Schiff, 0.986; periodic acid methenamine silver, 0.983); the models for the other findings also showed performance close to that of nephrologists. By adding the AI model output to the majority decision among nephrologists, sensitivity and specificity were significantly improved in 9 of the 14 constructed models compared with nephrologists alone.
Conclusion: Our study provides a proof of concept for the comprehensive classification of multiple glomerular findings by deep learning and suggests its potential effectiveness in improving the diagnostic accuracy of clinicians.
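The collective-intelligence step described above — adding the AI model's output as one extra voter to the nephrologists' majority decision — can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' exact protocol; the `threshold` value and function names are assumptions:

```python
# Illustrative sketch of nephrologist + AI majority voting (not the
# authors' exact protocol): binary labels from clinicians, with the AI
# model's thresholded probability optionally added as one more vote.

def majority_vote(votes):
    """Return 1 if strictly more than half of the binary votes are 1."""
    return int(sum(votes) > len(votes) / 2)

def collective_decision(nephrologist_votes, ai_probability=None, threshold=0.5):
    """Combine clinician votes with an optional AI voter.

    nephrologist_votes: list of 0/1 labels from individual clinicians.
    ai_probability: model output in [0, 1], thresholded to a 0/1 vote.
    """
    votes = list(nephrologist_votes)
    if ai_probability is not None:
        votes.append(int(ai_probability >= threshold))
    return majority_vote(votes)

# Example: a 2-2 split among four clinicians is resolved by a confident
# AI vote, tipping the strict majority.
print(collective_decision([1, 1, 0, 0]))        # -> 0 (clinicians alone)
print(collective_decision([1, 1, 0, 0], 0.9))   # -> 1 (AI tips the majority)
```

Under strict-majority voting, the AI vote can only change the outcome in near-tie cases, which is consistent with the idea that the model assists rather than overrides clinicians.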

Healthcare ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1579
Author(s):  
Wansuk Choi ◽  
Seoyoon Heo

The purpose of this study was to classify ULTT videos through transfer learning with pre-trained deep learning models and to compare the performance of the models. We conducted transfer learning by combining a pre-trained convolutional neural network (CNN) model into a Python-based deep learning pipeline. Videos were sourced from YouTube, and 103,116 frames converted from the video clips were analyzed. In the modeling implementation, the steps of importing the required modules, performing the necessary data preprocessing for training, defining the model, compiling it, creating the model, and fitting it were applied in sequence. The comparative models were Xception, InceptionV3, DenseNet201, NASNetMobile, DenseNet121, VGG16, VGG19, and ResNet101, and fine-tuning was performed. They were trained in a high-performance computing environment, and validation accuracy and validation loss were measured as comparative indicators of performance. Relatively low validation loss and high validation accuracy were obtained from the Xception, InceptionV3, and DenseNet201 models, indicating that they performed well compared with the other models. In contrast, VGG16, VGG19, and ResNet101 yielded relatively high validation loss and low validation accuracy. There was a narrow range of difference between the validation accuracy and the validation loss of the Xception, InceptionV3, and DenseNet201 models. This study suggests that training applied with transfer learning can classify ULTT videos, and that there is a difference in performance between models.
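The transfer-learning setup described above (a pre-trained backbone with a new classification head, compared across several architectures) can be sketched in Keras roughly as follows. The four-class output and 224 × 224 input are placeholder assumptions not taken from the abstract, and `weights=None` stands in for the `'imagenet'` weights that would be used in practice, to keep the sketch self-contained:

```python
import tensorflow as tf

NUM_CLASSES = 4  # hypothetical number of ULTT classes

def build_transfer_model(backbone_cls, input_shape=(224, 224, 3)):
    """Attach a small classification head to a pre-trained backbone."""
    base = backbone_cls(weights=None,  # 'imagenet' in practice
                        include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the backbone for the transfer phase
    inputs = tf.keras.Input(shape=input_shape)
    x = base(inputs, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Compare several of the backbones named in the study on equal footing.
backbones = {
    "VGG16": tf.keras.applications.VGG16,
    "InceptionV3": tf.keras.applications.InceptionV3,
    "DenseNet121": tf.keras.applications.DenseNet121,
}
models = {name: build_transfer_model(cls) for name, cls in backbones.items()}
```

Each model would then be trained with the same `model.fit(...)` call on the frame dataset, so that validation accuracy and loss are directly comparable across architectures.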


2020 ◽  
Vol 141 ◽  
pp. 104231 ◽  
Author(s):  
Eiichiro Uchino ◽  
Kanata Suzuki ◽  
Noriaki Sato ◽  
Ryosuke Kojima ◽  
Yoshinori Tamada ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Qiang Lin ◽  
Tongtong Li ◽  
Chuangui Cao ◽  
Yongchun Cao ◽  
Zhengxing Man ◽  
...  

Abstract
SPECT nuclear medicine imaging is widely used for treating, diagnosing, evaluating and preventing various serious diseases. The automated classification of medical images is becoming increasingly important in developing computer-aided diagnosis systems. Deep learning, particularly convolutional neural networks, has been widely applied to the classification of medical images. To reliably classify SPECT bone images for the automated diagnosis of metastasis, the sole focus of this SPECT imaging, we present several deep classifiers based on deep networks. Specifically, the original SPECT images are cropped to extract the thoracic region, followed by a geometric transformation that augments the original data. We then construct deep classifiers based on the widely used VGG, ResNet and DenseNet networks by fine-tuning their parameters and structures or by self-defining new network structures. Experiments on a set of real-world SPECT bone images show that the proposed classifiers perform well in identifying bone metastasis with SPECT imaging, achieving 0.9807, 0.9900, 0.9830, 0.9890, 0.9802 and 0.9933 for accuracy, precision, recall, specificity, F-1 score and AUC, respectively, on the test samples from the augmented dataset without normalization.
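The preprocessing pipeline described above — cropping the thoracic region and applying geometric transformations for augmentation — can be sketched with NumPy. The crop coordinates and the particular set of transformations are illustrative assumptions; the paper does not specify them here:

```python
import numpy as np

def crop_region(image, row_range, col_range):
    """Crop a rectangular region (standing in for the thoracic area)."""
    r0, r1 = row_range
    c0, c1 = col_range
    return image[r0:r1, c0:c1]

def augment_geometric(patch):
    """Simple geometric transformations used to augment the original data."""
    return [
        patch,             # original
        np.fliplr(patch),  # horizontal mirror
        np.flipud(patch),  # vertical mirror
        np.rot90(patch),   # 90-degree rotation
    ]

# Toy 10 x 10 "image"; a real SPECT scan would be much larger.
image = np.arange(100).reshape(10, 10)
patch = crop_region(image, (2, 8), (1, 9))
augmented = augment_geometric(patch)
print(len(augmented), patch.shape)  # -> 4 (6, 8)
```

Each original image thus yields several training samples, which is how the "augmented dataset" referenced in the reported results is produced before fine-tuning the VGG/ResNet/DenseNet classifiers.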


Author(s):  
Kasikrit Damkliang ◽  
Thakerng Wongsirichot ◽  
Paramee Thongsuksai

Since the introduction of image pattern recognition and computer vision processing, the classification of cancer tissues has been a challenge at the pixel, slide, and patient levels. Conventional machine learning techniques have given way to deep learning (DL), a contemporary, state-of-the-art approach to texture classification and localization of cancer tissues. Colorectal cancer (CRC) is the third-ranked cause of death from cancer worldwide. This paper proposes image-level texture classification of a CRC dataset with deep convolutional neural networks (CNNs). Simple DL techniques consisting of transfer learning and fine-tuning were exploited. VGG-16, a Keras pre-trained model with initial weights from ImageNet, was applied. A transfer learning architecture and methods based on VGG-16 are proposed. The training, validation, and testing sets included 5000 images of 150 × 150 pixels. The application set for detection and localization contained 10 large original images of 5000 × 5000 pixels. The model achieved an F1-score and accuracy of 0.96 and 0.99, respectively, and produced a false positive rate of 0.01. AUC-based evaluation was also performed. The model classified ten large, previously unseen images from the application set, represented as false color maps. The reported results show the satisfactory performance of the model. The simplicity of the architecture, configuration, and implementation also contributes to the outcome of this work.
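The VGG-16 transfer-learning setup described above can be sketched in Keras as follows. The 150 × 150 input matches the patch size given in the abstract; the single-output sigmoid head and the dense-layer width are illustrative assumptions, and `weights=None` replaces the `'imagenet'` initialization named in the abstract so the sketch runs without a download:

```python
import tensorflow as tf

# Pre-trained VGG-16 convolutional base; 'imagenet' weights in the paper.
base = tf.keras.applications.VGG16(
    weights=None,               # 'imagenet' in practice
    include_top=False,
    input_shape=(150, 150, 3),  # matches the 150 x 150 patches in the abstract
)
base.trainable = False  # transfer learning: reuse the learned features as-is

# New classification head on top of the frozen base (layout is illustrative).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
```

For the large 5000 × 5000 application images, each 150 × 150 tile would be scored by this model and the per-tile predictions assembled into the false color maps the abstract describes.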


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Ji Young Lee ◽  
Jong Soo Kim ◽  
Tae Yoon Kim ◽  
Young Soo Kim

Abstract
A novel deep-learning algorithm for artificial neural networks (ANNs), completely different from the back-propagation method, was developed in a previous study. The purpose of this study was to assess the feasibility of using that algorithm for the detection of intracranial haemorrhage (ICH) and the classification of its subtypes, without employing a convolutional neural network (CNN). For the detection of ICH using the summation of all the computed tomography (CT) images for each case, the area under the ROC curve (AUC) was 0.859, and the sensitivity and specificity were 78.0% and 80.0%, respectively. Regarding ICH localisation, CT images were divided into 10 subdivisions based on the intracranial height. With the 41–50% subdivision, the best diagnostic performance for detecting ICH was obtained, with an AUC of 0.903, a sensitivity of 82.5%, and a specificity of 84.1%. For the classification of ICH into subtypes, the accuracy rate for subarachnoid haemorrhage (SAH) was notably high at 91.7%. This study revealed that our approach can greatly reduce the ICH diagnosis time in an actual emergency situation with fairly good diagnostic performance.
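The height-based localisation described above divides each scan into 10 equal subdivisions of the intracranial height, so the 41–50% band is subdivision 5. A minimal sketch of that binning (the function name and linear mapping are assumptions, not the authors' code):

```python
def height_subdivision(z, z_min, z_max, n_bins=10):
    """Map a slice height z to one of n_bins equal subdivisions (1..n_bins).

    z_min/z_max are the lowest and highest intracranial heights of the scan.
    """
    frac = (z - z_min) / (z_max - z_min)          # fraction of total height
    return min(int(frac * n_bins) + 1, n_bins)    # clamp the top edge

# A slice at 45% of the intracranial height falls in the 41-50% band.
print(height_subdivision(4.5, 0.0, 10.0))  # -> 5
```

The study's best detection performance (AUC 0.903) was obtained in exactly this 41–50% subdivision.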


Author(s):  
Vinit Kumar Gunjan ◽  
Rashmi Pathak ◽  
Omveer Singh

This article describes how to establish a neural network technique for classifying various image groups through convolutional neural network (CNN) training. In addition, it presents initial classification results using CNN-learned features for the classification of images from different categories. To determine the correct architecture, we explore a transfer learning technique, namely the fine-tuning of deep learning models, applied to a dataset used to provide solutions for the individual image classes.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 512
Author(s):  
Xiwei Huang ◽  
Jixuan Liu ◽  
Jiangfan Yao ◽  
Maoyu Wei ◽  
Wentao Han ◽  
...  

The differential count of white blood cells (WBCs) is a widely used approach to assess the status of a patient's immune system. Currently, the main methods of differential WBC counting are manual counting and automatic instrument analysis with labeling preprocessing. However, both methods are complicated to operate and may interfere with the physiological states of the cells. Therefore, we propose a deep learning-based method to perform label-free classification of three types of WBCs based on their morphologies, and to judge whether neutrophils are activated or inactivated. Over 90% accuracy was ultimately achieved by a fine-tuned, pre-trained ResNet-50 network. This deep learning-based method for label-free WBC classification avoids complex instrumental operation and the interference of fluorescent labeling with the physiological states of the cells, which is promising for future point-of-care applications.
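The fine-tuning of a pre-trained ResNet-50 described above can be sketched in Keras as follows. The three-class softmax head matches the three WBC types in the abstract; the choice of how many layers to unfreeze, the 224 × 224 input, and the learning rate are illustrative assumptions, and `weights=None` stands in for the pre-trained weights to keep the sketch self-contained:

```python
import tensorflow as tf

# Pre-trained ResNet-50 base ('imagenet' or similar weights in practice).
base = tf.keras.applications.ResNet50(
    weights=None,  # pre-trained weights in practice
    include_top=False, input_shape=(224, 224, 3), pooling="avg")

# Fine-tuning: unfreeze only the deepest layers, keep earlier ones frozen
# so the low-level ImageNet features are preserved.
base.trainable = True
for layer in base.layers[:-10]:
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),  # three WBC types
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # small LR for fine-tuning
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

The small learning rate is the usual safeguard when fine-tuning: it nudges the pre-trained weights toward the label-free WBC images without destroying the features learned from the original dataset.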


2022 ◽  
Vol 23 (1) ◽  
Author(s):  
Tulika Kakati ◽  
Dhruba K. Bhattacharyya ◽  
Jugal K. Kalita ◽  
Trina M. Norden-Krichmar

Abstract
Background: A limitation of traditional differential expression analysis on small datasets is the possibility of false positives and false negatives due to sample variation. Considering recent advances in deep learning (DL) based models, we wanted to expand the state of the art in disease biomarker prediction from RNA-seq data using DL. However, applying DL to RNA-seq data is challenging because of the absence of appropriate labels and the small sample size relative to the number of genes. Deep learning coupled with transfer learning can improve prediction performance on novel data by incorporating patterns learned from other related data. With the emergence of new disease datasets, biomarker prediction would be facilitated by a generalized model that can transfer the knowledge of trained feature maps to the new dataset. To the best of our knowledge, there is no convolutional neural network (CNN)-based model coupled with transfer learning to predict the significant upregulating (UR) and downregulating (DR) genes from both trained and untrained datasets.
Results: We implemented a CNN model, DEGnext, to predict UR and DR genes from gene expression data obtained from The Cancer Genome Atlas database. DEGnext uses biologically validated data along with logarithmic fold change values to classify differentially expressed genes (DEGs) as UR and DR genes. We applied transfer learning to our model to leverage the knowledge of trained feature maps on untrained cancer datasets. DEGnext's results (ROC scores between 88 and 99%) were competitive with those of five traditional machine learning methods: Decision Tree, K-Nearest Neighbors, Random Forest, Support Vector Machine, and XGBoost. DEGnext was robust and effective in transferring learned feature maps to facilitate the classification of unseen datasets. Additionally, we validated that the DEGs predicted by DEGnext mapped to significant Gene Ontology terms and pathways related to cancer.
Conclusions: DEGnext can classify DEGs into UR and DR genes from RNA-seq cancer datasets with high performance. This type of analysis, using biologically relevant fine-tuning data, may aid in the exploration of potential biomarkers and can be adapted to other disease datasets.
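The UR/DR labeling that DEGnext learns is grounded in log fold change values, as the abstract notes. A minimal sketch of that labeling rule (the threshold of 1.0 and the "NS" category are illustrative conventions, not DEGnext's exact criteria, and DEGnext additionally uses biologically validated data and a CNN, both omitted here):

```python
def label_degs(log2_fold_changes, threshold=1.0):
    """Label genes as upregulating (UR), downregulating (DR), or
    not significant (NS) from their log2 fold change values."""
    labels = []
    for lfc in log2_fold_changes:
        if lfc >= threshold:
            labels.append("UR")
        elif lfc <= -threshold:
            labels.append("DR")
        else:
            labels.append("NS")
    return labels

# Three hypothetical genes: strongly up, strongly down, unchanged.
print(label_degs([2.3, -1.7, 0.4]))  # -> ['UR', 'DR', 'NS']
```

Labels of this kind would serve as the training targets for the CNN, with transfer learning then carrying the learned feature maps over to untrained cancer datasets.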

