Automated classification of hepatocellular carcinoma differentiation using multiphoton microscopy and deep learning

2019 ◽  
Vol 12 (7) ◽  
Author(s):  
Hongxin Lin ◽  
Chao Wei ◽  
Guangxing Wang ◽  
Hu Chen ◽  
Lisheng Lin ◽  
...  
2018 ◽  
Vol 23 (06) ◽  
pp. 1 ◽  
Author(s):  
Mikko J. Huttunen ◽  
Abdurahman Hassan ◽  
Curtis W. McCloskey ◽  
Sijyl Fasih ◽  
Jeremy Upham ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Song-Quan Ong ◽  
Hamdan Ahmad ◽  
Gomesh Nair ◽  
Pradeep Isawasan ◽  
Abdul Hafiz Ab Majid

Abstract
Classification of Aedes aegypti (Linnaeus) and Aedes albopictus (Skuse) by humans remains challenging. We proposed a highly accessible method to develop a deep learning (DL) model and implement it for mosquito image classification using modest, readily available hardware. In particular, we constructed a dataset of 4120 images of Aedes mosquitoes older than 12 days, by which age the common morphological features used for identification had disappeared, and we illustrated how to set up supervised deep convolutional neural networks (DCNNs) with hyperparameter tuning. The model was then deployed externally, in real time, on three different generations of mosquitoes, and its accuracy was compared with human expert performance. Our results showed that both the learning rate and the number of epochs significantly affected accuracy, and the best-performing hyperparameters achieved over 98% accuracy in classifying mosquitoes, not significantly different from human-level performance. We demonstrated the feasibility of constructing a DCNN model with this method and deploying it externally on mosquitoes in real time.
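The hyperparameter adjustment described above amounts to searching a grid of learning rates and epoch counts and keeping the configuration with the best validation accuracy. A minimal sketch of that loop in plain Python, where `train_and_evaluate` is a hypothetical stand-in for training the DCNN and returning its validation accuracy (the dummy score below is illustrative only, not the paper's model):

```python
from itertools import product

def train_and_evaluate(lr, epochs):
    """Hypothetical stand-in: train the DCNN with these
    hyperparameters and return validation accuracy.
    Dummy deterministic score, peaking at lr=1e-3, epochs=30."""
    return 0.98 - abs(lr - 1e-3) * 10 - abs(epochs - 30) * 0.001

def grid_search(lrs, epoch_counts):
    """Exhaustively try every (lr, epochs) pair and keep
    the configuration with the highest accuracy."""
    best = None
    for lr, epochs in product(lrs, epoch_counts):
        acc = train_and_evaluate(lr, epochs)
        if best is None or acc > best[2]:
            best = (lr, epochs, acc)
    return best

lr, epochs, acc = grid_search([1e-4, 1e-3, 1e-2], [10, 30, 50])
print(lr, epochs)  # best-scoring configuration on the dummy score
```

In practice, `train_and_evaluate` would wrap the full training run, so each grid cell is expensive; that is why the abstract's finding that learning rate and epochs dominate accuracy is useful for pruning the grid.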


2006 ◽  
Vol 14 (7S_Part_19) ◽  
pp. P1067-P1068 ◽  
Author(s):  
Pradeep Anand Ravindranath ◽  
Rema Raman ◽  
Tiffany W. Chow ◽  
Michael S. Rafii ◽  
Paul S. Aisen ◽  
...  

2021 ◽  
Author(s):  
Fangyao Tang ◽  
Xi Wang ◽  
An-ran Ran ◽  
Carmen KM Chan ◽  
Mary Ho ◽  
...  

Objective: Diabetic macular edema (DME) is the primary cause of vision loss among individuals with diabetes mellitus (DM). We developed, validated, and tested a deep-learning (DL) system for classifying DME using images from three common commercially available optical coherence tomography (OCT) devices.

Research Design and Methods: We trained and validated two versions of a multi-task convolutional neural network (CNN) to classify DME (center-involved DME [CI-DME], non-CI-DME, or absence of DME) using three-dimensional (3D) volume scans and two-dimensional (2D) B-scans, respectively. For both the 3D and 2D CNNs, we employed the residual network (ResNet) as the backbone; for the 3D CNN, we used a 3D version of ResNet-34 with the last fully connected layer removed as the feature-extraction module. A total of 73,746 OCT images were used for training and primary validation. External testing was performed on 26,981 images across seven independent datasets from Singapore, Hong Kong, the US, China, and Australia.

Results: In classifying the presence or absence of DME, the DL system achieved areas under the receiver operating characteristic curve (AUROCs) of 0.937 (95% CI 0.920–0.954), 0.958 (0.930–0.977), and 0.965 (0.948–0.977) on the primary datasets obtained from Cirrus, Spectralis, and Triton OCTs, respectively, and AUROCs greater than 0.906 on the external datasets. For the further classification of CI-DME versus non-CI-DME, the AUROCs were 0.968 (0.940–0.995), 0.951 (0.898–0.982), and 0.975 (0.947–0.991) on the primary datasets and greater than 0.894 on the external datasets.

Conclusion: We demonstrated excellent performance of a DL system for the automated classification of DME, highlighting its potential as a second-line screening tool for patients with DM that may enable more effective triage to eye clinics.
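The AUROC values reported above are standard rank statistics: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal pure-Python sketch of that computation (illustrative only, not the authors' evaluation code):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    (negative, positive) pairs in which the positive example
    scores higher than the negative one (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Toy example: one of the two positives outranks both negatives,
# the other outranks only one of them.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

The pairwise form above is O(n²); production metric libraries compute the same quantity from a sorted ROC curve, and the confidence intervals quoted in the abstract are typically obtained by bootstrapping this statistic.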


2020 ◽  
Author(s):  
Victor Nozais ◽  
Philippe Boutinaud ◽  
Violaine Verrecchia ◽  
Marie-Fateye Gueye ◽  
Pierre Yves Hervé ◽  
...  

Functional connectivity analyses of fMRI data have shown that the activity of the brain at rest is spatially organized into resting-state networks (RSNs). RSNs appear as groups of anatomically distant but functionally tightly connected brain regions. Inter-RSN intrinsic connectivity analyses may provide an optimal spatial level of integration to analyze the variability of the functional connectome. Here, we propose a deep learning approach to enable the automated classification of individual independent-component (IC) decompositions into a set of predefined RSNs. Two databases were used in this work, BIL&GIN and MRi-Share, with 427 and 1811 participants respectively. We trained a multi-layer perceptron (MLP) to classify each IC as one of 45 RSNs, using the IC classification of 282 participants in BIL&GIN for training and a 5-dimensional parameter grid search for hyperparameter optimization. It reached an accuracy of 92%. Predictions on the remaining individuals in BIL&GIN were tested against the original classification and demonstrated good spatial overlap between the cortical RSNs. As a first application, we created an RSN atlas based on MRi-Share. This atlas defined a brain parcellation in 29 RSNs covering 96% of the gray matter. Second, we proposed an individual-based analysis of the subdivision of the default-mode network into 4 networks. Minimal overlap between RSNs was found except in the angular gyrus and potentially in the precuneus. We thus provide the community with an individual IC classifier that can be used to analyze one dataset or to statistically compare different datasets for RSN spatial definitions.
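The spatial overlap tested above is commonly quantified with the Dice coefficient between binarized network maps. A hedged sketch, assuming each RSN is represented as a set of voxel indices (illustrative only, not the authors' pipeline):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as sets
    of voxel indices: 2|A ∩ B| / (|A| + |B|). Ranges from 0
    (disjoint) to 1 (identical)."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks overlap trivially
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy RSN masks sharing two of their three voxels:
rsn_individual = {(1, 2, 3), (1, 2, 4), (1, 3, 3)}
rsn_reference = {(1, 2, 4), (1, 3, 3), (2, 3, 3)}
print(round(dice(rsn_individual, rsn_reference), 4))  # 0.6667
```

The same measure supports the atlas claims: a parcellation where Dice between distinct RSNs is near zero everywhere except, as reported, in the angular gyrus and possibly the precuneus.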


Patterns ◽  
2021 ◽  
Vol 2 (10) ◽  
pp. 100351 ◽  
Author(s):  
Nanditha Mallesh ◽  
Max Zhao ◽  
Lisa Meintker ◽  
Alexander Höllein ◽  
Franz Elsner ◽  
...  
