MAN VS MACHINE: AN INTER-OBSERVER STUDY OF A DEEP LEARNING IVUS ANALYSIS PLATFORM AGAINST 3 OBSERVERS

2021 ◽  
Vol 77 (18) ◽  
pp. 1207
Author(s):  
David Molony ◽  
Jasmine Chan ◽  
Sameer Khawaja ◽  
Hossein Hosseini ◽  
Adam Brown ◽  
...  

2020 ◽  
Vol 203 ◽  
pp. e120
Author(s):  
Gerardo Fernandez* ◽  
Richard Scott ◽  
Abishek Sainath Madduri ◽  
Marcel Prastawa ◽  
Bahram Marami ◽  
...  

2021 ◽  
Author(s):  
Shogo Suga ◽  
Koki Nakamura ◽  
Bruno M Humbel ◽  
Hiroki Kawai ◽  
Yusuke Hirabayashi

Outer and inner mitochondrial membranes are highly specialized structures with distinct functional properties. Reconstructing complex 3D ultrastructural features of mitochondrial membranes at the nanoscale requires analysis of large volumes of serial scanning electron tomography data. While deep-learning-based methods have recently grown in sophistication, time-consuming human intervention remains a major roadblock to efficient and accurate analysis of organelle ultrastructure. To overcome this limitation, we developed a deep-learning image analysis platform called Python-based Human-In-the-LOop Workflows (PHILOW). Our implementation of an iterative segmentation algorithm and a Three-Axis-Prediction method not only improved segmentation speed but also provided unprecedented ultrastructural detail of whole mitochondria and cristae. Using PHILOW, we found that 42% of the cristae surface exhibits tubular structures that are not recognizable in light microscopy or 2D electron microscopy. Furthermore, we unraveled a fundamentally new regulatory function for the dynamin-related GTPase Optic Atrophy 1 (OPA1) in controlling the balance between lamellar and tubular cristae subdomains.
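The Three-Axis-Prediction idea described above can be sketched in a few lines: a 2D model predicts each slice of the volume along every orthogonal axis, and the three resulting probability volumes are averaged into a consensus. This is a minimal NumPy sketch, not the PHILOW implementation; `predict_slice` is a hypothetical stand-in for a trained 2D segmentation network.

```python
import numpy as np

def predict_slice(img2d):
    # Hypothetical stand-in for a trained 2D segmentation network:
    # here, a simple intensity threshold producing a probability map.
    return (img2d > 0.5).astype(np.float32)

def three_axis_prediction(volume):
    """Average per-slice predictions taken along each orthogonal axis."""
    probs = np.zeros_like(volume, dtype=np.float32)
    for axis in range(3):
        moved = np.moveaxis(volume, axis, 0)          # slice along `axis`
        pred = np.stack([predict_slice(s) for s in moved])
        probs += np.moveaxis(pred, 0, axis)           # restore orientation
    return probs / 3.0                                # consensus probability

vol = np.random.rand(8, 8, 8)
consensus = three_axis_prediction(vol)
mask = consensus > 0.5                                # final segmentation
```

With a real network, the three axis-wise predictions differ, and averaging them suppresses the slice-direction artifacts that a single-axis 2D model produces.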


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jae Won Choi ◽  
Yeon Jin Cho ◽  
Ji Young Ha ◽  
Seul Bi Lee ◽  
Seunghyun Lee ◽  
...  

Abstract: This study aimed to evaluate a deep learning model for generating synthetic contrast-enhanced CT (sCECT) from non-contrast chest CT (NCCT). A deep learning model was applied to generate sCECT from NCCT. We collected three separate data sets: a development set (n = 25) for model training and tuning, test set 1 (n = 25) for technical evaluation, and test set 2 (n = 12) for clinical utility evaluation. In test set 1, image similarity metrics were calculated. In test set 2, the lesion contrast-to-noise ratio of the mediastinal lymph nodes was measured, and an observer study was conducted to compare lesion conspicuity. Comparisons were performed using the paired t-test or Wilcoxon signed-rank test. In test set 1, sCECT showed a lower mean absolute error (41.72 vs 48.74; P < .001), higher peak signal-to-noise ratio (17.44 vs 15.97; P < .001), higher multiscale structural similarity index measurement (0.84 vs 0.81; P < .001), and lower learned perceptual image patch similarity metric (0.14 vs 0.15; P < .001) than NCCT. In test set 2, the contrast-to-noise ratio of the mediastinal lymph nodes was higher in the sCECT group than in the NCCT group (6.15 ± 5.18 vs 0.74 ± 0.69; P < .001). In the observer study, all reviewers rated lesion conspicuity higher in NCCT combined with sCECT than in NCCT alone (P ≤ .001). Synthetic CECT generated from NCCT improves the depiction of mediastinal lymph nodes.
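Two of the similarity metrics reported above, mean absolute error and peak signal-to-noise ratio, are straightforward to compute. This is a generic NumPy sketch of the standard definitions, not the study's evaluation code; the toy 2x2 image values are made up for illustration.

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images (e.g. in Hounsfield units)."""
    return np.abs(a.astype(np.float64) - b.astype(np.float64)).mean()

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

reference = np.array([[100.0, 120.0], [140.0, 160.0]])
candidate = np.array([[102.0, 118.0], [141.0, 158.0]])
print(mae(reference, candidate))                          # 1.75
print(round(psnr(reference, candidate, data_range=255.0), 2))  # 43.01
```

Lower MAE and higher PSNR both indicate that the candidate image is closer to the reference, matching the direction of the improvements reported for sCECT.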


2019 ◽  
Author(s):  
Alan Bauer ◽  
Aaron George Bostrom ◽  
Joshua Ball ◽  
Christopher Applegate ◽  
Tao Cheng ◽  
...  

Abstract: Aerial imagery is regularly used by farmers and growers to monitor crops during the growing season. To extract meaningful phenotypic information from large-scale aerial images collected regularly from the field, high-throughput analytic solutions are required, which not only produce high-quality measures of key crop traits, but also support agricultural practitioners in making reliable management decisions about their crops. Here, we report AirSurf-Lettuce, an automated and open-source aerial image analysis platform that combines modern computer vision, up-to-date machine learning, and modular software engineering to measure yield-related phenotypes of millions of lettuces across the field. Utilising ultra-large normalized difference vegetation index (NDVI) images acquired by fixed-wing light aircraft together with a deep-learning classifier trained on over 100,000 labelled lettuce signals, the platform is capable of scoring and categorising iceberg lettuces with high accuracy (>98%). Furthermore, novel analysis functions have been developed to map the lettuce size distribution across the field, from which global positioning system (GPS)-tagged harvest regions can be derived, enabling growers and farmers to plan precise harvest strategies and estimate marketability before harvest.
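The NDVI images that feed the AirSurf-Lettuce classifier are computed from near-infrared and red reflectance with the standard formula (NIR - Red) / (NIR + Red). This is a generic NumPy sketch of that formula, not AirSurf-Lettuce code; the sample reflectance values are illustrative.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Healthy vegetation reflects strongly in NIR, giving NDVI near +1;
# bare soil or senescent plants give values near zero or below.
nir = np.array([0.8, 0.5, 0.1])
red = np.array([0.1, 0.4, 0.3])
print(np.round(ndvi(nir, red), 3))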


2020 ◽  
Author(s):  
Sana Syed ◽  
Lubaina Ehsan ◽  
Aman Shrivastava ◽  
Saurav Sengupta ◽  
Marium Khan ◽  
...  

Objectives: Striking histopathological overlap between distinct but related conditions poses a significant diagnostic challenge. There is a major clinical need for computational methods that enable clinicians to translate heterogeneous biomedical images into accurate, quantitative diagnostics. This need is particularly salient for the small bowel enteropathies Environmental Enteropathy (EE) and Celiac Disease (CD). We built upon our preliminary analysis by developing an artificial intelligence (AI)-based image analysis platform utilizing deep learning convolutional neural networks (CNNs) for these enteropathies. Methods: Data for secondary analysis were obtained from three primary studies at different sites. The image analysis platform for EE and CD was developed using CNNs (ResNet and a custom shallow CNN). Gradient-weighted Class Activation Mappings (Grad-CAMs) were used to visualize the decision-making process of the models. A team of medical experts reviewed the stain-color-normalized images (normalization was performed for bias reduction) and the Grad-CAM visualizations to confirm structural preservation and biological relevance, respectively. Results: 461 high-resolution biopsy images from 150 children were acquired. The median age (interquartile range) was 37.5 (19.0 to 121.5) months, with a roughly equal sex distribution (77 males, 51.3%). ResNet50 and the shallow CNN demonstrated 98% and 96% case-detection accuracy, respectively, which increased to 98.3% with an ensemble. Grad-CAMs demonstrated the ability of the models to learn distinct microscopic morphological features. 
Conclusion: Our AI-based image analysis platform demonstrated high classification accuracy for small bowel enteropathies. It identified biologically relevant microscopic features, emulated the human pathologist's decision-making process, performed well even in suboptimal computational environments, and can be modified to further improve disease classification accuracy. The Grad-CAMs illuminated the otherwise black box of deep learning in medicine, supporting physician confidence in adopting these new technologies in clinical practice.
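The abstract reports that an ensemble of ResNet50 and the shallow CNN raised accuracy to 98.3%, but does not describe the ensembling scheme. A common default is to average the two models' class probabilities and take the argmax; this NumPy sketch illustrates that approach with made-up probabilities (the variable names and the three-class layout are assumptions, not the study's).

```python
import numpy as np

def ensemble_predict(prob_resnet, prob_shallow):
    """Average class probabilities from two models and take the argmax."""
    avg = (prob_resnet + prob_shallow) / 2.0
    return np.argmax(avg, axis=1)

# Hypothetical per-image class probabilities for three classes.
p1 = np.array([[0.7, 0.2, 0.1],
               [0.3, 0.6, 0.1]])
p2 = np.array([[0.5, 0.3, 0.2],
               [0.2, 0.7, 0.1]])
print(ensemble_predict(p1, p2))   # [0 1]
```

Averaging probabilities lets the models compensate for each other's errors, which is why an ensemble can outperform either member alone.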


Author(s):  
Haibo Mi ◽  
Kele Xu ◽  
Yang Xiang ◽  
Yulin He ◽  
Dawei Feng ◽  
...  

Recently, deep learning has witnessed dramatic progress in the medical image analysis field. In the precise treatment of cancer immunotherapy, quantitative analysis of PD-L1 immunohistochemistry is of great importance, yet pathologists commonly quantify the cell nuclei manually, a process that is time-consuming and error-prone. In this paper, we describe the development of a platform for quantitative analysis of PD-L1 pathological images using deep learning approaches. As point-level annotations can provide a rough estimate of object locations and classifications, the platform adopts a point-level supervision model to classify, localize, and count PD-L1 cell nuclei. The platform has achieved accurate quantitative analysis of PD-L1 for two types of carcinoma and has been deployed in one of the first-class hospitals in China.
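Once a point-supervised model emits a per-pixel probability map of nucleus locations, counting reduces to thresholding the map and labeling connected blobs. This is a generic sketch of that final counting step using `scipy.ndimage.label`, not the platform's implementation; the toy probability map is made up.

```python
import numpy as np
from scipy import ndimage

def count_nuclei(prob_map, threshold=0.5):
    """Threshold a predicted probability map and count connected blobs,
    each blob taken as one detected nucleus."""
    mask = prob_map > threshold
    _, n_blobs = ndimage.label(mask)
    return n_blobs

# Toy probability map with two separated high-probability regions.
prob = np.zeros((6, 6))
prob[1, 1] = 0.9
prob[4, 3:5] = 0.8
print(count_nuclei(prob))   # 2
```

In practice a small minimum-blob-size filter is often added so that single-pixel noise above the threshold is not counted as a nucleus.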

