Deep learning based multi-organ segmentation and metastases segmentation in whole mouse body and the cryo-imaging cancer imaging and therapy analysis platform (CITAP)

Author(s): Yiqiao Liu, Madhu Gargesha, Mohammed Qutaish, Zhuxian Zhou, Bryan Scott, ...
2020, Vol 189, pp. 105316
Author(s): Rogier R. Wildeboer, Ruud J.G. van Sloun, Hessel Wijkstra, Massimo Mischi
2020, Vol 203, pp. e120
Author(s): Gerardo Fernandez*, Richard Scott, Abishek Sainath Madduri, Marcel Prastawa, Bahram Marami, ...
2021
Author(s): Shogo Suga, Koki Nakamura, Bruno M Humbel, Hiroki Kawai, Yusuke Hirabayashi

Outer and inner mitochondrial membranes are highly specialized structures with distinct functional properties. Reconstructing complex 3D ultrastructural features of mitochondrial membranes at the nanoscale requires analysis of large volumes of serial scanning electron tomography data. Although deep-learning-based methods have recently grown in sophistication, time-consuming human intervention remains a major roadblock to efficient and accurate analysis of organelle ultrastructure. To overcome this limitation, we developed a deep-learning image analysis platform called Python-based Human-In-the-Loop Workflow (PHILOW). Our implementation of an iterative segmentation algorithm and a Three-Axis-Prediction method not only improved segmentation speed but also provided unprecedented ultrastructural detail of whole mitochondria and cristae. Using PHILOW, we found that 42% of the cristae surface exhibits tubular structures that are not recognizable in light microscopy or 2D electron microscopy. Furthermore, we uncovered a fundamentally new regulatory function of the dynamin-related GTPase Optic Atrophy 1 (OPA1) in controlling the balance between lamellar and tubular cristae subdomains.
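The Three-Axis-Prediction idea named in the abstract can be illustrated with a minimal sketch: run a 2D segmentation model over slices of the volume along each of the three axes and average the resulting probability maps. This is an illustrative reconstruction, not the authors' implementation; the `predict_slice` callable and the plain averaging are assumptions.

```python
import numpy as np

def three_axis_prediction(volume, predict_slice):
    """Average per-slice 2D segmentation predictions taken along all
    three axes of a 3D volume (a sketch of three-axis prediction).

    `predict_slice` is a hypothetical callable mapping a 2D slice to a
    per-pixel probability map of the same shape.
    """
    preds = np.zeros(volume.shape, dtype=np.float64)
    # Axis 0: predict on xy slices
    for z in range(volume.shape[0]):
        preds[z] += predict_slice(volume[z])
    # Axis 1: predict on xz slices
    for y in range(volume.shape[1]):
        preds[:, y] += predict_slice(volume[:, y])
    # Axis 2: predict on yz slices
    for x in range(volume.shape[2]):
        preds[:, :, x] += predict_slice(volume[:, :, x])
    return preds / 3.0  # mean over the three axis-wise passes

# Usage with a trivial stand-in "model" (per-pixel thresholding):
volume = np.random.rand(4, 5, 6)
probs = three_axis_prediction(
    volume, lambda s: (s > 0.5).astype(np.float64))
```

Averaging predictions from orthogonal slicing directions reduces the anisotropy artifacts that a single slicing axis would introduce.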


2019
Author(s): Alan Bauer, Aaron George Bostrom, Joshua Ball, Christopher Applegate, Tao Cheng, ...

Aerial imagery is regularly used by farmers and growers to monitor crops during the growing season. To extract meaningful phenotypic information from large-scale aerial images collected regularly from the field, high-throughput analytic solutions are required that not only produce high-quality measures of key crop traits but also support agricultural practitioners in making reliable crop-management decisions. Here, we report AirSurf-Lettuce, an automated and open-source aerial image analysis platform that combines modern computer vision, up-to-date machine learning, and modular software engineering to measure yield-related phenotypes of millions of lettuces across the field. Utilising ultra-large normalized difference vegetation index (NDVI) images acquired by fixed-wing light aircraft together with a deep-learning classifier trained with over 100,000 labelled lettuce signals, the platform can score and categorise iceberg lettuces with high accuracy (>98%). Furthermore, novel analysis functions have been developed to map lettuce size distribution across the field, from which global positioning system (GPS)-tagged harvest regions can be derived to enable growers and farmers to plan precise harvest strategies and estimate marketability before the harvest.
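NDVI itself is a standard index computed from the near-infrared and red bands, NDVI = (NIR − Red) / (NIR + Red). A minimal computation sketch follows; the array shapes and the `eps` division guard are illustrative choices, not details from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index per pixel:
    (NIR - Red) / (NIR + Red). `eps` guards against division by
    zero on pixels where both bands are zero."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance patches (values in [0, 1])
nir = np.array([[0.8, 0.5], [0.3, 0.6]])
red = np.array([[0.2, 0.5], [0.1, 0.2]])
index = ndvi(nir, red)
```

Values near +1 indicate dense green vegetation, values near 0 bare soil; a classifier such as the one described can then be run on windows of the NDVI image.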


2021, Vol 77 (18), pp. 1207
Author(s): David Molony, Jasmine Chan, Sameer Khawaja, Hossein Hosseini, Adam Brown, ...

2020
Author(s): Sana Syed, Lubaina Ehsan, Aman Shrivastava, Saurav Sengupta, Marium Khan, ...

Objectives: Striking histopathological overlap between distinct but related conditions poses a significant diagnostic challenge. There is a major clinical need for computational methods that enable clinicians to translate heterogeneous biomedical images into accurate, quantitative diagnostics. This need is particularly salient in small bowel enteropathies: Environmental Enteropathy (EE) and Celiac Disease (CD). We built upon our preliminary analysis by developing an artificial intelligence (AI)-based image analysis platform utilizing deep learning convolutional neural networks (CNNs) for these enteropathies. Methods: Data for secondary analysis were obtained from three primary studies at different sites. The image analysis platform for EE and CD was developed using two CNN architectures: ResNet and a custom shallow CNN. Gradient-weighted Class Activation Mappings (Grad-CAMs) were used to visualize the decision-making process of the models. A team of medical experts reviewed the stain-color-normalized images (normalization was performed for bias reduction) and the Grad-CAM visualizations to confirm structural preservation and biological relevance, respectively. Results: 461 high-resolution biopsy images from 150 children were acquired. Median age (interquartile range) was 37.5 (19.0 to 121.5) months, with a roughly equal sex distribution: 77 males (51.3%). ResNet50 and the shallow CNN demonstrated 98% and 96% case-detection accuracy, respectively, which increased to 98.3% with an ensemble.  Grad-CAMs demonstrated the ability of the models to learn distinct microscopic morphological features.
Conclusion: Our AI-based image analysis platform demonstrated high classification accuracy for small bowel enteropathies, identified biologically relevant microscopic features, emulated the decision-making process of human pathologists, performed well even in suboptimal computational environments, and can be modified to improve disease classification accuracy. The Grad-CAMs employed illuminate the otherwise black box of deep learning in medicine, allowing for increased physician confidence in adopting these new technologies in clinical practice.
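The accuracy gain from combining the two models can be illustrated with a generic sketch of probability-level ensembling, in which the per-class probabilities of two classifiers are averaged before taking the argmax. The function name, equal weights, class ordering, and example probabilities below are hypothetical; the abstract does not specify the authors' exact ensembling scheme.

```python
import numpy as np

def ensemble_predict(probs_a, probs_b, weights=(0.5, 0.5)):
    """Weighted average of two models' class-probability outputs,
    followed by argmax over classes. A generic probability-level
    ensemble sketch, not the authors' exact scheme."""
    wa, wb = weights
    combined = wa * np.asarray(probs_a) + wb * np.asarray(probs_b)
    return combined, combined.argmax(axis=-1)

# Hypothetical per-class probabilities for two biopsy images
# (columns: EE, CD, control) from the two models:
resnet_probs = np.array([[0.7, 0.2, 0.1],
                         [0.3, 0.4, 0.3]])
shallow_probs = np.array([[0.6, 0.3, 0.1],
                          [0.1, 0.8, 0.1]])
combined, labels = ensemble_predict(resnet_probs, shallow_probs)
```

Averaging probabilities (rather than hard votes) lets a confident model outvote an uncertain one, which is one common way such an ensemble can exceed either member's individual accuracy.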

