MP09-12 AN AUTOMATED NOVEL DEEP LEARNING H&E IMAGE ANALYSIS PLATFORM OUTPERFORMS CLINICAL ONLY MODELS AND GLEASON GRADING TO PREDICT POSTOPERATIVE DISEASE RECURRENCE

2020 ◽  
Vol 203 ◽  
pp. e120
Author(s):  
Gerardo Fernandez* ◽  
Richard Scott ◽  
Abishek Sainath Madduri ◽  
Marcel Prastawa ◽  
Bahram Marami ◽  
...  
2019 ◽  
Author(s):  
Alan Bauer ◽  
Aaron George Bostrom ◽  
Joshua Ball ◽  
Christopher Applegate ◽  
Tao Cheng ◽  
...  

Abstract: Aerial imagery is regularly used by farmers and growers to monitor crops during the growing season. To extract meaningful phenotypic information from large-scale aerial images collected regularly from the field, high-throughput analytic solutions are required that not only produce high-quality measures of key crop traits but also help agricultural practitioners make reliable management decisions about their crops. Here, we report AirSurf-Lettuce, an automated and open-source aerial image analysis platform that combines modern computer vision, up-to-date machine learning, and modular software engineering to measure yield-related phenotypes of millions of lettuces across the field. Utilising ultra-large normalized difference vegetation index (NDVI) images acquired by fixed-wing light aircraft together with a deep-learning classifier trained on over 100,000 labelled lettuce signals, the platform can score and categorise iceberg lettuces with high accuracy (>98%). Furthermore, novel analysis functions have been developed to map lettuce size distribution across the field, from which global positioning system (GPS)-tagged harvest regions can be derived, enabling growers and farmers to plan precise harvest strategies and estimate marketability before harvest.
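As a rough illustration of the detection-and-grading workflow described above, the sketch below slides a small CNN classifier over an NDVI mosaic and assigns each window a size category. The window size, network architecture, and number of size categories are illustrative assumptions and do not reproduce the published AirSurf-Lettuce code.

```python
# Minimal sketch of a sliding-window lettuce classifier over an NDVI mosaic.
# Window size, network shape, and the three size categories are assumptions,
# not the AirSurf-Lettuce implementation.
import numpy as np
import tensorflow as tf

WINDOW = 20          # assumed crop size (pixels) around each candidate lettuce
CATEGORIES = 3       # e.g. small / medium / large heads (assumption)

def build_classifier() -> tf.keras.Model:
    """Small CNN that scores a WINDOW x WINDOW NDVI patch."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, WINDOW, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(CATEGORIES, activation="softmax"),
    ])

def classify_field(ndvi: np.ndarray, model: tf.keras.Model, stride: int = WINDOW):
    """Slide over the NDVI mosaic and return (row, col, size_class) per window."""
    patches, coords = [], []
    for r in range(0, ndvi.shape[0] - WINDOW + 1, stride):
        for c in range(0, ndvi.shape[1] - WINDOW + 1, stride):
            patches.append(ndvi[r:r + WINDOW, c:c + WINDOW, None])
            coords.append((r, c))
    preds = model.predict(np.stack(patches), verbose=0)
    return [(r, c, int(p.argmax())) for (r, c), p in zip(coords, preds)]

if __name__ == "__main__":
    field = np.random.rand(200, 200).astype("float32")   # stand-in NDVI tile
    model = build_classifier()                            # untrained demo model
    detections = classify_field(field, model)
    print(f"{len(detections)} windows scored; first: {detections[0]}")
```

In a real pipeline the window coordinates would be mapped back to the mosaic's georeference so that each size class can be aggregated into GPS-tagged harvest regions, as the abstract describes.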


2020 ◽  
Author(s):  
Sana Syed ◽  
Lubaina Ehsan ◽  
Aman Shrivastava ◽  
Saurav Sengupta ◽  
Marium Khan ◽  
...  

Objectives: Striking histopathological overlap between distinct but related conditions poses a significant diagnostic challenge. There is a major clinical need to develop computational methods enabling clinicians to translate heterogeneous biomedical images into accurate and quantitative diagnostics. This need is particularly salient for small bowel enteropathies: Environmental Enteropathy (EE) and Celiac Disease (CD). We built upon our preliminary analysis by developing an artificial intelligence (AI)-based image analysis platform utilizing deep learning convolutional neural networks (CNNs) for these enteropathies. Methods: Data for secondary analysis were obtained from three primary studies at different sites. The image analysis platform for EE and CD was developed using CNNs (ResNet and a custom shallow CNN). Gradient-weighted Class Activation Mappings (Grad-CAMs) were used to visualize the decision-making process of the models. A team of medical experts reviewed both the stain-color-normalized images (normalization was performed for bias reduction) and the Grad-CAM visualizations, to confirm structural preservation and biological relevance, respectively. Results: 461 high-resolution biopsy images from 150 children were acquired. Median age (interquartile range) was 37.5 (19.0 to 121.5) months with a roughly equal sex distribution: 77 males (51.3%). ResNet50 and the shallow CNN demonstrated 98% and 96% case-detection accuracy, respectively, which increased to 98.3% with an ensemble. Grad-CAMs demonstrated the ability of the models to learn distinct microscopic morphological features. Conclusion: Our AI-based image analysis platform demonstrated high classification accuracy for small bowel enteropathies. It identified biologically relevant microscopic features, emulated the decision-making process of human pathologists, performed even in suboptimal computational environments, and can be modified to further improve disease classification accuracy. The Grad-CAMs employed illuminated the otherwise black box of deep learning in medicine, allowing for increased physician confidence in adopting these new technologies in clinical practice.
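As a hedged sketch of how Grad-CAM heatmaps of this kind can be produced for a Keras/TensorFlow CNN, the snippet below computes a class-activation map from the gradients of a class score with respect to the last convolutional feature maps. The demonstration model (an untrained ResNet50) and the layer name conv5_block3_out are assumptions for illustration; they stand in for the fine-tuned biopsy classifiers described above.

```python
# Minimal Grad-CAM sketch for a Keras CNN classifier, in the spirit of the
# visualizations described above; the model and layer name are assumptions.
import numpy as np
import tensorflow as tf

def grad_cam(model: tf.keras.Model, image: np.ndarray,
             conv_layer_name: str, class_index=None) -> np.ndarray:
    """Return a heatmap (h x w, values in [0, 1]) for one image (H, W, C)."""
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]

    grads = tape.gradient(class_score, conv_out)   # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pool the grads
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)
    cam = tf.nn.relu(cam)                          # keep positive evidence only
    cam /= (tf.reduce_max(cam) + 1e-8)             # normalize to [0, 1]
    return cam.numpy()

if __name__ == "__main__":
    # Demo with a randomly initialized ResNet50; a biopsy model would be fine-tuned.
    model = tf.keras.applications.ResNet50(weights=None)
    img = np.random.rand(224, 224, 3).astype("float32")
    heatmap = grad_cam(model, img, conv_layer_name="conv5_block3_out")
    print("Grad-CAM heatmap shape:", heatmap.shape)
```

The resulting low-resolution heatmap is typically upsampled and overlaid on the (stain-normalized) biopsy image so that pathologists can judge whether the highlighted regions are biologically relevant.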


Author(s):  
Dinesh Pothineni ◽  
Martin R. Oswald ◽  
Jan Poland ◽  
Marc Pollefeys

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract: Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically large number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and support batch processing of image datasets. Results: On the boundary between computer vision and plant science, we apply deep learning methods based on convolutional neural networks to support the maize phenotyping workflow. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking RGB images of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. For the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively. Conclusion: Maize-IAS is easy to use and requires no expert knowledge of computer vision or deep learning. All functions support batch processing, enabling automated, labor-saving recording, measurement, and quantitative analysis of maize growth traits on large datasets. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, which in turn demonstrates the feasibility of applying AI technology in agriculture and plant science.
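For illustration only, the sketch below computes two of the simpler traits named above (projection area and plant height) from a binary plant mask, assuming a known pixel-to-centimetre scale. Maize-IAS itself derives its traits with CNN-based processing; the mask source, function names, and scale used here are hypothetical.

```python
# Simplified illustration of two trait measurements (Projection area, Height)
# from a boolean plant mask. This is not the Maize-IAS implementation; the
# pixel-to-cm scale and the demo mask are assumptions.
import numpy as np

def projection_area(mask: np.ndarray, cm_per_pixel: float) -> float:
    """Projected plant area in cm^2 from a boolean mask (True = plant pixel)."""
    return float(mask.sum()) * cm_per_pixel ** 2

def plant_height(mask: np.ndarray, cm_per_pixel: float) -> float:
    """Vertical extent of the plant in cm (top-most to bottom-most plant row)."""
    rows = np.flatnonzero(mask.any(axis=1))
    if rows.size == 0:
        return 0.0
    return float(rows[-1] - rows[0] + 1) * cm_per_pixel

if __name__ == "__main__":
    demo = np.zeros((100, 60), dtype=bool)
    demo[20:90, 25:35] = True            # stand-in for a segmented maize plant
    print("area  (cm^2):", projection_area(demo, cm_per_pixel=0.5))
    print("height (cm):", plant_height(demo, cm_per_pixel=0.5))
```

Batch processing of a whole trial then amounts to running such measurements over every segmented image and aggregating the per-plant values, which is the labor-saving workflow the abstract emphasizes.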


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Uzair Khan ◽  
Sidike Paheding ◽  
Colin Elkin ◽  
Vijay Devabhaktuni

Biofouling ◽  
2021 ◽  
pp. 1-10
Author(s):  
Zhijing Wan ◽  
Ben T. MacVicar ◽  
Shea Wyatt ◽  
Diana E. Varela ◽  
Rajkumar Padmawar ◽  
...  

Author(s):  
Zhichao Liu ◽  
Luhong Jin ◽  
Jincheng Chen ◽  
Qiuyu Fang ◽  
Sergey Ablameyko ◽  
...  
