Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Shazia Akbar ◽  
Mohammad Peikari ◽  
Sherine Salama ◽  
Azadeh Yazdan Panah ◽  
Sharon Nofech-Mozes ◽  
...  

Abstract Aims The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has been shown to be predictive of overall survival and is composed of two key metrics: a qualitative assessment of lymph nodes and the percentage of invasive or in situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed by eye-balling routine histopathology slides to estimate the proportion of tumour cells within the TB. With advances in the production of digitized slides and the increasing availability of slide scanners in pathology laboratories, there is potential to measure TC with automated algorithms at greater precision and accuracy. Methods We describe two methods for automated TC scoring: 1) a traditional approach to image analysis development, whereby we mimic the pathologists' workflow, and 2) a recent development in artificial intelligence in which features are learned automatically by deep neural networks from image data alone. Results We show strong agreement between automated and manual analysis of digital slides. Agreement between our trained deep neural networks and the experts in this study (0.82) approaches the inter-rater agreement between pathologists (0.89). We also reveal properties that are captured when we apply deep neural networks to whole-slide images, and discuss the potential of such visualisations to improve TC assessment in the future. Conclusions TC scoring can be successfully automated by leveraging recent advances in artificial intelligence, thereby alleviating the burden of manual analysis.
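The abstract reports agreement values (0.82 automated vs. expert; 0.89 between pathologists) without naming the statistic used. Below is a minimal sketch, assuming a quadratically weighted Cohen's kappa on TC percentages binned into 10% intervals; the scores are hypothetical placeholders, not data from the paper.

```python
# Hedged sketch: compare automated and manual tumour-cellularity (TC) scores
# with a chance-corrected agreement statistic. The choice of Cohen's kappa,
# the binning, and all scores below are illustrative assumptions.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def bin_tc(scores, width=10):
    """Bin continuous TC percentages (0-100) into categorical intervals."""
    return (np.asarray(scores) // width).astype(int)

pathologist_tc = [5, 20, 45, 70, 90, 15, 60]   # hypothetical manual TC scores (%)
model_tc       = [10, 25, 40, 70, 85, 15, 55]  # hypothetical automated TC scores (%)

kappa = cohen_kappa_score(bin_tc(pathologist_tc), bin_tc(model_tc), weights="quadratic")
print(f"quadratically weighted kappa: {kappa:.2f}")
```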


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Fuyong Xing ◽  
Yuanpu Xie ◽  
Xiaoshuang Shi ◽  
Pingjun Chen ◽  
Zizhao Zhang ◽  
...  

Abstract Background Nucleus or cell detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored to specific datasets, and their applicability to other microscopy image data remains unknown. Some existing studies train and evaluate deep neural networks on multiple microscopy datasets in passing, but several critical, open questions remain to be addressed. Results We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that, for a specific target dataset, training with images from the same type of organ is usually necessary for nucleus detection. Although images can be visually similar due to a shared staining technique and imaging protocol, deep models learned on images from different organs might not deliver desirable results and would require fine-tuning to be on a par with models trained on target data. We also observe that training on a mixture of target and non-target data does not always yield higher nucleus-detection accuracy; proper data manipulation during model training may be required to achieve good performance. Conclusions We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report several significant findings, some of which might not have been reported in previous studies. The performance analysis and observations should be helpful for nucleus detection in microscopy images.
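As an illustration of the pixel-to-pixel regression idea named in the abstract, here is a minimal sketch: a small fully convolutional network predicts a per-pixel proximity map, and nucleus centres are read off as thresholded local maxima. The architecture, threshold, and peak-finding are illustrative assumptions, not the authors' exact model.

```python
# Hedged sketch of fully convolutional regression for nucleus detection:
# the network outputs a proximity map; detections are its local maxima.
import torch
import torch.nn as nn

class FCNRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # single-channel proximity map
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def detect_nuclei(prox_map, threshold=0.5):
    """Keep pixels that are local maxima above a confidence threshold."""
    pooled = nn.functional.max_pool2d(prox_map, 3, stride=1, padding=1)
    peaks = (prox_map == pooled) & (prox_map > threshold)
    return peaks.nonzero()  # (batch, channel, y, x) coordinates

model = FCNRegressor()
image = torch.rand(1, 3, 256, 256)  # hypothetical stained-tissue patch
centres = detect_nuclei(torch.sigmoid(model(image)))
```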


2018 ◽  
Vol 15 (9) ◽  
pp. 1451-1455 ◽  
Author(s):  
Grant J. Scott ◽  
Kyle C. Hagan ◽  
Richard A. Marcum ◽  
James Alex Hurt ◽  
Derek T. Anderson ◽  
...  

2021 ◽  
Author(s):  
Jason Munger ◽  
Carlos W. Morato

This project explores how image data obtained from AV cameras can provide a model with more spatial information than can be learned from simple RGB images alone. This paper leverages advances in deep neural networks to demonstrate steering-angle prediction for autonomous vehicles through an end-to-end multi-channel CNN model using only the image data provided by an onboard camera. The image data are first processed by existing neural networks to produce pixel-wise segmentation and depth estimates, which are then input to a new neural network along with the raw image to provide enhanced feature signals from the environment. Various input combinations for the multi-channel CNN are evaluated, and their effectiveness is compared with single-input CNNs using the individual data streams. The model with the most accurate steering predictions is identified and its performance compared with previous neural networks.
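A minimal sketch of the channel-stacking idea described above: RGB, segmentation, and depth channels are concatenated into one tensor, and a small CNN regresses a single steering angle. The layer sizes and input resolution are illustrative assumptions; the paper's actual architecture is not reproduced here.

```python
# Hedged sketch of a multi-channel steering CNN: stack modalities channel-wise
# and regress one scalar. Dimensions and depths are placeholders.
import torch
import torch.nn as nn

class MultiChannelSteeringCNN(nn.Module):
    def __init__(self, in_channels=5):  # 3 RGB + 1 segmentation + 1 depth
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(48, 50), nn.ReLU(), nn.Linear(50, 1),
        )

    def forward(self, rgb, seg, depth):
        x = torch.cat([rgb, seg, depth], dim=1)  # stack modalities channel-wise
        return self.head(self.features(x))       # predicted steering angle

model = MultiChannelSteeringCNN()
rgb = torch.rand(1, 3, 66, 200)    # hypothetical camera frame
seg = torch.rand(1, 1, 66, 200)    # hypothetical segmentation channel
depth = torch.rand(1, 1, 66, 200)  # hypothetical depth-estimate channel
angle = model(rgb, seg, depth)
```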


2018 ◽  
Author(s):  
Titus Josef Brinker ◽  
Achim Hekler ◽  
Christof von Kalle

BACKGROUND In recent months, multiple publications have demonstrated the use of convolutional neural networks (CNN) to classify images of skin cancer as precisely as dermatologists. These CNNs failed to outperform the winner of the International Symposium on Biomedical Imaging (ISBI) 2016 challenge in terms of average precision, however, so the technical progress represented by these studies is limited. In addition, the available reports are difficult to reproduce due to incomplete descriptions of training procedures and the use of proprietary image databases. These factors prevent the comparison of various CNN classifiers on equal terms. OBJECTIVE To demonstrate the training of an image-classifier CNN that outperforms the winner of the ISBI 2016 challenge by using open-source images exclusively. METHODS A detailed description of the training procedure is reported, and the images and test sets used are fully disclosed to ensure the reproducibility of our work. RESULTS Our CNN classifier outperforms all recent attempts to classify the original ISBI 2016 challenge test data (full set of 379 test images), with an average precision of 0.709 (vs 0.637 for the ISBI winner) and an area under the receiver operating characteristic curve of 0.85. CONCLUSIONS This work illustrates the potential for improving skin cancer classification with enhanced training procedures for CNNs, while avoiding the use of costly equipment or proprietary image data.
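For reference, an average-precision figure like the 0.709 reported above is typically computed as below; this sketch uses the standard scikit-learn metrics on hypothetical labels and scores, not the authors' evaluation code.

```python
# Hedged sketch: average precision and ROC AUC for a binary
# melanoma-vs-benign classifier. Labels and scores are hypothetical.
from sklearn.metrics import average_precision_score, roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                 # hypothetical ground truth (1 = melanoma)
y_score = [0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1] # hypothetical CNN probabilities

print("average precision:", average_precision_score(y_true, y_score))
print("ROC AUC:          ", roc_auc_score(y_true, y_score))
```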


Author(s):  
Xiaoyang Liu ◽  
Zhigang Zeng

Abstract The paper presents memristor crossbar architectures for implementing layers in deep neural networks, including the fully connected layer, the convolutional layer, and the pooling layer. The crossbars achieve positive and negative weight values and approximately realize various nonlinear activation functions. The layers constructed from these crossbars are then used to build the memristor-based multi-layer neural network (MMNN) and the memristor-based convolutional neural network (MCNN). Two kinds of in-situ weight-update schemes, fixed-voltage update and approximately linear update, are used to train the networks. Considering variations that result from the inherent characteristics of memristors and from errors in the programming voltages, the robustness of the MMNN and MCNN to these variations is analyzed. Simulation results on standard datasets show that deep neural networks (DNNs) built from memristor crossbars perform satisfactorily in pattern recognition tasks and show a degree of robustness to memristor variations.
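A minimal sketch of how signed weights are commonly realised on crossbars, assuming the differential-pair scheme (an effective weight W = G_pos − G_neg of two non-negative conductances) and a simple multiplicative noise model for device variation; the paper's exact circuit and noise characteristics are not reproduced here.

```python
# Hedged sketch: a crossbar vector-matrix multiply with signed weights built
# from two non-negative conductance arrays, plus illustrative device noise.
import numpy as np

rng = np.random.default_rng(0)

def crossbar_layer(x, g_pos, g_neg, variation=0.05):
    """Noisy crossbar multiply: column currents sum the conductance products."""
    noisy_pos = g_pos * (1 + variation * rng.standard_normal(g_pos.shape))
    noisy_neg = g_neg * (1 + variation * rng.standard_normal(g_neg.shape))
    return x @ (noisy_pos - noisy_neg)  # effective signed weights

# Map a signed weight matrix onto two non-negative conductance arrays.
w = rng.standard_normal((4, 3))
g_pos, g_neg = np.maximum(w, 0), np.maximum(-w, 0)

x = rng.random(4)  # input voltages (normalised, hypothetical)
y = crossbar_layer(x, g_pos, g_neg)
```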


Symmetry ◽  
2022 ◽  
Vol 14 (1) ◽  
pp. 151
Author(s):  
Xintao Duan ◽  
Lei Li ◽  
Yao Su ◽  
Wenxin Wang ◽  
En Zhang ◽  
...  

Data hiding is the technique of embedding data into video or audio media. With the development of deep neural networks (DNNs), the quality of images generated by novel DNN-based data hiding methods is improving. However, there is still room to improve the similarity between original images and the images generated by models trained with existing hiding frameworks, and it is hard for the receiver to tell whether a container image comes from the real sender. We propose a framework, named difference image grafting deep hiding (DIGDH), that introduces a key_img to exploit the over-fitting characteristic of DNNs, combined symmetrically with difference-image grafting. The key_img makes it easy to identify whether a container image comes from the real sender. Experimental results show that, without changing the network structures, models trained with the proposed framework generate images with higher similarity to the original cover and secret images. According to analysis with the steganalysis tool StegExpose, the container images generated by the hiding model trained with the proposed framework are also closer to a random distribution.
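For orientation, a minimal sketch of the generic DNN image-hiding setup such frameworks build on: a hiding network maps a (cover, secret) pair to a container image, and the difference image is the residual between container and cover. The toy network is a placeholder; DIGDH's key_img and grafting mechanics are not reproduced here.

```python
# Hedged sketch of generic DNN-based image hiding: (cover, secret) -> container,
# with the difference image as the residual. Purely illustrative, not DIGDH.
import torch
import torch.nn as nn

class HidingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # container in [0, 1]
        )

    def forward(self, cover, secret):
        return self.net(torch.cat([cover, secret], dim=1))

hide = HidingNet()
cover, secret = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
container = hide(cover, secret)
difference = container - cover  # residual carrying the hidden payload
```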


2021 ◽  
Author(s):  
Gregory Rutkowski ◽  
Ilgar Azizov ◽  
Evan Unmann ◽  
Marcin Dudek ◽  
Brian Arthur Grimes

As the complexity of microfluidic experiments and the associated image data volumes scale, traditional feature extraction approaches begin to struggle with both detection accuracy and analysis-pipeline throughput. Deep neural networks trained to detect certain objects are rapidly emerging as data-gathering tools that can match or outperform the analysis capabilities of the conventional methods used in microfluidic emulsion science. We demonstrate that various convolutional neural networks can be trained and used as droplet detectors in a wide variety of microfluidic systems. A generalized microfluidic droplet training and validation dataset was developed and used to tune two versions of the You Only Look Once model (YOLOv3/YOLOv5) as well as Faster R-CNN. Each model was used to detect droplets in mono- and polydisperse flow-cell systems. The detection accuracy of each model shows excellent agreement with an implementation of the Hough transform as well as with relevant ImageJ plugins. The models were also successfully used as droplet detectors on non-microfluidic micrographs, even though such data were not included in the training set. The models outperformed the traditional methods in more complex, porous-media-simulating chip architectures, with a significant speedup in per-frame analysis times. Implementing these neural networks as the primary detectors in such microfluidic systems not only makes data pipelining more efficient but also opens the door to live detection and the development of autonomous microfluidic experimental platforms.
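As a point of comparison, the traditional baseline mentioned above can be sketched with OpenCV's Hough circle transform; all parameter values below are illustrative assumptions that would need tuning for a given flow-cell imaging setup.

```python
# Hedged sketch of the classical droplet-detection baseline: Hough circle
# transform on a denoised grayscale micrograph. File name and parameters
# are hypothetical placeholders.
import cv2

img = cv2.imread("droplets.png", cv2.IMREAD_GRAYSCALE)  # hypothetical micrograph
img = cv2.medianBlur(img, 5)                            # suppress speckle noise

circles = cv2.HoughCircles(
    img, cv2.HOUGH_GRADIENT,
    dp=1.2,        # accumulator resolution ratio
    minDist=20,    # minimum centre-to-centre spacing (px)
    param1=100,    # Canny edge threshold
    param2=30,     # accumulator vote threshold
    minRadius=5, maxRadius=60,
)
if circles is not None:
    print(f"detected {circles.shape[1]} droplets")
```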


Molecules ◽  
2020 ◽  
Vol 25 (17) ◽  
pp. 3952
Author(s):  
Javed Iqbal ◽  
Martin Vogt ◽  
Jürgen Bajorath

Activity landscape (AL) models are used for visualizing and interpreting structure–activity relationships (SARs) in compound datasets. Accordingly, ALs are designed to present chemical similarity and compound potency information in context. Different two- or three-dimensional (2D or 3D) AL representations have been introduced. For SAR analysis, 3D AL models are particularly intuitive. In these models, an interpolated potency surface is added as a third dimension to a 2D projection of chemical space, so AL topology can be associated with characteristic SAR features. Going beyond visualization and a qualitative assessment of SARs, it would be very helpful to compare 3D ALs of different datasets in more quantitative terms; however, quantitative AL analysis is still in its infancy. Recently, it has been shown that 3D AL models with pre-defined topologies can be correctly classified using machine learning, with classification based on AL image features learned by convolutional neural networks. We have therefore further investigated image analysis for the quantitative comparison of 3D ALs and devised an approach to determine (dis)similarity relationships for ALs representing different compound datasets. Herein, we report this approach and demonstrate proof-of-principle. The methodology makes it possible to computationally compare 3D ALs and quantify topological differences reflecting varying SAR information content. For SAR exploration in drug design, this adds a quantitative measure of AL (dis)similarity to graphical analysis.
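A minimal sketch of the image-based comparison idea: extract features from rendered 3D AL images with a pretrained CNN and score pairwise (dis)similarity, here by cosine similarity. The ResNet-18 backbone, preprocessing, and similarity measure are assumptions for illustration, not the authors' exact protocol.

```python
# Hedged sketch: CNN features for rendered activity-landscape images, compared
# by cosine similarity. Backbone and file names are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # expose the 512-d feature vector
backbone.eval()

prep = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def al_features(path):
    """Feature vector for one rendered 3D activity-landscape image."""
    with torch.no_grad():
        return backbone(prep(Image.open(path).convert("RGB")).unsqueeze(0))

# Hypothetical rendered AL images for two compound datasets.
f1, f2 = al_features("al_dataset1.png"), al_features("al_dataset2.png")
similarity = torch.nn.functional.cosine_similarity(f1, f2).item()
print(f"AL similarity: {similarity:.3f}")
```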

