StainNet: A Fast and Robust Stain Normalization Network

2021 · Vol 8
Author(s): Hongtao Kang, Die Luo, Weihua Feng, Shaoqun Zeng, Tingwei Quan, ...

Stain normalization often refers to transferring the color distribution of a target image to a source image and has been widely used in biomedical image analysis. Conventional stain normalization is usually achieved through a pixel-by-pixel color mapping model that depends on a single reference image, which makes it hard to transform style accurately between image datasets. In principle, this difficulty can be well solved by deep learning-based methods; however, their complicated structure results in low computational efficiency and artifacts in the style transformation, which has restricted their practical application. Here, we use distillation learning to reduce the complexity of deep learning methods and propose a fast and robust network, StainNet, to learn the color mapping between the source image and the target image. StainNet can learn the color mapping relationship from a whole dataset and adjusts color values in a pixel-to-pixel manner. This pixel-to-pixel manner restricts the network size and avoids artifacts in the style transformation. Results on cytopathology and histopathology datasets show that StainNet achieves performance comparable to deep learning-based methods. Computational results demonstrate that StainNet is more than 40 times faster than StainGAN and can normalize a 100,000 × 100,000 whole slide image in 40 s.
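
Because such a network adjusts colors pixel by pixel, its core can be expressed as a stack of 1×1 convolutions that never mix information across pixels. Below is a minimal PyTorch sketch of a pixel-wise color mapping network trained against a teacher's output, as in distillation; the layer count, width, and loss are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class PixelColorMapNet(nn.Module):
    """Sketch of a pixel-wise color mapping network: every layer is a
    1x1 convolution, so each output pixel depends only on the color of
    the corresponding input pixel (no spatial mixing, no artifacts)."""
    def __init__(self, channels: int = 32, n_layers: int = 3):
        super().__init__()
        layers, in_ch = [], 3
        for _ in range(n_layers - 1):
            layers += [nn.Conv2d(in_ch, channels, kernel_size=1), nn.ReLU()]
            in_ch = channels
        layers.append(nn.Conv2d(in_ch, 3, kernel_size=1))  # back to RGB
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Distillation-style objective: match a teacher's normalized output
# (here random tensors stand in for real image batches).
student = PixelColorMapNet()
source = torch.rand(1, 3, 256, 256)          # source-stain image
teacher_output = torch.rand(1, 3, 256, 256)  # e.g. from a GAN teacher
loss = nn.functional.l1_loss(student(source), teacher_output)
```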

2021 · Vol 11 (1)
Author(s): Shan Guleria, Tilak U. Shah, J. Vincent Pulido, Matthew Fasullo, Lubaina Ehsan, ...

Abstract Probe-based confocal laser endomicroscopy (pCLE) allows for real-time diagnosis of dysplasia and cancer in Barrett’s esophagus (BE) but is limited by low sensitivity. Even the gold standard of histopathology is hindered by poor agreement between pathologists. We deployed deep-learning-based image and video analysis in order to improve diagnostic accuracy of pCLE videos and biopsy images. Blinded experts categorized biopsies and pCLE videos as squamous, non-dysplastic BE, or dysplasia/cancer, and deep learning models were trained to classify the data into these three categories. Biopsy classification was conducted using two distinct approaches: a patch-level model and a whole-slide-image-level model. Gradient-weighted class activation maps (Grad-CAMs) were extracted from the pCLE and biopsy models in order to determine the tissue structures deemed relevant by the models. 1970 pCLE videos, 897,931 biopsy patches, and 387 whole-slide images were used to train, test, and validate the models. In pCLE analysis, models achieved a high sensitivity for dysplasia (71%) and an overall accuracy of 90% for all classes. For biopsies at the patch level, the model achieved a sensitivity of 72% for dysplasia and an overall accuracy of 90%. The whole-slide-image-level model achieved a sensitivity of 90% for dysplasia and 94% overall accuracy. Grad-CAMs for all models showed activation in medically relevant tissue regions. Our deep learning models achieved high diagnostic accuracy for both pCLE-based and histopathologic diagnosis of esophageal dysplasia and its precursors, similar to human accuracy in prior studies. These machine learning approaches may improve the accuracy and efficiency of current screening protocols.
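
As one concrete illustration of how patch-level outputs can feed a whole-slide-image-level call, the sketch below averages per-patch class probabilities and takes the argmax. This simple aggregation rule and the function names are assumptions for illustration, not the aggregation the authors used.

```python
import numpy as np

def slide_level_prediction(patch_probs: np.ndarray,
                           classes=("squamous", "NDBE", "dysplasia")):
    """Aggregate per-patch class probabilities (n_patches x 3) into one
    slide-level label by averaging, a common baseline aggregation."""
    mean_probs = patch_probs.mean(axis=0)
    return classes[int(mean_probs.argmax())], mean_probs

# Fake probabilities for 500 patches of one slide, just to run the sketch.
patch_probs = np.random.dirichlet(np.ones(3), size=500)
label, probs = slide_level_prediction(patch_probs)
print(label, probs)
```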


2021 · Vol 10 (1)
Author(s): Xinyang Li, Guoxun Zhang, Hui Qiao, Feng Bao, Yue Deng, ...

Abstract The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
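
The idea of a saliency constraint can be sketched as an extra loss term that penalizes content (foreground) regions appearing or vanishing during translation. The snippet below uses a crude intensity threshold as the saliency proxy; both the proxy and the loss form are simplifying assumptions, not UTOM's exact formulation.

```python
import torch

def saliency_mask(img: torch.Tensor, thresh: float = 0.5) -> torch.Tensor:
    """Crude saliency proxy: pixels brighter than a threshold count as content."""
    return (img.mean(dim=1, keepdim=True) > thresh).float()

def saliency_constraint_loss(src: torch.Tensor,
                             translated: torch.Tensor) -> torch.Tensor:
    """Penalize content regions that appear or vanish during translation,
    which is what causes distortions in unpaired (CycleGAN-style) training."""
    return torch.mean(torch.abs(saliency_mask(src) - saliency_mask(translated)))

src = torch.rand(4, 3, 128, 128)   # source-domain batch
fake = torch.rand(4, 3, 128, 128)  # generator output
extra_loss = saliency_constraint_loss(src, fake)  # added to the GAN losses
```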


2021 · Vol 22 (1)
Author(s): Changyong Li, Yongxian Fan, Xiaodong Cai

Abstract Background With the development of deep learning (DL), more and more DL-based methods have been proposed and have achieved state-of-the-art performance in biomedical image segmentation. However, these methods are usually complex and require powerful computing resources, which is impractical in clinical settings. It is therefore important to develop accurate DL-based biomedical image segmentation methods that run under resource-constrained computing. Results A lightweight multiscale network called PyConvU-Net is proposed to work under low-resource computing. In strictly controlled experiments, PyConvU-Net performed well on three biomedical image segmentation tasks while using the fewest parameters. Conclusions Our experimental results preliminarily demonstrate the potential of the proposed PyConvU-Net for biomedical image segmentation under resource-constrained computing.
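
A pyramidal convolution replaces a plain convolution with parallel branches of increasing kernel size, using grouped convolutions so the parameter count stays small. A minimal PyTorch sketch follows; the branch widths, kernel sizes, and group counts are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PyConvBlock(nn.Module):
    """Sketch of a pyramidal convolution block: parallel branches with
    increasing kernel sizes and group counts, concatenated channel-wise.
    Larger kernels use more groups, so parameters stay low."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        b = out_ch // 4  # channels per branch (out_ch assumed divisible by 4)
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, b, kernel_size=3, padding=1, groups=1),
            nn.Conv2d(in_ch, b, kernel_size=5, padding=2, groups=2),
            nn.Conv2d(in_ch, b, kernel_size=7, padding=3, groups=4),
            nn.Conv2d(in_ch, b, kernel_size=9, padding=4, groups=8),
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# A PyConvU-Net would use such blocks in place of plain U-Net convolutions.
x = torch.rand(1, 16, 64, 64)
print(PyConvBlock(16, 32)(x).shape)  # torch.Size([1, 32, 64, 64])
```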


Author(s): Kevin Faust, Michael K Lee, Anglin Dent, Clare Fiala, Alessia Portante, ...

Abstract Background Modern molecular pathology workflows in neuro-oncology rely heavily on the integration of morphologic and immunohistochemical patterns for analysis, classification, and prognostication. However, despite the recent emergence of digital pathology platforms and artificial intelligence-driven computational image analysis tools, automating the integration of histomorphologic information found across these multiple studies is challenged by the large file sizes of whole slide images (WSIs) and by shifts/rotations in tissue sections introduced during slide preparation. Methods To address this, we develop a workflow that couples different computer vision tools, including scale-invariant feature transform (SIFT) and deep learning, to efficiently align and integrate histopathological information found across multiple independent studies. We highlight the utility and automation potential of this workflow in the molecular subclassification of diffuse gliomas and in the discovery of previously unappreciated spatial patterns. Results First, we show that a SIFT-driven computer vision workflow was effective at automated WSI alignment in a cohort of 107 randomly selected surgical neuropathology cases (97/107 (91%) showing appropriate matches, AUC = 0.96). This alignment allows our AI-driven diagnostic workflow not only to differentiate brain tumor types but also to integrate and carry out molecular subclassification of diffuse gliomas using relevant immunohistochemical biomarkers (IDH1-R132H, ATRX). To highlight the discovery potential of this workflow, we also examined spatial distributions in tumors showing heterogeneous expression of the proliferation marker MIB1 and of Olig2. This analysis helped uncover an interesting and previously unappreciated association between Olig2-positive and proliferative areas in some gliomas (r = 0.62). Conclusion This efficient neuropathologist-inspired workflow provides a generalizable approach to help automate a variety of advanced immunohistochemically compatible diagnostic and discovery exercises in surgical neuropathology and neuro-oncology.
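
A SIFT-driven alignment step of the kind described can be sketched with OpenCV: detect and match keypoints between two downsampled slide images, fit an affine transform with RANSAC, and warp one onto the other. The ratio-test and RANSAC parameters below are common defaults, not values from the study.

```python
import cv2
import numpy as np

def align_wsi_thumbnails(moving: np.ndarray, fixed: np.ndarray) -> np.ndarray:
    """Sketch of SIFT-based alignment of two (downsampled) slide images:
    match keypoints, estimate an affine transform with RANSAC, then warp."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(moving, None)
    kp2, des2 = sift.detectAndCompute(fixed, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Rotation + translation + uniform scale is enough for section shifts/rotations.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = fixed.shape[:2]
    return cv2.warpAffine(moving, M, (w, h))
```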


Author(s): Hao Zheng, Lin Yang, Jianxu Chen, Jun Han, Yizhe Zhang, ...

Deep learning has been applied successfully to many biomedical image segmentation tasks. However, due to the diversity and complexity of biomedical image data, manual annotation for training common deep learning models is very time-consuming and labor-intensive, especially because image data can usually be annotated well only by biomedical experts. Human experts are often involved in a long and iterative annotation process, as in active-learning-type annotation schemes. In this paper, we propose representative annotation (RA), a new deep learning framework for reducing annotation effort in biomedical image segmentation. RA uses unsupervised networks for feature extraction and selects representative image patches for annotation in the latent space of learned feature descriptors, which implicitly characterizes the underlying data while minimizing redundancy. A fully convolutional network (FCN) is then trained on the annotated selected image patches for image segmentation. Our RA scheme offers three compelling advantages: (1) it leverages the ability of deep neural networks to learn better representations of image data; (2) it performs one-shot selection for manual annotation and frees annotators from the iterative process of common active-learning-based annotation schemes; (3) it can be deployed to 3D images with simple extensions. We evaluate our RA approach on three datasets (two 2D and one 3D) and show that our framework yields competitive segmentation results compared with state-of-the-art methods.
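
One way to realize one-shot representative selection is to cluster the learned feature descriptors and annotate the patch nearest each cluster center. The sketch below uses scikit-learn's KMeans on precomputed embeddings; the clustering choice and annotation budget are illustrative assumptions rather than the exact RA selection procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_representative_patches(features: np.ndarray, budget: int) -> np.ndarray:
    """Sketch of one-shot representative selection: cluster patch embeddings
    (e.g. from an unsupervised autoencoder) and pick the patch closest to
    each cluster center, covering the data while minimizing redundancy."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(features)
    chosen = [int(np.argmin(np.linalg.norm(features - c, axis=1)))
              for c in km.cluster_centers_]
    return np.array(chosen)  # indices of patches to send for annotation

feats = np.random.rand(1000, 64)  # fake latent descriptors of 1000 patches
to_annotate = select_representative_patches(feats, budget=50)
```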


2021 · Vol 11 (1)
Author(s): Liding Yao, Xiaojun Guan, Xiaowei Song, Yanbin Tan, Chun Wang, ...

Abstract Rib fracture detection is time-consuming and demanding work for radiologists. This study aimed to introduce a novel deep-learning-based rib fracture detection system that can help radiologists diagnose rib fractures in chest computed tomography (CT) images conveniently and accurately. A total of 1707 patients from a single center were included in this study. We developed a novel rib fracture detection system on chest CT using a three-step algorithm. According to examination time, 1507, 100 and 100 patients were allocated to the training, validation and testing sets, respectively. Free-response ROC analysis was performed to evaluate the sensitivity and false positive rate of the deep learning algorithm. Precision, recall, F1-score, negative predictive value (NPV) and detection and diagnosis time were selected as evaluation metrics to compare the diagnostic efficiency of this system with that of radiologists. A radiologist-only study was used as a benchmark, and a radiologist-model collaboration study was evaluated to assess the model’s clinical applicability. A total of 50,170,399 blocks (fracture blocks, 91,574; normal blocks, 50,078,825) were labelled for training. The F1-score of the rib fracture detection system was 0.890, and its precision, recall and NPV were 0.869, 0.913 and 0.969, respectively. By interacting with this detection system, the F1-scores of the junior and the experienced radiologists improved from 0.796 to 0.925 and from 0.889 to 0.970, respectively; their recall scores increased from 0.693 to 0.920 and from 0.853 to 0.972, respectively. On average, the diagnosis time of radiologists assisted by this detection system was reduced by 65.3 s. The constructed rib fracture detection system performs comparably to an experienced radiologist and is readily available to automatically detect rib fractures in the clinical setting with high efficacy, which could reduce diagnosis time and radiologists’ workload in clinical practice.
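
For reference, the evaluation metrics named above follow directly from block-level confusion counts; the sketch below computes them from hypothetical counts (not the study's data).

```python
def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Precision, recall (sensitivity), F1-score and NPV from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    npv = tn / (tn + fn)
    return {"precision": precision, "recall": recall, "f1": f1, "npv": npv}

# Illustrative counts only, chosen to run the sketch:
print(detection_metrics(tp=900, fp=135, fn=86, tn=2700))
```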


2021
Author(s): AkshatKumar Nigam, Robert Pollice, Mario Krenn, Gabriel dos Passos Gomes, Alan Aspuru-Guzik

Inverse design allows the design of molecules with desirable properties through property optimization. Deep generative models have recently been applied to tackle inverse design, as they possess the ability to optimize molecular properties directly through gradient-based structure modification. While the ability to carry out direct property optimization is promising, using generative deep learning models to solve practical problems requires large amounts of data and is very time-consuming. In this work, we propose STONED, a simple and efficient algorithm for interpolation and exploration in chemical space that is comparable to deep generative models. STONED bypasses the need for large amounts of data and long training times by using string modifications in the SELFIES molecular representation, and achieves comparable performance on typical benchmarks without any training. We demonstrate applications in high-throughput virtual screening for the design of drugs and photovoltaics, and in the construction of chemical paths, allowing for both property- and structure-based interpolation in chemical space. We anticipate our results to be a stepping stone for developing more sophisticated inverse design models and benchmarking tools, ultimately helping generative models achieve wide adoption.
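
The core string-modification step of this approach can be sketched with the selfies Python package: encode a SMILES string to SELFIES, randomly swap tokens drawn from the semantically robust alphabet, and decode back, which always yields a syntactically valid molecule. STONED also uses insertions and deletions; this sketch shows replacement only.

```python
import random
import selfies as sf

def mutate_selfies(smiles: str, n_mutations: int = 1) -> str:
    """Sketch of STONED-style exploration via random SELFIES token
    replacement; every result decodes to a valid molecule, no training."""
    tokens = list(sf.split_selfies(sf.encoder(smiles)))
    alphabet = list(sf.get_semantic_robust_alphabet())
    for _ in range(n_mutations):
        tokens[random.randrange(len(tokens))] = random.choice(alphabet)
    return sf.decoder("".join(tokens))

print(mutate_selfies("CCO", n_mutations=2))  # a nearby molecule in chemical space
```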


2020 · Vol 1 (2) · pp. 44-51
Author(s): Paula Pereira, Tanara Kuhn

For image transfer, various embedding systems exist that work by creating a mosaic image from a source image and recovering it from the target image using some algorithm. In the current study, a method based on a genetic algorithm is proposed for recovering the image from the source image. The genetic algorithm, a search method, is combined with an additional technique to obtain higher robustness and security. The proposed methodology divides the source image into smaller parts, which are fitted into the target image using lossless compression. On the retrieving side, the mosaic image is recovered via the permutation array, which is itself recovered and mapped using a pre-selected key.
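
A genetic search over permutation arrays of this kind might look like the sketch below, which evolves a block-to-block assignment using elitist selection and swap mutation; the fitness function, operators, and parameters are illustrative assumptions, since the abstract does not specify them.

```python
import random
import numpy as np

def fitness(perm, src_blocks, tgt_blocks):
    """Lower is better: total color distance between each source block and
    the target-image block it would replace under this permutation."""
    return sum(np.abs(src_blocks[p] - tgt_blocks[i]).mean()
               for i, p in enumerate(perm))

def ga_block_assignment(src_blocks, tgt_blocks, pop=50, gens=100):
    """Sketch of a genetic search over permutation arrays: keep the best
    half each generation and mutate them by swapping two positions
    (crossover omitted for brevity)."""
    n = len(src_blocks)
    population = [random.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, src_blocks, tgt_blocks))
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = random.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        population = survivors + children
    return population[0]  # best permutation array (the "pre-selected key" content)

src = np.random.rand(16, 8, 8, 3)  # 16 source blocks of 8x8 RGB
tgt = np.random.rand(16, 8, 8, 3)
best_perm = ga_block_assignment(src, tgt)
```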


Circulation · 2020 · Vol 142 (Suppl_3)
Author(s): Masato Shimizu, Shummo Cho, Yoshiki Misu, Mari Ohmori, Ryo Tateishi, ...

Introduction: Takotsubo syndrome (TTS) and acute anterior myocardial infarction (ant-AMI) show very similar 12-lead electrocardiography (ECG) features at onset, and it is often difficult to distinguish them without cardiac catheterization. Differences in the ECG between the two have been studied, but the diagnostic performance of machine learning (deep learning) in separating them had not been investigated. Hypothesis: Deep learning on the 12-lead ECG has high diagnostic performance for distinguishing TTS from ant-AMI at onset. Methods: Fifty consecutive patients with TTS were matched one-to-one with ant-AMI patients randomly by age and gender, for a total of 100 enrolled patients. Patients not in sinus rhythm were excluded. All ECGs were divided into their 12 leads, and 5 heartbeats were extracted from each lead. For each lead, 250 ECG waves of TTS/AMI were sampled as 24-bit bitmap images, and a prediction model was built with a convolutional neural network (CNN: transfer learning using the VGG16 architecture) to distinguish the two diseases in each lead. Next, gradient-weighted class activation mapping (Grad-CAM) was performed to detect the degree and position of convolutional importance within each lead. Results: Lead aVR (mean accuracy 0.748), lead I (0.733), and lead V1 (0.678) were the three leads with the highest accuracy. In lead aVR, Grad-CAM showed strong convolutional activation on the negative T wave in TTS and on the sharp R wave in ant-AMI. In lead I, it spotlighted several parts of the ECG wave in ant-AMI, whereas in TTS the overall wave shape, P-wave onset, and negative T wave showed inverted activation. Conclusions: Deep learning was a powerful tool to distinguish TTS from ant-AMI at onset, and the Grad-CAM method gave new insight into the ECG differences between the two diseases.
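
The transfer-learning setup described (a frozen VGG16 backbone with a small binary head, one model per ECG lead) can be sketched in Keras as follows; the input size, head width, and training details are assumptions, not the study's exact values.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Frozen ImageNet-pretrained VGG16 backbone; only the new head is trained.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # TTS vs anterior AMI
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(beat_images, labels, ...)  # e.g. the 250 beat bitmaps per lead
```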

