Rotation equivariant and invariant neural networks for microscopy image analysis

2019 ◽  
Vol 35 (14) ◽  
pp. i530-i537 ◽  
Author(s):  
Benjamin Chidester ◽  
Tianming Zhou ◽  
Minh N Do ◽  
Jian Ma

Abstract
Motivation: Neural networks have been widely used to analyze high-throughput microscopy images. However, the performance of neural networks can be significantly improved by encoding known invariance for particular tasks. Highly relevant to the goal of automated cell phenotyping from microscopy image data is rotation invariance. Here we consider the application of two schemes for encoding rotation equivariance and invariance in a convolutional neural network for classifying microscopy images: the group-equivariant CNN (G-CNN) and a new architecture with simple, efficient conic convolution. We additionally integrate the 2D discrete Fourier transform (2D-DFT) as an effective means for encoding global rotational invariance. We call our new method the Conic Convolution and DFT Network (CFNet).
Results: We evaluated the efficacy of CFNet and G-CNN against a standard CNN on several image classification tasks, including simulated and real microscopy images of subcellular protein localization, and demonstrated improved performance. We believe CFNet has the potential to improve many high-throughput microscopy image analysis applications.
Availability and implementation: Source code of CFNet is available at: https://github.com/bchidest/CFNet.
Supplementary information: Supplementary data are available at Bioinformatics online.
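The key idea behind combining a rotation-equivariant representation with the 2D-DFT is that rotating the input becomes a circular shift of the equivariant feature maps, and the magnitude of the discrete Fourier transform is invariant to circular shifts. A minimal sketch of that principle (a toy demonstration, not the CFNet implementation):

```python
import numpy as np

# Illustration of the shift-invariance the DFT-magnitude layer exploits:
# a circular shift of a feature map (the effect of rotating the input in
# a rotation-equivariant representation) leaves the DFT magnitude unchanged.
rng = np.random.default_rng(0)
feature_map = rng.random((8, 8))                           # stand-in equivariant feature map
shifted = np.roll(feature_map, shift=(3, 5), axis=(0, 1))  # circularly shifted copy

mag = np.abs(np.fft.fft2(feature_map))      # DFT magnitude of the original
mag_shifted = np.abs(np.fft.fft2(shifted))  # DFT magnitude of the shifted map

print(np.allclose(mag, mag_shifted))        # True: magnitude is shift-invariant
```

Only the phase of the DFT changes under a circular shift, so taking magnitudes discards exactly the component that varies with rotation.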

2019 ◽  
Author(s):  
Heeva Baharlou ◽  
Nicolas P Canete ◽  
Kirstie M Bertram ◽  
Kerrie J Sandgren ◽  
Anthony L Cunningham ◽  
...  

Abstract
Autofluorescence is a long-standing problem that has hindered fluorescence microscopy image analysis. To address this, we have developed a method that identifies and removes autofluorescent signals from multi-channel images post acquisition. We demonstrate the broad utility of this algorithm in accurately assessing protein expression in situ through the removal of interfering autofluorescent signals.
Availability and implementation: https://ellispatrick.github.io/
Contact: [email protected]
Supplementary information: Supplementary Figs. 1–13
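A common heuristic for identifying autofluorescence in multi-channel images is that it appears at similar intensity across spectrally distinct channels, whereas genuine marker signal is channel-specific. A hedged sketch of that idea (for illustration only; the `remove_autofluorescence` helper and its threshold are assumptions, not the authors' published algorithm):

```python
import numpy as np

def remove_autofluorescence(img, threshold=0.5):
    """Flag and suppress pixels bright in ALL channels.

    img: (H, W, C) float array in [0, 1].
    Returns (cleaned copy, boolean mask of flagged pixels).
    """
    af_mask = img.min(axis=-1) > threshold   # bright in every channel
    cleaned = img.copy()
    cleaned[af_mask] = 0.0                   # zero out flagged pixels
    return cleaned, af_mask

# Toy 2-channel image: one genuine-signal pixel, one autofluorescent pixel.
img = np.zeros((2, 2, 2))
img[0, 0, 0] = 0.9        # genuine signal: bright in channel 0 only
img[1, 1, :] = 0.8        # bright in both channels -> autofluorescence
cleaned, mask = remove_autofluorescence(img)
print(int(mask.sum()))    # 1: only the autofluorescent pixel is flagged
```

The channel-specific pixel survives untouched because its minimum across channels is low, while the pixel bright in every channel is suppressed.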


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Fuyong Xing ◽  
Yuanpu Xie ◽  
Xiaoshuang Shi ◽  
Pingjun Chen ◽  
Zizhao Zhang ◽  
...  

Abstract
Background: Nucleus or cell detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored to specific datasets, and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but several critical, open questions remain to be addressed.
Results: We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that for a specific target dataset, training with images from the same types of organs is usually necessary for nucleus detection. Although images can be visually similar due to the same staining technique and imaging protocol, deep models learned with images from different organs might not deliver desirable results and would require model fine-tuning to be on a par with those trained on target data. We also observe that training with a mixture of target and non-target data does not always yield higher nucleus detection accuracy, and proper data manipulation during model training may be required to achieve good performance.
Conclusions: We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report a few significant findings, some of which might not have been reported in previous studies. The model performance analysis and observations should be helpful for nucleus detection in microscopy images.
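A pixel-to-pixel regression network of this kind typically outputs a proximity map that peaks at nucleus centers, with detections extracted as thresholded local maxima. A minimal sketch of that post-processing step (the `detect_nuclei` helper, threshold and synthetic map are assumptions for illustration; a real map would come from the trained network):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_nuclei(prox_map, threshold=0.5, size=3):
    """Return (row, col) coordinates of thresholded local maxima."""
    local_max = maximum_filter(prox_map, size=size) == prox_map
    peaks = local_max & (prox_map > threshold)
    return np.argwhere(peaks)

# Synthetic proximity map with two peaks over a low, flat background.
prox = np.zeros((10, 10))
prox[2, 3] = 0.9
prox[7, 6] = 0.8
prox += 0.05                 # background
coords = detect_nuclei(prox)
print(coords)                # two detections, at (2, 3) and (7, 6)
```

The threshold suppresses the flat background (which is trivially a "local maximum" everywhere), so only the two genuine peaks survive.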


Author(s):  
Saad Ullah Akram ◽  
Juho Kannala ◽  
Lauri Eklund ◽  
Janne Heikkila

Author(s):  
Zhichao Liu ◽  
Luhong Jin ◽  
Jincheng Chen ◽  
Qiuyu Fang ◽  
Sergey Ablameyko ◽  
...  

2019 ◽  
Vol 35 (21) ◽  
pp. 4525-4527 ◽  
Author(s):  
Alex X Lu ◽  
Taraneh Zarin ◽  
Ian S Hsu ◽  
Alan M Moses

Abstract Summary We introduce YeastSpotter, a web application for the segmentation of yeast microscopy images into single cells. YeastSpotter is user-friendly and generalizable, reducing the computational expertise required for this critical preprocessing step in many image analysis pipelines. Availability and implementation YeastSpotter is available at http://yeastspotter.csb.utoronto.ca/. Code is available at https://github.com/alexxijielu/yeast_segmentation. Supplementary information Supplementary data are available at Bioinformatics online.

