LOBSTER: an environment to design bioimage analysis workflows for large and complex fluorescence microscopy data

2019 ◽  
Vol 36 (8) ◽  
pp. 2634-2635 ◽  
Author(s):  
Sébastien Tosi ◽  
Lídia Bardia ◽  
Maria Jose Filgueira ◽  
Alexandre Calon ◽  
Julien Colombelli

Abstract Summary Open-source software packages such as ImageJ and CellProfiler have greatly simplified the quantitative analysis of microscopy images, but their applicability is limited by the size, dimensionality and complexity of the images under study. In contrast, software optimized for the needs of a specific research project can overcome these limitations, but it may be harder to find, set up and customize to different needs. Overall, the analysis of large, complex microscopy images hence remains a critical bottleneck for many life scientists. We introduce LOBSTER (Little Objects Segmentation and Tracking Environment), an environment designed to help scientists create and customize image analysis workflows to accurately characterize biological objects from a broad range of fluorescence microscopy images, including large images exceeding workstation main memory. LOBSTER comes with a starting set of over 75 sample image analysis workflows and associated images stemming from state-of-the-art image-based research projects. Availability and implementation LOBSTER requires MATLAB (version ≥ 2015a), the MATLAB Image Processing Toolbox, and the MATLAB Statistics and Machine Learning Toolbox. Source code, online tutorials, video demonstrations, documentation and sample images are freely available from: https://sebastients.github.io. Supplementary information Supplementary data are available at Bioinformatics online.

Author(s):  
Heeva Baharlou ◽  
Nicolas P Canete ◽  
Kirstie M Bertram ◽  
Kerrie J Sandgren ◽  
Anthony L Cunningham ◽  
...  

Abstract Motivation Autofluorescence is a long-standing problem that has hindered the analysis of images of tissues acquired by fluorescence microscopy. Current approaches to mitigate autofluorescence in tissue are lab-based and involve either chemical treatment of sections or specialized instrumentation and software to ‘unmix’ autofluorescent signals. Importantly, these approaches are pre-emptive, and there are currently no methods to deal with autofluorescence in already-acquired fluorescence microscopy images. Results To address this, we developed Autofluorescence Identifier (AFid). AFid identifies autofluorescent pixels as discrete objects in multi-channel images post-acquisition. These objects can then be tagged for exclusion from downstream analysis. We validated AFid using images of FFPE human colorectal tissue stained for common immune markers. Further, we demonstrate its utility for image analysis where its implementation allows the accurate measurement of HIV–dendritic cell interactions in a colorectal explant model of HIV transmission. Therefore, AFid represents a major leap forward in the extraction of useful data from images plagued by autofluorescence by offering an approach that is easily incorporated into existing workflows and that can be used with various samples, staining panels and image acquisition methods. We have implemented AFid in ImageJ, MATLAB and R to accommodate the diverse image analysis community. Availability and implementation AFid software is available at https://ellispatrick.github.io/AFid. Supplementary information Supplementary data are available at Bioinformatics online.
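A toy sketch of the general post-acquisition strategy that AFid automates (not the published algorithm itself): autofluorescence tends to appear in several channels at once, so pixels bright in two independent channels can be flagged and masked out before downstream measurement. The intensities and thresholds below are hypothetical.

```python
# Toy illustration (hypothetical values) of post-acquisition autofluorescence
# masking: flag pixels bright in BOTH channels, then exclude them.

def flag_autofluorescence(ch1, ch2, t1, t2):
    """Mark pixels above threshold in both channels as autofluorescent."""
    return [a > t1 and b > t2 for a, b in zip(ch1, ch2)]

def masked_mean(channel, af_mask):
    """Mean intensity over pixels NOT flagged as autofluorescent."""
    vals = [v for v, af in zip(channel, af_mask) if not af]
    return sum(vals) / len(vals)

marker = [5, 8, 200, 6, 7, 210]   # marker channel; the 200s are autofluorescence
other  = [3, 4, 180, 2, 5, 190]   # independent channel lighting up at the same pixels

af = flag_autofluorescence(marker, other, t1=100, t2=100)
print(masked_mean(marker, af))    # mean of the remaining true signal: 6.5
```

Without the mask, the two autofluorescent pixels would dominate the mean marker intensity; tagging them as discrete objects for exclusion is what makes downstream quantification accurate.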


2019 ◽  
Author(s):  
Heeva Baharlou ◽  
Nicolas P Canete ◽  
Kirstie M Bertram ◽  
Kerrie J Sandgren ◽  
Anthony L Cunningham ◽  
...  

Abstract Autofluorescence is a long-standing problem that has hindered fluorescence microscopy image analysis. To address this, we have developed a method that identifies and removes autofluorescent signals from multi-channel images post-acquisition. We demonstrate the broad utility of this algorithm in accurately assessing protein expression in situ through the removal of interfering autofluorescent signals. Availability and implementation https://ellispatrick.github.io/AFid Contact [email protected] Supplementary information Supplementary Figs. 1–13


2019 ◽  
Vol 35 (21) ◽  
pp. 4525-4527 ◽  
Author(s):  
Alex X Lu ◽  
Taraneh Zarin ◽  
Ian S Hsu ◽  
Alan M Moses

Abstract Summary We introduce YeastSpotter, a web application for the segmentation of yeast microscopy images into single cells. YeastSpotter is user-friendly and generalizable, reducing the computational expertise required for this critical preprocessing step in many image analysis pipelines. Availability and implementation YeastSpotter is available at http://yeastspotter.csb.utoronto.ca/. Code is available at https://github.com/alexxijielu/yeast_segmentation. Supplementary information Supplementary data are available at Bioinformatics online.


Proceedings ◽  
2019 ◽  
Vol 33 (1) ◽  
pp. 22
Author(s):  
Yannis Kalaidzidis ◽  
Hernán Morales-Navarrete ◽  
Inna Kalaidzidis ◽  
Marino Zerial

Fluorescently tagged proteins are widely used to study the dynamics of intracellular organelles. Peripheral proteins are only transiently associated with organelles, and a significant fraction of them resides in the cytosol. Image analysis of peripheral proteins therefore poses the problem of properly discriminating the membrane-associated signal from the cytosolic one. In most cases, signals from organelles are compact in comparison with the diffuse signal from the cytosol. Commonly used methods for background estimation rely on the assumption that background and foreground signals are separable by spatial frequency filters. However, large non-stained organelles (e.g., nuclei) cause abrupt changes in the cytosolic intensity and lead to errors in the background estimate. Such errors result in artifacts in the reconstructed foreground signal. We developed a new algorithm that estimates the background intensity in fluorescence microscopy images without producing artifacts at the borders of nuclei.
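A minimal 1-D sketch (not the authors' algorithm) of why frequency-based background estimation fails at nucleus borders, and how excluding nucleus pixels avoids the artifact. The intensity profile is hypothetical: cytosol at 100, a dark unstained nucleus at 0.

```python
# Toy 1-D profile: cytosol | nucleus | cytosol (hypothetical intensities).

def moving_average(signal, radius):
    """Naive low-pass background estimate: plain moving average."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def masked_average(signal, mask, radius):
    """Background estimate that ignores masked (nucleus) pixels."""
    out = []
    for i in range(len(signal)):
        pairs = zip(signal[max(0, i - radius):i + radius + 1],
                    mask[max(0, i - radius):i + radius + 1])
        vals = [s for s, m in pairs if not m]
        out.append(sum(vals) / len(vals) if vals else 0.0)
    return out

profile = [100.0] * 20 + [0.0] * 10 + [100.0] * 20
nucleus = [False] * 20 + [True] * 10 + [False] * 20

naive = moving_average(profile, radius=5)
masked = masked_average(profile, nucleus, radius=5)

# At the last cytosol pixel before the nucleus (index 19), the naive
# estimate is dragged down by the dark nucleus (~54.5 instead of 100),
# which creates a spurious "foreground" band after background subtraction.
print(naive[19], masked[19])
```

The same effect in 2-D is what produces ring-shaped artifacts around nuclei when background is estimated with a low-pass filter alone.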


2019 ◽  
Vol 35 (14) ◽  
pp. i530-i537 ◽  
Author(s):  
Benjamin Chidester ◽  
Tianming Zhou ◽  
Minh N Do ◽  
Jian Ma

Abstract Motivation Neural networks have been widely used to analyze high-throughput microscopy images. However, the performance of neural networks can be significantly improved by encoding known invariance for particular tasks. Highly relevant to the goal of automated cell phenotyping from microscopy image data is rotation invariance. Here we consider the application of two schemes for encoding rotation equivariance and invariance in a convolutional neural network, namely, the group-equivariant CNN (G-CNN), and a new architecture with simple, efficient conic convolution, for classifying microscopy images. We additionally integrate the 2D-discrete-Fourier transform (2D-DFT) as an effective means for encoding global rotational invariance. We call our new method the Conic Convolution and DFT Network (CFNet). Results We evaluated the efficacy of CFNet and G-CNN as compared to a standard CNN for several different image classification tasks, including simulated and real microscopy images of subcellular protein localization, and demonstrated improved performance. We believe CFNet has the potential to improve many high-throughput microscopy image analysis applications. Availability and implementation Source code of CFNet is available at: https://github.com/bchidest/CFNet. Supplementary information Supplementary data are available at Bioinformatics online.
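The core idea behind group-equivariant encodings can be illustrated without any deep learning framework. The sketch below shows generic pooling over the four-element rotation group (the G-CNN idea in miniature, not CFNet's conic convolution or 2D-DFT): averaging a filter response over all four 90° rotations of the input yields a feature that is exactly invariant to 90° rotations. The image and filter values are hypothetical.

```python
# Toy illustration of rotation-group pooling (pure Python, hypothetical data).

def rot90(m):
    """Rotate a square 2-D list 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*m)][::-1]

def response(img, filt):
    """Scalar filter response: elementwise product, summed."""
    return sum(i * f for ri, rf in zip(img, filt) for i, f in zip(ri, rf))

def invariant_feature(img, filt):
    """Average the response over the 4-element rotation group C4."""
    total, rotated = 0.0, img
    for _ in range(4):
        total += response(rotated, filt)
        rotated = rot90(rotated)
    return total / 4.0

img = [[1, 2, 0], [0, 3, 1], [4, 0, 2]]
edge = [[1, 0, -1], [1, 0, -1], [1, 0, -1]]   # a hypothetical edge filter

# Pooling over the whole group orbit makes the feature rotation-invariant:
assert invariant_feature(img, edge) == invariant_feature(rot90(img), edge)
```

Because the average runs over the complete orbit of the rotation group, rotating the input merely permutes the terms of the sum, which is why the pooled feature is unchanged; encoding this structure inside the network, rather than learning it from data, is what improves performance.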


2020 ◽  
Vol 36 (9) ◽  
pp. 2948-2949
Author(s):  
Ervin A Tasnadi ◽  
Timea Toth ◽  
Maria Kovacs ◽  
Akos Diosdi ◽  
Francesco Pampaloni ◽  
...  

Abstract Summary Segmentation of single cells in microscopy images is one of the major challenges in computational biology. It is the first step of most bioimage analysis tasks, and essential to create training sets for more advanced deep learning approaches. Here, we propose 3D-Cell-Annotator to solve this task using 3D active surfaces together with shape descriptors as prior information in a semi-automated fashion. The software uses the convenient 3D interface of the widely used Medical Imaging Interaction Toolkit (MITK). Results on 3D biological structures (e.g. spheroids, organoids and embryos) show that the precision of the segmentation reaches the level of a human expert. Availability and implementation 3D-Cell-Annotator is implemented in CUDA/C++ as a patch for the segmentation module of MITK. The 3D-Cell-Annotator enabled MITK distribution can be downloaded at: www.3D-cell-annotator.org. It works under Windows 64-bit systems and recent Linux distributions even on a consumer level laptop with a CUDA-enabled video card using recent NVIDIA drivers. Supplementary information Supplementary data are available at Bioinformatics online.


2014 ◽  
Vol 197 (4) ◽  
pp. 699-709 ◽  
Author(s):  
Jordi van Gestel ◽  
Hera Vlamakis ◽  
Roberto Kolter

Fluorescence microscopy is a method commonly used to examine individual differences between bacterial cells, yet many studies still lack a quantitative analysis of fluorescence microscopy data. Here we introduce some simple tools that microbiologists can use to analyze and compare their microscopy images. We show how image data can be converted to distribution data. These data can be subjected to a cluster analysis that makes it possible to objectively compare microscopy images. The distribution data can further be analyzed using distribution fitting. We illustrate our methods by scrutinizing two independently acquired data sets, each containing microscopy images of a doubly labeled Bacillus subtilis strain. For the first data set, we examined the expression of srfA and tapA, two genes which are expressed in surfactin-producing and matrix-producing cells, respectively. For the second data set, we examined the expression of eps and tapA; these genes are expressed in matrix-producing cells. We show that srfA is expressed by all cells in the population, a finding which contrasts with a previously reported bimodal distribution of srfA expression. In addition, we show that eps and tapA do not always have the same expression profiles, despite being expressed in the same cell type: both operons are expressed in cell chains, while single cells mainly express eps. These findings exemplify that the quantification and comparison of microscopy data can yield insights that otherwise would go unnoticed.
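A minimal sketch of the general strategy described above: reduce each image to per-cell intensity distribution data, then compare images with a distribution distance. The per-cell reporter intensities below are hypothetical, chosen to contrast a unimodal with a bimodal expression pattern.

```python
# Compare two hypothetical per-cell intensity distributions with the
# two-sample Kolmogorov-Smirnov statistic (max gap between empirical CDFs).

def ks_distance(sample_a, sample_b):
    a, b = sorted(sample_a), sorted(sample_b)
    grid = sorted(set(a + b))
    def cdf(s, x):
        return sum(v <= x for v in s) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in grid)

# Hypothetical per-cell reporter intensities from two images:
unimodal = [95, 100, 102, 98, 105, 99, 101, 97, 103, 100]
bimodal  = [10, 12, 9, 11, 10, 190, 188, 192, 191, 189]

print(ks_distance(unimodal, unimodal))   # 0.0
print(ks_distance(unimodal, bimodal))    # 0.5
```

A distance of this kind is what lets a cluster analysis group images with similar expression distributions, and it is exactly the sort of comparison that distinguishes graded (unimodal) from bimodal expression objectively rather than by eye.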


2019 ◽  
Author(s):  
Mahmoud Ahmed ◽  
Trang Huyen Lai ◽  
Deok Ryong Kim

Background The co-localization analysis of fluorescence microscopy images is a widely used technique in biological research. It is often used to determine the co-distribution of two proteins inside the cell, suggesting that the two proteins could be functionally or physically associated. The limiting step in conducting microscopy image analysis in a graphical interface tool is the selection of the regions of interest for the co-localization of the two proteins. Implementation This package provides a simple, straightforward workflow for loading fluorescence images, choosing regions of interest and calculating co-localization statistics. Included in the package is a Shiny app that can be invoked locally to interactively select the regions of interest in which the two proteins co-localize. Availability colocr is available on the Comprehensive R Archive Network (CRAN), and the source code is available on GitHub as part of the rOpenSci collection, https://github.com/ropensci/colocr. Keywords: R package, co-localization, image-analysis, fluorescence microscopy, statistics
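The two standard co-localization statistics such packages report, Pearson's correlation coefficient (PCC) and Manders' coefficients, are easy to state directly. The sketch below computes both over the pixels of a region of interest; it is a generic illustration of the statistics, not colocr's R implementation, and the pixel intensities are hypothetical.

```python
# Generic co-localization statistics over ROI pixels (hypothetical data).

def pearson(ch1, ch2):
    """Pearson's correlation coefficient between two channel vectors."""
    n = len(ch1)
    m1, m2 = sum(ch1) / n, sum(ch2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(ch1, ch2))
    var1 = sum((a - m1) ** 2 for a in ch1)
    var2 = sum((b - m2) ** 2 for b in ch2)
    return cov / (var1 * var2) ** 0.5

def manders(ch1, ch2, t1=0, t2=0):
    """M1: fraction of channel-1 signal in pixels where channel 2 exceeds
    its threshold; M2 is the converse."""
    m1 = sum(a for a, b in zip(ch1, ch2) if b > t2) / sum(ch1)
    m2 = sum(b for a, b in zip(ch1, ch2) if a > t1) / sum(ch2)
    return m1, m2

# Hypothetical ROI pixel intensities for two protein channels:
green = [10, 50, 80, 5, 60, 0]
red   = [12, 45, 90, 0, 55, 3]

print(pearson(green, red))   # close to 1: the channels strongly co-vary
print(manders(green, red))
```

PCC measures how the two intensities co-vary, while the Manders coefficients measure what fraction of each channel's signal overlaps the other, which is why tools typically report both.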

