colocr: An R package for conducting co-localization analysis on fluorescence microscopy images

PeerJ ◽  
2019 ◽  
Vol 7 ◽  
pp. e7255
Author(s):  
Mahmoud Ahmed ◽  
Trang Huyen Lai ◽  
Deok Ryong Kim

Background: The co-localization analysis of fluorescence microscopy images is a widely used technique in biological research. It is often used to determine the co-distribution of two proteins inside the cell, suggesting that the two proteins could be functionally or physically associated. The limiting step in conducting microscopy image analysis in a graphical interface tool is the selection of the regions of interest for the co-localization of two proteins.
Implementation: This package provides a simple, straightforward workflow for loading fluorescence images, choosing regions of interest and calculating co-localization measurements. Included in the package is a Shiny app that can be invoked locally to interactively select the regions of interest where two proteins are co-localized.
Availability: colocr is available on the Comprehensive R Archive Network (CRAN), and the source code is available on GitHub under the GPL-3 license as part of the rOpenSci collection, https://github.com/ropensci/colocr.
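As a quick orientation to the workflow described above, the sketch below strings the package's main steps together in a script. The function names follow the package's vignette (image_load, roi_select, roi_show, roi_check, roi_test, colocr_app); consult the current documentation for exact signatures. The file name and threshold value are placeholders to be replaced with your own data.

library(colocr)

# load a merged, two-channel fluorescence image (file name is a placeholder)
img <- image_load('cells_merged.png')

# select regions of interest by intensity thresholding; the threshold value
# here is arbitrary and should be tuned per image
img <- roi_select(img, threshold = 90)

# visually confirm the selected regions and inspect the pixel intensities
roi_show(img)
roi_check(img)

# calculate the co-localization measurements (Pearson's and Manders'
# coefficients) for the two channels inside the selected regions
roi_test(img, type = 'both')

# alternatively, perform the same steps interactively in the bundled Shiny app
# colocr_app()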


2019 ◽  
Author(s):  
Heeva Baharlou ◽  
Nicolas P Canete ◽  
Kirstie M Bertram ◽  
Kerrie J Sandgren ◽  
Anthony L Cunningham ◽  
...  

Autofluorescence is a long-standing problem that has hindered fluorescence microscopy image analysis. To address this, we have developed a method that identifies and removes autofluorescent signals from multi-channel images post-acquisition. We demonstrate the broad utility of this algorithm in accurately assessing protein expression in situ through the removal of interfering autofluorescent signals.
Availability and implementation: https://ellispatrick.github.io/
Contact: [email protected]
Supplementary information: Supplementary Figs. 1–13
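The abstract does not detail how autofluorescent signal is identified, so the sketch below is only a crude stand-in for the general idea, not the authors' algorithm: signal that is bright in two channels labeling unrelated targets is treated as autofluorescence and masked out before analysis. It uses the imager package; the file names and the 95th-percentile cut-off are arbitrary placeholders.

library(imager)

# two channels that stain unrelated targets (file names are placeholders);
# the images are assumed to have identical dimensions
ch1 <- load.image('channel1.png')
ch2 <- load.image('channel2.png')

# crude heuristic, not the published method: pixels that are bright in both
# unrelated channels are flagged as autofluorescent
af_mask <- ch1 > quantile(ch1, 0.95) & ch2 > quantile(ch2, 0.95)

# zero out the flagged pixels in both channels before downstream analysis
ch1[af_mask] <- 0
ch2[af_mask] <- 0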


Proceedings ◽  
2019 ◽  
Vol 33 (1) ◽  
pp. 22
Author(s):  
Yannis Kalaidzidis ◽  
Hernán Morales-Navarrete ◽  
Inna Kalaidzidis ◽  
Marino Zerial

Fluorescently tagged proteins are widely used to study the dynamics of intracellular organelles. Peripheral proteins are transiently associated with organelles, and a significant fraction of them is located in the cytosol. Image analysis of peripheral proteins therefore poses the problem of properly discriminating the membrane-associated signal from the cytosolic one. In most cases, signals from organelles are compact in comparison with the diffuse signal from the cytosol. Commonly used methods for background estimation rely on the assumption that background and foreground signals can be separated by spatial frequency filters. However, large non-stained organelles (e.g., nuclei) produce abrupt changes in the cytosolic intensity and lead to errors in the background estimate. Such errors result in artifacts in the reconstructed foreground signal. We developed a new algorithm that estimates the background intensity in fluorescence microscopy images and does not produce artifacts at the borders of nuclei.
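To make the failure mode concrete, the sketch below implements the commonly used spatial-frequency baseline the abstract refers to (background estimated as the low-spatial-frequency component of the image), not the authors' new algorithm. It uses the imager package; the file name and blur radius are placeholders.

library(imager)

# single-channel image of a peripheral membrane protein (file name is a placeholder)
img <- load.image('peripheral_protein.png')

# spatial-frequency baseline: estimate the background with a large Gaussian blur
# and subtract it from the raw image
background <- isoblur(img, sigma = 20)
foreground <- img - background
foreground[foreground < 0] <- 0

# near a large unstained nucleus, the blurred background underestimates the local
# cytosolic intensity, so the subtraction leaves halo-like artifacts at the
# nucleus border; this is the artifact the proposed algorithm is designed to avoid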


2021 ◽  
Author(s):  
Christopher Mela ◽  
Yang Liu

Background: Automated segmentation of nuclei in microscopic images has been used to enhance throughput in pathological diagnostics and biological research. Segmentation accuracy and speed have been significantly enhanced with the advent of convolutional neural networks. A barrier to the broad application of neural networks to nuclei segmentation is the necessity of training the network on a set of application-specific images and image labels. Previous works have attempted to create broadly trained networks for universal nuclei segmentation; however, such networks do not work on all imaging modalities, and the best results are still commonly obtained when the network is retrained on user-specific data. Stochastic optical reconstruction microscopy (STORM)-based super-resolution fluorescence microscopy has opened a new avenue for imaging nuclear architecture at nanoscale resolution. Due to the large size and discontinuous features typical of super-resolution images, automatic nuclei segmentation can be difficult. In this study, we apply commonly used networks (Mask R-CNN and UNet architectures) to the task of segmenting super-resolution images of nuclei. First, we assess whether networks broadly trained on conventional fluorescence microscopy datasets can accurately segment super-resolution images. Then, we compare the resulting segmentations with results obtained using networks trained directly on our super-resolution data. We next attempt to optimize and compare segmentation accuracy using three different neural network architectures.
Results: The results indicate that super-resolution images are not broadly compatible with neural networks trained on conventional bright-field or fluorescence microscopy images. When the networks were trained on super-resolution data, however, we attained nuclei segmentation accuracies (F1-score) in excess of 0.8, comparable to past results of nuclei segmentation on conventional fluorescence microscopy images. Overall, we achieved the best results using the Mask R-CNN architecture.
Conclusions: We found that convolutional neural networks are powerful tools capable of accurately and quickly segmenting localization-based super-resolution microscopy images of nuclei. While broadly trained and widely applicable segmentation algorithms are desirable for quick use with minimal input, optimal results are still obtained when the network is both trained and tested on visually similar images. We provide a set of Colab notebooks to disseminate the software to the broad scientific community (https://github.com/YangLiuLab/Super-Resolution-Nuclei-Segmentation).
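For readers unfamiliar with the reported metric, the short helper below computes a pixel-wise F1 score between a predicted and a ground-truth binary nuclei mask. This is only an illustration of the metric; the paper may score F1 per detected object rather than per pixel, and the toy masks are made up.

# pixel-wise F1 score between two binary masks (logical matrices)
segmentation_f1 <- function(pred, truth) {
  tp <- sum(pred & truth)    # nucleus pixels found in both masks
  fp <- sum(pred & !truth)   # predicted nucleus, actually background
  fn <- sum(!pred & truth)   # nucleus pixels the prediction missed
  precision <- tp / (tp + fp)
  recall    <- tp / (tp + fn)
  2 * precision * recall / (precision + recall)
}

# tiny made-up example
pred  <- matrix(c(1,1,0,0, 1,1,0,0, 0,0,0,0, 0,0,1,1) == 1, nrow = 4)
truth <- matrix(c(1,1,0,0, 1,0,0,0, 0,0,0,0, 0,0,1,1) == 1, nrow = 4)
segmentation_f1(pred, truth)   # about 0.91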

