FalseColor-Python: a rapid intensity-leveling and digital-staining package for fluorescence-based slide-free digital pathology

Author(s):  
Robert Serafin ◽  
Weisi Xie ◽  
Adam K. Glaser ◽  
Jonathan T. C. Liu

Abstract
Slide-free digital pathology techniques, including nondestructive 3D microscopy, are gaining interest as alternatives to traditional slide-based histology. In order to facilitate clinical adoption of these fluorescence-based techniques, software methods have been developed to convert grayscale fluorescence images into color images that mimic the appearance of standard absorptive chromogens such as hematoxylin and eosin (H&E). However, these false-coloring algorithms often require manual and iterative adjustment of parameters, with results that can be inconsistent in the presence of intensity nonuniformities within an image and/or between specimens (intra- and inter-specimen variability). Here, we present an open-source (Python-based) rapid intensity-leveling and digital-staining package that is specifically designed to render two-channel fluorescence images (i.e., a fluorescent analog of H&E) to the traditional H&E color space for 2D and 3D microscopy datasets. However, this method can be easily tailored for other false-coloring needs. Our package offers (1) automated and uniform false coloring in spite of uneven staining within a large thick specimen, (2) consistent color-space representations that are robust to variations in staining and imaging conditions between different specimens, and (3) GPU-accelerated data processing to allow these methods to scale to large datasets. We demonstrate this platform by generating H&E-like images from cleared tissues that are fluorescently imaged in 3D with open-top light-sheet (OTLS) microscopy, and quantitatively characterizing the results in comparison to traditional slide-based H&E histology.
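As an illustration of the kind of false-coloring step such a package performs, the sketch below maps two grayscale fluorescence channels (nuclear and cytoplasmic) to an H&E-like RGB image with a Beer-Lambert style attenuation model, preceded by an optional intensity-leveling step. The function name, absorption coefficients, and leveling approach are illustrative assumptions, not the package's actual API.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical per-channel RGB absorption coefficients for a hematoxylin-like
# (nuclear) and an eosin-like (cytoplasmic) chromogen; values are illustrative.
K_NUCLEAR = np.array([1.0, 1.6, 0.6])
K_CYTO = np.array([0.2, 1.0, 0.8])

def virtual_he(nuclear, cyto, leveling_sigma=None):
    """Map two grayscale fluorescence channels to an H&E-like RGB image
    using a Beer-Lambert style exponential attenuation model."""
    nuc = nuclear.astype(np.float32)
    cyt = cyto.astype(np.float32)

    # Optional intensity leveling: normalize by a heavily blurred copy to
    # flatten slowly varying staining/illumination nonuniformities.
    if leveling_sigma is not None:
        nuc /= gaussian_filter(nuc, leveling_sigma) + 1e-6
        cyt /= gaussian_filter(cyt, leveling_sigma) + 1e-6

    # Beer-Lambert: each fluorescence channel attenuates white light
    # according to its chromogen's RGB absorption coefficients.
    rgb = np.exp(-nuc[..., None] * K_NUCLEAR) * np.exp(-cyt[..., None] * K_CYTO)
    return (255 * np.clip(rgb, 0, 1)).astype(np.uint8)
```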

Author(s):  
Liron Pantanowitz ◽  
Pamela Michelow ◽  
Scott Hazelhurst ◽  
Shivam Kalra ◽  
Charles Choi ◽  
...  

Context.— Pathologists may encounter extraneous pieces of tissue (tissue floaters) on glass slides because of specimen cross-contamination. Troubleshooting this problem, including performing molecular tests for tissue identification if available, is time consuming and often does not satisfactorily resolve the problem. Objective.— To demonstrate the feasibility of using an image search tool to resolve the tissue floater conundrum. Design.— A glass slide was produced containing 2 separate hematoxylin and eosin (H&E)-stained tissue floaters. This fabricated slide was digitized along with the 2 slides containing the original tumors used to create these floaters. These slides were then embedded into a dataset of 2325 whole slide images comprising a wide variety of H&E-stained diagnostic entities. Digital slides were broken up into patches and the patch features converted into barcodes for indexing and easy retrieval. A deep learning-based image search tool was employed to extract features from patches via barcodes, hence enabling image matching to each tissue floater. Results.— There was a very high likelihood of finding a correct tumor match for the queried tissue floater when searching the digital database. Search results repeatedly yielded a correct match within the top 3 retrieved images. The retrieval accuracy improved when greater proportions of the floater were selected. Each search was completed within several milliseconds. Conclusions.— Using an image search tool offers pathologists an additional method for rapidly resolving the tissue floater conundrum, especially for laboratories that have transitioned to fully digital primary diagnosis.
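A minimal sketch of the barcode-indexing idea is given below: deep features extracted from each patch (assumed here to come from some pretrained CNN) are binarized into barcodes, and archived slides are ranked by Hamming distance to the query floater's patches. The encoding and ranking details are assumptions and may differ from the tool used in the study.

```python
import numpy as np

def to_barcode(feature):
    """Binarize a deep-feature vector into a barcode by thresholding the sign
    of its discrete gradient (one common barcoding scheme; the study's exact
    encoding may differ)."""
    return (np.diff(feature) > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two barcodes."""
    return int(np.count_nonzero(a != b))

def best_matches(query_features, index, k=3):
    """Rank archived slides by the minimum Hamming distance between the
    query patches' barcodes and each slide's indexed patch barcodes.

    index: dict mapping slide_id -> list of patch barcodes (precomputed).
    """
    query_codes = [to_barcode(f) for f in query_features]
    scores = {}
    for slide_id, codes in index.items():
        scores[slide_id] = min(hamming(q, c) for q in query_codes for c in codes)
    return sorted(scores, key=scores.get)[:k]
```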


Author(s):  
Yue Guo ◽  
Oleh Krupa ◽  
Jason Stein ◽  
Guorong Wu ◽  
Ashok Krishnamurthy

2020 ◽  
Vol 10 (18) ◽  
pp. 6392
Author(s):  
Xieliu Yang ◽  
Chenyu Yin ◽  
Ziyu Zhang ◽  
Yupeng Li ◽  
Wenfeng Liang ◽  
...  

Recovering correct or at least realistic colors of underwater scenes is a challenging issue for image processing due to the unknown imaging conditions, including the optical water type, scene location, illumination, and camera settings. With the assumption that the illumination of the scene is uniform, a chromatic adaptation-based color correction technique is proposed in this paper to remove the color cast using a single underwater image without any other information. First, the underwater RGB image is linearized to make its pixel values proportional to the light intensities arriving at the pixels. Second, the illumination is estimated in a uniform chromatic space based on the white-patch hypothesis. Third, the chromatic adaptation transform is implemented in the device-independent XYZ color space. Qualitative and quantitative evaluations both show that the proposed method outperforms the other tested methods in terms of color restoration, especially for images with a severe color cast. The proposed method is simple yet effective and robust, and is helpful for recovering the in-air appearance of underwater scenes.
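The sketch below illustrates this three-step pipeline in simplified form: undo the camera's gamma so pixel values are proportional to light intensity, estimate the illuminant from the brightest pixels (white-patch hypothesis), and apply a von Kries style channel scaling. For brevity the adaptation is applied directly in linear RGB rather than in the device-independent XYZ space used in the paper, and all parameter values are illustrative.

```python
import numpy as np

def srgb_to_linear(img):
    """Undo the sRGB gamma so pixel values are proportional to light intensity."""
    img = img.astype(np.float32) / 255.0
    return np.where(img <= 0.04045, img / 12.92, ((img + 0.055) / 1.055) ** 2.4)

def white_patch_illuminant(lin, percentile=99):
    """Estimate the scene illuminant (RGB) from the brightest, near-white pixels."""
    return np.percentile(lin.reshape(-1, 3), percentile, axis=0)

def correct_color_cast(img_srgb):
    lin = srgb_to_linear(img_srgb)
    illum = white_patch_illuminant(lin)
    # Von Kries style chromatic adaptation: scale each channel so the estimated
    # illuminant maps to the reference white. Applied here in linear RGB as a
    # simplification of the XYZ-space transform described in the paper.
    adapted = lin / (illum + 1e-6)
    return np.clip(adapted, 0, 1)
```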


2017 ◽  
Vol 114 (37) ◽  
pp. 9797-9802 ◽  
Author(s):  
Jörn Heine ◽  
Matthias Reuss ◽  
Benjamin Harke ◽  
Elisa D’Este ◽  
Steffen J. Sahl ◽  
...  

The concepts called STED/RESOLFT superresolve features by a light-driven transfer of closely packed molecules between two different states, typically a nonfluorescent “off” state and a fluorescent “on” state at well-defined coordinates on subdiffraction scales. For this, the applied light intensity must be sufficient to guarantee the state difference for molecules spaced at the resolution sought. Relatively high intensities have therefore been applied throughout the imaging to obtain the highest resolutions. At regions where features are far enough apart that molecules could be separated with lower intensity, the excess intensity just adds to photobleaching. Here, we introduce DyMIN (standing for Dynamic Intensity Minimum) scanning, generalizing and expanding on earlier concepts of RESCue and MINFIELD to reduce sample exposure. The principle of DyMIN is that it only uses as much on/off-switching light as needed to image at the desired resolution. Fluorescence can be recorded at those positions where fluorophores are found within a subresolution neighborhood. By tuning the intensity (and thus resolution) during the acquisition of each pixel/voxel, we match the size of this neighborhood to the structures being imaged. DyMIN is shown to lower the dose of STED light on the scanned region by up to ∼20-fold under common biological imaging conditions, and by >100-fold for sparser 2D and 3D samples. The bleaching reduction can be converted into accordingly brighter images at <30-nm resolution.
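The per-pixel decision logic can be sketched as follows; `measure_counts`, the power and threshold schedules, and the dwell-time split are hypothetical stand-ins for the instrument control described in the paper.

```python
def dymin_pixel(measure_counts, sted_powers, thresholds, dwell_fraction):
    """Illustrative DyMIN-style per-pixel acquisition logic.

    measure_counts(power, dwell) -> photon counts recorded at the given STED
    power for a fraction of the pixel dwell time (assumed to be provided by
    the acquisition hardware). The scan only advances to the next, higher
    STED power if fluorescence is detected at the current, coarser resolution
    step; otherwise the pixel is abandoned early, sparing the sample.
    """
    for power, threshold in zip(sted_powers[:-1], thresholds):
        counts = measure_counts(power, dwell_fraction)
        if counts < threshold:      # no fluorophore within this neighborhood size
            return 0                # skip full-power exposure at this pixel
    # Final step: full STED power and the remaining dwell time, highest resolution.
    remaining_dwell = 1.0 - dwell_fraction * (len(sted_powers) - 1)
    return measure_counts(sted_powers[-1], remaining_dwell)
```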


Author(s):  
Carole Frindel ◽  
Charlotte Riviere ◽  
Rosa Huaman ◽  
Andrea Bassi ◽  
David Rousseau

2017 ◽  
Vol 1143 ◽  
pp. 7-12
Author(s):  
Maria Baciu ◽  
Elena Raluca Baciu ◽  
Ramona Cimpoeşu ◽  
Irina Gradinaru

The investigations aimed to determine the microstructural and chemical modifications produced in a Ni-Cr-Mo alloy following electrocorrosion in AFNOR artificial saliva, using scanning electron microscopy (SEM) and EDS chemical analysis. 2D and 3D microscopy and qualitative assessment of brightness variation revealed the effects of the electrocorrosion tests on the surface of the metallic material, while EDAX qualitative and quantitative determinations of the surface chemical composition (point, line, and mapping modes) identified the chemical modifications produced by the corrosion tests.


Cellulose ◽  
2019 ◽  
Vol 26 (3) ◽  
pp. 2099-2108 ◽  
Author(s):  
N. J. McIntosh ◽  
Y. Sharma ◽  
D. M. Martinez ◽  
J. A. Olson ◽  
A. B. Phillion

Author(s):  
Yu-Xuan Ren ◽  
Jianglai Wu ◽  
Queenie T. K. Lai ◽  
Kenneth K. Y. Wong ◽  
Kevin K. Tsia

2017 ◽  
Author(s):  
Gregory R. Johnson ◽  
Rory M. Donovan-Maiye ◽  
Mary M. Maleckar

Abstract
We present a conditional generative model for learning variation in cell and nuclear morphology and predicting the location of subcellular structures from 3D microscopy images. The model generalizes well to a wide array of structures and allows for a probabilistic interpretation of cell and nuclear morphology and structure localization from fluorescence images. We demonstrate the effectiveness of the approach by producing and evaluating photo-realistic 3D cell images using the generative model, and show that the conditional nature of the model provides the ability to predict the localization of unobserved structures, given cell and nuclear morphology. We additionally explore the model’s utility in a number of applications, including cellular integration from multiple experiments and exploration of variation in structure localization. Finally, we discuss the model in the context of foundational and contemporary work and suggest forthcoming extensions.
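To make the conditional idea concrete, the toy PyTorch decoder below generates a structure-localization volume from a latent sample concatenated with a code summarizing cell and nuclear morphology. The architecture, dimensions, and names are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class ConditionalStructureDecoder(nn.Module):
    """Toy conditional decoder: predict a structure channel given an encoding
    of cell/nuclear morphology plus a latent sample (sizes are illustrative)."""
    def __init__(self, latent_dim=64, cond_dim=128, side=32):
        super().__init__()
        self.side = side
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 512),
            nn.ReLU(),
            nn.Linear(512, side ** 3),
            nn.Sigmoid(),  # voxel-wise probability that the structure is present
        )

    def forward(self, z, morphology_code):
        # Concatenate the latent sample with the cell/nucleus morphology code,
        # so the generated structure localization is conditioned on morphology.
        x = torch.cat([z, morphology_code], dim=-1)
        return self.net(x).view(-1, self.side, self.side, self.side)
```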


F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 787 ◽  
Author(s):  
Kasey J. Day ◽  
Patrick J. La Rivière ◽  
Talon Chandler ◽  
Vytas P. Bindokas ◽  
Nicola J. Ferrier ◽  
...  

Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
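A minimal sketch of the prefiltering workflow is shown below, using the open-source Richardson-Lucy deconvolution from scikit-image as a stand-in for the commercial Huygens software mentioned above; the smoothing sigma and iteration count are illustrative values, not those of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

def prefiltered_deconvolution(stack, psf, sigma=0.7, iterations=20):
    """Gaussian-prefilter a noisy confocal z-stack, then deconvolve it.

    stack: 3D array (z, y, x) of raw optical sections; psf: 3D point spread
    function. A mild 2D blur of each section suppresses shot noise that can
    otherwise cause faint structures to be rejected during deconvolution.
    """
    smoothed = np.stack([gaussian_filter(section, sigma) for section in stack])
    smoothed = smoothed / smoothed.max()        # keep values in [0, 1]
    return richardson_lucy(smoothed, psf, num_iter=iterations)
```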

