DetecTiff©: A Novel Image Analysis Routine for High-Content Screening Microscopy

2009 ◽  
Vol 14 (8) ◽  
pp. 944-955 ◽  
Author(s):  
Daniel F. Gilbert ◽  
Till Meinhof ◽  
Rainer Pepperkok ◽  
Heiko Runz

In this article, the authors describe the image analysis software DetecTiff©, which allows fully automated object recognition and quantification from digital images. The core module of the LabView©-based routine is an algorithm for structure recognition that employs intensity thresholding and size-dependent particle filtering from microscopic images in an iterative manner. Detected structures are converted into templates, which are used for quantitative image analysis. DetecTiff© enables processing of multiple detection channels and provides functions for template organization and fast interpretation of acquired data. The authors demonstrate the applicability of DetecTiff© for automated analysis of cellular uptake of fluorescence-labeled low-density lipoproteins as well as diverse other image data sets from a variety of biomedical applications. Moreover, the performance of DetecTiff© is compared with preexisting image analysis tools. The results show that DetecTiff© can be applied with high consistency for automated quantitative analysis of image data (e.g., from large-scale functional RNAi screening projects). (Journal of Biomolecular Screening 2009:944-955)
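The core recognition step the abstract describes — iterative intensity thresholding combined with size-dependent particle filtering — can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the authors' LabView©-based implementation; the threshold sequence and size bounds are assumed values:

```python
import numpy as np
from scipy import ndimage

def detect_structures(img, thresholds=(200, 150, 100), min_size=20, max_size=500):
    """Iteratively threshold an image and keep particles within a size range."""
    mask = np.zeros(img.shape, dtype=bool)
    for t in thresholds:                      # progressively lower thresholds
        candidate = (img >= t) & ~mask        # only not-yet-detected pixels
        labels, n = ndimage.label(candidate)
        if n == 0:
            continue
        sizes = ndimage.sum(candidate, labels, range(1, n + 1))
        good = np.where((sizes >= min_size) & (sizes <= max_size))[0] + 1
        mask |= np.isin(labels, good)         # accept particles in the size range
    return mask  # binary template used for downstream quantification

# toy image: one bright 5x5 "structure" on a dark background
img = np.zeros((32, 32))
img[5:10, 5:10] = 220
template = detect_structures(img, min_size=10, max_size=100)
```

The returned binary mask plays the role of the "template" from which per-structure intensities would then be quantified.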

2003 ◽  
Vol 9 (1) ◽  
pp. 1-17 ◽  
Author(s):  
Paul G. Kotula ◽  
Michael R. Keenan ◽  
Joseph R. Michael

Spectral imaging in the scanning electron microscope (SEM) equipped with an energy-dispersive X-ray (EDX) analyzer has the potential to be a powerful tool for chemical phase identification, but the resulting data sets have, in the past, proved too large to analyze efficiently. In the present work, we describe the application of a new automated, unbiased, multivariate statistical analysis technique to very large X-ray spectral image data sets. The method, based in part on principal components analysis, returns physically accurate (all positive) component spectra and images in a few minutes on a standard personal computer. The efficacy of the technique for microanalysis is illustrated by the analysis of complex multi-phase materials, particulates, a diffusion couple, and a single-pixel-detection problem.
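The described decomposition of a spectral image into physically plausible (all-positive) component spectra and images can be approximated with off-the-shelf tools. The sketch below uses scikit-learn's NMF as an illustrative stand-in for the authors' PCA-based method, applied to a synthetic two-phase data set; all shapes and parameters are assumptions:

```python
import numpy as np
from sklearn.decomposition import NMF

# synthetic spectral image: 16x16 pixels, 32 energy channels, two "phases"
rng = np.random.default_rng(0)
s1 = np.exp(-0.5 * ((np.arange(32) - 8) / 2.0) ** 2)   # phase-1 spectrum
s2 = np.exp(-0.5 * ((np.arange(32) - 22) / 2.0) ** 2)  # phase-2 spectrum
abund = rng.random((16 * 16, 2))                        # per-pixel abundances
data = abund @ np.vstack([s1, s2]) + 0.01 * rng.random((256, 32))

# factor into non-negative component images (W) and spectra (H)
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(data)    # 256 x 2 -> reshape into component images
H = model.components_            # 2 x 32  -> component spectra
component_images = W.reshape(16, 16, 2)
```

Non-negativity of both factors is what makes the recovered spectra and abundance maps physically interpretable, mirroring the "all positive" constraint the abstract emphasizes.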


2019 ◽  
Author(s):  
Robert Krueger ◽  
Johanna Beyer ◽  
Won-Dong Jang ◽  
Nam Wook Kim ◽  
Artem Sokolov ◽  
...  

Facetto is a scalable visual analytics application that is used to discover single-cell phenotypes in high-dimensional multi-channel microscopy images of human tumors and tissues. Such images represent the cutting edge of digital histology and promise to revolutionize how diseases such as cancer are studied, diagnosed, and treated. Highly multiplexed tissue images are complex, comprising 10⁹ or more pixels, 60-plus channels, and millions of individual cells. This makes manual analysis challenging and error-prone. Existing automated approaches are also inadequate, in large part, because they are unable to effectively exploit the deep knowledge of human tissue biology available to anatomic pathologists. To overcome these challenges, Facetto enables a semi-automated analysis of cell types and states. It integrates unsupervised and supervised learning into the image and feature exploration process and offers tools for analytical provenance. Experts can cluster the data to discover new types of cancer and immune cells and use clustering results to train a convolutional neural network that classifies new cells accordingly. Likewise, the output of classifiers can be clustered to discover aggregate patterns and phenotype subsets. We also introduce a new hierarchical approach to keep track of analysis steps and data subsets created by users; this assists in the identification of cell types. Users can build phenotype trees and interact with the resulting hierarchical structures of both high-dimensional feature and image spaces. We report on use-cases in which domain scientists explore various large-scale fluorescence imaging datasets. We demonstrate how Facetto assists users in steering the clustering and classification process, inspecting analysis results, and gaining new scientific insights into cancer biology.
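The cluster-then-classify loop described above can be illustrated in miniature. The sketch below substitutes k-means and logistic regression for Facetto's actual clustering and convolutional-neural-network components, and uses a synthetic per-cell feature table; all data and model choices are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# toy per-cell feature table: 300 cells x 4 channel features, two phenotypes
rng = np.random.default_rng(1)
cells = np.vstack([rng.normal(0, 1, (150, 4)), rng.normal(4, 1, (150, 4))])

# step 1: unsupervised clustering proposes candidate phenotypes
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cells)

# step 2: expert-curated cluster labels train a classifier for new cells
clf = LogisticRegression().fit(cells, labels)
new_cells = rng.normal(4, 1, (10, 4))
predicted = clf.predict(new_cells)
```

In Facetto's workflow the expert would inspect and correct the cluster assignments before they become training labels, which is the "semi-automated" step the abstract highlights.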


2008 ◽  
Vol 08 (02) ◽  
pp. 243-263 ◽  
Author(s):  
BENJAMIN A. AHLBORN ◽  
OLIVER KREYLOS ◽  
SOHAIL SHAFII ◽  
BERND HAMANN ◽  
OLIVER G. STAADT

We introduce a system that adds a foveal inset to large-scale projection displays. The effective resolution of the foveal inset projection is higher than the original display resolution, allowing the user to see more details and finer features in large data sets. The foveal inset is generated by projecting a high-resolution image onto a mirror mounted on a pan-tilt unit that is controlled by the user with a laser pointer. Our implementation is based on Chromium and supports many OpenGL applications without modifications. We present experimental results using high-resolution image data from medical imaging and aerial photography.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Douwe van der Wal ◽  
Iny Jhun ◽  
Israa Laklouk ◽  
Jeff Nirschl ◽  
Lara Richer ◽  
...  

Biology has become a prime area for the deployment of deep learning and artificial intelligence (AI), enabled largely by the massive data sets that the field can generate. Key to most AI tasks is the availability of a sufficiently large, labeled data set with which to train AI models. In the context of microscopy, it is easy to generate image data sets containing millions of cells and structures. However, it is challenging to obtain large-scale high-quality annotations for AI models. Here, we present HALS (Human-Augmenting Labeling System), a human-in-the-loop data labeling AI, which begins uninitialized and learns annotations from a human, in real-time. Using a multi-part AI composed of three deep learning models, HALS learns from just a few examples and immediately decreases the workload of the annotator, while increasing the quality of their annotations. Using a highly repetitive use-case—annotating cell types—and running experiments with seven pathologists—experts at the microscopic analysis of biological specimens—we demonstrate a manual work reduction of 90.60%, and an average data-quality boost of 4.34%, measured across four use-cases and two tissue stain types.
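A human-in-the-loop labeling loop of the kind HALS implements — a model that starts uninitialized, proposes labels, and updates on each human correction — can be sketched with an incrementally trained classifier. This is a toy stand-in, not HALS itself; the data stream, the simulated "pathologist" labels, and the choice of SGDClassifier are all assumptions:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# simulated stream of cell feature vectors and expert labels
rng = np.random.default_rng(3)
stream = rng.normal(0, 1, (100, 5))
truth = (stream[:, 0] > 0).astype(int)         # stand-in "pathologist" labels

model = SGDClassifier(random_state=0)          # begins uninitialized
accepted = 0
for x, y in zip(stream, truth):
    x = x.reshape(1, -1)
    if hasattr(model, "classes_"):             # model has seen >= 1 example
        if model.predict(x)[0] == y:           # expert accepts the suggestion
            accepted += 1                      # -> annotator workload saved
    model.partial_fit(x, [y], classes=[0, 1])  # expert label updates the model
```

As the model improves, a growing fraction of its suggestions can simply be accepted, which is the mechanism behind the workload reduction the abstract reports.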


2021 ◽  
pp. 1-12
Author(s):  
Julia Fuchs ◽  
Olivia Nonn ◽  
Christine Daxboeck ◽  
Silvia Groiss ◽  
Gerit Moser ◽  
...  

Immunostaining in clinical routine and research highly depends on standardized staining methods and quantitative image analyses. We qualitatively and quantitatively compared antigen retrieval methods (no pretreatment, pretreatment with pepsin, and heat-induced pretreatment with pH 6 or pH 9) for 17 antibodies relevant for placenta and implantation diagnostics and research. Using our newly established, comprehensive automated quantitative image analysis approach, fluorescent signal intensities were evaluated. Automated quantitative image analysis found that 9 out of 17 antibodies needed antigen retrieval to show positive staining. Heat induction proved to be the most efficient form of antigen retrieval. Eight markers stained positive after pepsin digestion, with β-hCG and vWF showing enhanced staining intensities. To avoid the misinterpretation of quantitative image data, the qualitative aspect should always be considered. Results from native placental tissue were compared with sections of a placental invasion model based on thermo-sensitive scaffolds. Immunostaining on placentas in vitro leads to new insights into fetal development and maternal pathophysiological pathways, as pregnant women are justifiably excluded from clinical studies. Thus, there is a clear need for the assessment of reliable immunofluorescent staining and pretreatment methods. Our evaluation offers a powerful tool for antibody and pretreatment selection in placental research providing objective and precise results.


2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Lukasz Zwolinski ◽  
Marta Kozak ◽  
Karol Kozak

Technological advancements are constantly increasing the size and complexity of data resulting from large-scale RNA interference screens. This fact has led biologists to ask complex questions, which the existing, fully automated analyses are often not adequate to answer. We present the concept of 1Click1View (1C1V) as a methodology for interactive analytic software tools. 1C1V can be applied for two-dimensional visualization of image-based screening data sets from High Content Screening (HCS). Through an easy-to-use interface, a one-click, one-view concept, and a workflow-based architecture, the visualization method facilitates the linking of image data with numeric data. The method utilizes state-of-the-art interactive visualization tools optimized for fast visualization of large-scale image data sets. We demonstrate our method on an HCS dataset consisting of multiple cell features from two screening assays.


2012 ◽  
Vol 9 (2) ◽  
pp. 16-18 ◽  
Author(s):  
Christian Klukas ◽  
Jean-Michel Pape ◽  
Alexander Entzian

This work presents a sophisticated information system, the Integrated Analysis Platform (IAP), an approach supporting large-scale image analysis for different species and imaging systems. In its current form, IAP supports the investigation of maize, barley, and Arabidopsis plants based on images obtained in different spectra. Several components of the IAP system, which are described in this work, cover the complete end-to-end pipeline: from image transfer out of the imaging infrastructure, through (grid-distributed) image analysis and data management for raw data and analysis results, to the automated generation of experiment reports.


Author(s):  
Thomas Hierl ◽  
Heike Huempfner-Hierl ◽  
Daniel Kruber ◽  
Thomas Gaebler ◽  
Alexander Hemprich ◽  
...  

This chapter discusses the requirements of an image analysis tool designed for dentistry and oral and maxillofacial surgery, focusing on 3D image data. As no software is available for the analysis of all the different types of medical 3D data, a model software based on VTK (Visualization Toolkit) is presented. VTK is a free, modular software library that can be tailored to individual demands. First, the most important types of image data are shown, then the operations needed to handle the data sets. Metric analysis is covered in depth, as it forms the basis of orthodontic and surgical planning. Finally, typical examples from different fields of dentistry are given.


Author(s):  
FULIN LUO ◽  
JIAMIN LIU ◽  
HONG HUANG ◽  
YUMEI LIU

Locally linear embedding (LLE) depends on the Euclidean distance (ED) to select the k-nearest neighbors. However, the ED may not reflect the actual geometric structure of the data, which may lead to the selection of ineffective neighbors. The aim of our work is to make full use of the local spectral angle (LSA) to find proper neighbors for dimensionality reduction (DR) and classification of hyperspectral remote sensing data. First, we propose an improved LLE method, called local spectral angle LLE (LSA-LLE), for DR. It uses the ED of the data to obtain a large-scale neighbor set, then utilizes the spectral angle to select the exact neighbors from within that set. Furthermore, a local spectral angle-based nearest neighbor classifier (LSANN) is proposed for classification. Experiments on two hyperspectral image data sets demonstrate the effectiveness of the presented methods.
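The two-stage neighbor selection of LSA-LLE — Euclidean pre-selection of a large-scale neighbor set, then spectral-angle refinement — can be sketched as follows. This is a minimal NumPy illustration; the neighborhood sizes and toy data are assumed values:

```python
import numpy as np

def spectral_angle(x, Y):
    """Angle (radians) between spectrum x and each row of Y."""
    cos = (Y @ x) / (np.linalg.norm(Y, axis=1) * np.linalg.norm(x))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def lsa_neighbors(X, i, k_large=10, k=4):
    """Pick k neighbors of X[i]: Euclidean pre-selection, LSA refinement."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf                         # exclude the point itself
    coarse = np.argsort(d)[:k_large]      # large-scale ED neighbor set
    angles = spectral_angle(X[i], X[coarse])
    return coarse[np.argsort(angles)[:k]] # exact neighbors by spectral angle

# toy hyperspectral pixels: 30 samples x 8 bands
rng = np.random.default_rng(2)
X = rng.random((30, 8))
nbrs = lsa_neighbors(X, 0)
```

The spectral angle is scale-invariant, so the refinement step favors neighbors with similar spectral shape even when illumination differences inflate their Euclidean distance.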

