Algorithms for automated montage synthesis of images from laser-scanning confocal microscopes

Author(s):  
D. E. Becker

An efficient, robust, and widely applicable technique is presented for computational synthesis of high-resolution, wide-area images of a specimen from a series of overlapping partial views. This technique can also be used to combine the results of various forms of image analysis, such as segmentation, automated cell counting, deblurring, and neuron tracing, to generate representations that are equivalent to processing the large wide-area image rather than the individual partial views. This can be a first step towards quantitation of higher-level tissue architecture. The computational approach overcomes mechanical limitations of microscope stages, such as hysteresis and backlash, and automates a procedure that is currently done manually. One application is the high-resolution visualization and/or quantitation of large batches of specimens that are much wider than the field of view of the microscope. The automated montage synthesis begins by computing a concise set of landmark points for each partial view. The type of landmarks used can vary greatly depending on the images of interest. In many cases, image analysis performed on each data set can provide useful landmarks. Even when no such “natural” landmarks are available, image processing can often provide useful landmarks.
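The landmark-extraction step lends itself to a short illustration. The sketch below, which is not the authors' implementation, uses a Laplacian-of-Gaussian blob detector to pick out bright, roughly cell-sized features in one partial view as landmark points; the function name, library choice, and parameter values are all assumptions.

```python
# Hypothetical sketch: extract blob-like landmarks (e.g., cell nuclei) from one
# partial view with a Laplacian-of-Gaussian detector. Not the paper's code.
import numpy as np
from skimage.feature import blob_log

def extract_landmarks(view, min_sigma=3, max_sigma=12, threshold=0.05):
    """Return an (N, 2) array of (row, col) landmark coordinates for one view."""
    view = view.astype(float)
    view = (view - view.min()) / (np.ptp(view) + 1e-9)       # normalise to [0, 1]
    blobs = blob_log(view, min_sigma=min_sigma, max_sigma=max_sigma,
                     threshold=threshold)                     # rows: (row, col, sigma)
    return blobs[:, :2]
```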

Author(s):  
D.E. Becker ◽  
H. Ancin ◽  
B. Roysam ◽  
J.N. Turner

We present an efficient, robust, and widely applicable technique for computational synthesis of wide-area images from a series of overlapping partial views. The synthesized image is the set union of the areas covered by the partial views, and is called the “mosaic”. One application is the laser-scanning confocal microscopy of specimens that are much wider than the field of view of the microscope. Another is imaging of the retinal periphery using a standard fundus imager. This technique can also be used to combine the results of various forms of image analysis, such as cell counting and neuron tracing, to generate large representations that are equivalent to processing the total mosaic rather than the individual partial views. The synthesis begins by computing a concise set of landmark points for each partial view. The type of landmarks used can vary greatly depending on the application. For instance, in the retinal imaging application, the vascular branching and crossover points are a natural choice. Likewise, the locations of cells in Figs. 1 and 2 provide a natural set of landmarks for joining these images.
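Once landmarks have been matched across two overlapping views, the geometric relationship between the views can be estimated by a standard least-squares fit. The sketch below shows the classic Procrustes/Kabsch solution for a 2-D rigid transform from corresponding landmark pairs; it is a generic illustration of this step, not the paper's own estimator, and the correspondence search is assumed to have been done already.

```python
# Minimal sketch: estimate the rigid transform (rotation R, translation t) that
# maps landmark coordinates in view A onto view B, given matched pairs.
import numpy as np

def estimate_rigid_transform(pts_a, pts_b):
    """pts_a, pts_b: (N, 2) arrays of corresponding landmarks. Returns (R, t)."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)      # centroids
    H = (pts_a - ca).T @ (pts_b - cb)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# A landmark p in view A then maps to R @ p + t in the frame of view B.
```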


Author(s):  
R. Tenschert

Abstract. While non-destructive 3D technologies offer outstanding possibilities for analysing shape and similarities in architectural details, and for monitoring weathering effects, they have so far been used only rarely for these purposes. This paper shows the application and analysis of high-resolution, handheld, optically tracked laser scanning on an inscription at the cathedral of Notre Dame in Paris. The transept’s south façade carries a Latin inscription dating from 1258, and the common research opinion is that the inscription was copied and renewed during the mid-19th-century restoration. In the course of an on-site research campaign, some doubt as to the veracity of this theory arose. Essential questions regarding the inscription concern the workflows of both the medieval craftsmen and those of the 19th century. The project’s aim was to analyse the inscription for its shape and for any traces left by the craftsmen. Another key question focussed on the originality and authenticity of the inscription. The analysis of the high-resolution 3D data set has confirmed the initial visual impression of differences between the stones and shown that most of the inscription is the 13th-century original, with only a few parts replaced. The analysis also revealed that the ribbon and the letters must have been carved before the stones were placed. An investigation using historical transcripts, comparative examples and contextual reflections, together with a detailed analysis of the individual letters, also revealed possible changes in the wording of the inscription made during the restoration. A discussion of the possible variants, supported by virtual visualisations, is also presented.


2000 ◽  
Vol 20 (1) ◽  
pp. 7-15 ◽  
Author(s):  
R. Heintzmann ◽  
G. Kreth ◽  
C. Cremer

Fluorescent confocal laser scanning microscopy allows improved imaging of microscopic objects in three dimensions. However, the resolution along the axial direction is three times worse than the resolution in the lateral directions. One way to overcome this axial limitation is to tilt the object under the microscope, so that the optical axis points in different directions relative to the sample. A new technique for simultaneous reconstruction from a number of such axial tomographic confocal data sets was developed and used for high-resolution reconstruction of 3D data, both from experimental and virtual microscopic data sets. The reconstructed images have a greatly improved 3D resolution, which is comparable to the lateral resolution of a single deconvolved data set. Axial tomographic imaging in combination with simultaneous data reconstruction also opens the possibility for more precise quantification of 3D data. The color images of this publication can be accessed from http://www.esacp.org/acp/2000/20-1/heintzmann.htm. At this web address an interactive 3D viewer is additionally provided for browsing the 3D data. This Java applet displays three orthogonal slices of the data set, which are dynamically updated by user mouse clicks or keystrokes.
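As a rough geometric illustration of how tilted acquisitions are brought together, the sketch below rotates each volume back into a common reference frame and averages them. The paper's method performs a simultaneous (joint) reconstruction rather than simple averaging, so this is only a minimal sketch of the registration geometry; the angle convention and axis choice are assumptions.

```python
# Minimal sketch (not the paper's reconstruction algorithm): bring several
# axial-tomographic acquisitions of the same object into a common frame by
# undoing the known tilt, then fuse them by averaging.
import numpy as np
from scipy.ndimage import rotate

def fuse_axial_tomographic_views(volumes, tilt_angles_deg, tilt_axes=(0, 1)):
    """volumes: list of 3-D arrays acquired at known tilt angles (degrees).
    Returns the mean of all volumes after rotating each back to 0 degrees."""
    aligned = [rotate(vol, -angle, axes=tilt_axes, reshape=False, order=1)
               for vol, angle in zip(volumes, tilt_angles_deg)]
    return np.mean(aligned, axis=0)
```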


2021 ◽  
Vol 17 (11) ◽  
pp. e1008946
Author(s):  
Niksa Praljak ◽  
Shamreen Iram ◽  
Utku Goreke ◽  
Gundeep Singh ◽  
Ailis Hill ◽  
...  

Sickle cell disease, a genetic disorder affecting a sizeable global demographic, manifests in sickle red blood cells (sRBCs) with altered shape and biomechanics. sRBCs show heightened adhesive interactions with inflamed endothelium, triggering painful vascular occlusion events. Numerous studies employ microfluidic-assay-based monitoring tools to quantify characteristics of adhered sRBCs from high-resolution channel images. The current image analysis workflow relies on detailed morphological characterization and cell counting by a specially trained worker. This is time- and labor-intensive, and prone to user bias artifacts. Here we establish a morphology-based classification scheme to identify two naturally arising sRBC subpopulations—deformable and non-deformable sRBCs—utilizing novel visual markers that link to underlying cell biomechanical properties and hold promise for clinically relevant insights. We then set up a standardized, reproducible, and fully automated image analysis workflow designed to carry out this classification. This relies on a two-part deep neural network architecture that works in tandem for segmentation of channel images and classification of adhered cells into subtypes. Network training utilized an extensive data set of images generated by the SCD BioChip, a microfluidic assay which injects clinical whole blood samples into protein-functionalized microchannels, mimicking physiological conditions in the microvasculature. Here we carried out the assay with the sub-endothelial protein laminin. The machine learning approach segmented the resulting channel images with 99.1±0.3% mean IoU on the validation set across 5 k-folds, classified detected sRBCs with 96.0±0.3% mean accuracy on the validation set across 5 k-folds, and matched trained personnel in overall characterization of whole channel images with R² = 0.992, 0.987 and 0.834 for total, deformable and non-deformable sRBC counts respectively. Average analysis time per channel image was also improved by two orders of magnitude (∼ 2 minutes vs ∼ 2-3 hours) over manual characterization. Finally, the network results show an order of magnitude less variance in counts on repeat trials than humans. This kind of standardization is a prerequisite for the viability of any diagnostic technology, making our system suitable for affordable and high-throughput disease monitoring.
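The two-stage flow described above (segment the channel image, then classify each detected cell) can be sketched as follows. `seg_model` and `cls_model` stand in for the trained networks, whose architectures are not reproduced here; the cropping logic and helper names are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of a segment-then-classify pipeline for one channel image.
import numpy as np
from collections import Counter
from skimage.measure import label, regionprops

def analyse_channel_image(image, seg_model, cls_model, crop_size=64):
    """Return per-class counts of adhered sRBCs in one channel image."""
    mask = seg_model(image) > 0.5                        # binary foreground mask
    half = crop_size // 2
    padded = np.pad(image, half, mode="reflect")         # allow crops at the border
    counts = Counter()
    for region in regionprops(label(mask)):
        r, c = (int(round(x)) for x in region.centroid)
        crop = padded[r:r + crop_size, c:c + crop_size]  # crop centred on the cell
        counts[cls_model(crop)] += 1                     # e.g. "deformable"
    return counts
```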


1981 ◽  
Vol 13 (1) ◽  
pp. 49-55 ◽  
Author(s):  
P.F. Mullaney ◽  
M. Achatz ◽  
G. Seger ◽  
W. Heinze ◽  
F. Sinsel ◽  
...  

2021 ◽  
Author(s):  
Jakob J. Assmann ◽  
Jesper E. Moeslund ◽  
Urs A. Treier ◽  
Signe Normand

Abstract. Biodiversity studies could strongly benefit from three-dimensional data on ecosystem structure derived from contemporary remote sensing technologies, such as Light Detection and Ranging (LiDAR). Despite the increasing availability of such data at regional and national scales, most ecologists have had limited access to them because of the computing power and remote-sensing expertise required. We processed Denmark's publicly available national Airborne Laser Scanning (ALS) data set acquired in 2014/15, together with the accompanying elevation model, to compute 70 rasterized descriptors of interest for ecological studies. With a grain size of 10 m, these data products provide a snapshot of high-resolution measures including vegetation height, structure and density, as well as topographic descriptors including elevation, aspect, slope and wetness, across more than forty thousand square kilometres covering almost all of Denmark's terrestrial surface. The resulting data set is comparatively small (~ 87 GB, compressed 16.4 GB) and the raster data can be readily integrated into analytical workflows in software familiar to many ecologists (GIS software, R, Python). Source code and documentation for the processing workflow are openly available via a code repository, allowing for transfer to other ALS data sets, as well as modification or re-calculation of future instances of Denmark’s national ALS data set. We hope that our high-resolution ecological vegetation and terrain descriptors (EcoDes-DK15) will serve as an inspiration for the publication of further such data sets covering other countries and regions, and that our rasterized data set will provide a baseline of ecosystem structure for current and future studies of biodiversity, within Denmark and beyond.
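As a minimal illustration of the kind of rasterized descriptor in this data set, the sketch below grids ALS point heights (here, height above terrain) into 10 m cells and records the maximum per cell, roughly corresponding to a vegetation-height layer. The variable names and normalisation step are assumptions; the actual EcoDes-DK15 workflow is available from the code repository mentioned above.

```python
# Minimal sketch: rasterise normalised ALS point heights to a 10 m grid of
# maximum vegetation height. Not the EcoDes-DK15 processing code.
import numpy as np

def max_height_raster(x, y, height_above_ground, cell_size=10.0):
    """x, y: point coordinates (m); height_above_ground: normalised heights (m).
    Returns (raster, x_min, y_min) where raster[i, j] is the max height in a cell."""
    x_min, y_min = x.min(), y.min()
    cols = ((x - x_min) // cell_size).astype(int)
    rows = ((y - y_min) // cell_size).astype(int)
    raster = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, h in zip(rows, cols, height_above_ground):
        if np.isnan(raster[r, c]) or h > raster[r, c]:
            raster[r, c] = h
    return raster, x_min, y_min
```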


Author(s):  
H. Ancin ◽  
B. Roysam ◽  
M.H. Chestnut ◽  
T.E. Otte ◽  
D.H. Szarowski ◽  
...  

This paper presents recent advances in automated three-dimensional (3-D) image analysis methods for cell counting and quantitative measurement of various nuclear properties in thick (30-120 μm) tissue sections that are imaged by a laser-scanning confocal microscope. The technical advances reported here are: (i) improved 3-D nuclear separation methods for analyzing samples containing large numbers of nuclei per unit volume, and large connected clusters; (ii) methods for adapting the image analysis system to handle a much larger variety of specimens with greater variability in image parameters, such as intensities, nuclear shapes and sizes; and (iii) methods for assisting a user in selecting parameter inputs to the counting system. Improved 3-D nuclear separation was achieved by computing image gradients from each optical slice. Perona and Malik’s algorithm was used to enhance the image gradient at true nuclear boundaries while suppressing the undesirable intra-nuclear image gradients. The result was used to compute a new proximity index for the partitional cluster analysis method.
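The gradient-enhancement step refers to Perona and Malik's anisotropic diffusion, a well-known scheme that smooths an image while preserving strong edges. A minimal 2-D version is sketched below for a single optical slice; the parameter values are illustrative and this is not the authors' exact implementation.

```python
# Minimal sketch of Perona-Malik anisotropic diffusion on one 2-D optical slice.
# The conduction coefficient suppresses diffusion across strong edges (nuclear
# boundaries) while smoothing weak intra-nuclear gradients. np.roll gives
# wrap-around boundaries, which is acceptable for a sketch.
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # one-sided differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # conduction coefficients g(|grad|) = exp(-(|grad| / kappa) ** 2)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```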


2021 ◽  
Vol 13 (3) ◽  
pp. 476
Author(s):  
Barbara D’hont ◽  
Kim Calders ◽  
Harm Bartholomeus ◽  
Tim Whiteside ◽  
Renee Bartolo ◽  
...  

Termite mounds are found over vast areas in northern Australia, delivering essential ecosystem services, such as enhancing nutrient cycling and promoting biodiversity. Currently, the detection of termite mounds over large areas requires airborne laser scanning (ALS) or high-resolution satellite data, which lack precise information on termite mound shape and size. For detailed structural measurements, we generally rely on time-consuming field assessments that can only cover a limited area. In this study, we explore whether unmanned aerial vehicle (UAV)-based observations can serve as a precise and scalable tool for termite mound detection and morphological characterisation. We collected a unique data set of terrestrial laser scanning (TLS) and UAV laser scanning (UAV-LS) point clouds of a woodland savanna site in Litchfield National Park (Australia). We developed an algorithm that uses several empirical parameters for the semi-automated detection of termite mounds from UAV-LS data and used the TLS data set (1 ha) for benchmarking. We detected 81% and 72% of the termite mounds in the high-resolution (1800 points m⁻²) and low-resolution (680 points m⁻²) UAV-LS data, respectively, resulting in an average detection of eight mounds per hectare. Additionally, we successfully extracted information about mound height and volume from the UAV-LS data. The high-resolution data set resulted in more accurate estimates; however, there is a trade-off between area and detectability when choosing the required resolution for termite mound detection. Our results indicate that UAV-LS data can be rapidly acquired and used to monitor and map termite mounds over relatively large areas with higher spatial detail compared to airborne and spaceborne remote sensing.
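The paper's detector relies on several empirical parameters that are not reproduced here, so the sketch below shows only one plausible way such a detector could be structured: rasterise ground heights, subtract a heavily smoothed terrain surface, and keep connected protrusions whose height and footprint exceed thresholds. All names and parameter values are assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of a mound detector on a rasterised ground surface.
import numpy as np
from scipy.ndimage import uniform_filter, label

def detect_mounds(ground_dem, cell_size=0.25, min_height=0.3, min_area_m2=0.5):
    """ground_dem: 2-D array of ground heights (m) on a regular grid."""
    smoothed = uniform_filter(ground_dem, size=int(10.0 / cell_size))  # ~10 m window
    residual = ground_dem - smoothed                  # local protrusions above terrain
    mounds, n = label(residual > min_height)          # connected candidate regions
    min_cells = int(min_area_m2 / cell_size ** 2)
    keep = [i for i in range(1, n + 1) if np.sum(mounds == i) >= min_cells]
    return mounds, keep                               # label image + surviving labels
```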


Author(s):  
Haidar Almubarak ◽  
Peng Guo ◽  
R. Joe Stanley ◽  
Rodney Long ◽  
Sameer Antani ◽  
...  

In prior research, the authors introduced an automated, localized, fusion-based approach for classifying squamous epithelium into Normal, CIN1, CIN2, and CIN3 grades of cervical intraepithelial neoplasia (CIN) from digitized histology image analysis. The image analysis approach partitioned the epithelium along the medial axis into ten vertical segments. Texture, cellularity, nuclear characterization and distribution, and acellular features were computed from each vertical segment. The individual vertical segments were CIN classified, and the individual classifications were fused to generate an image-based CIN assessment. In this chapter, image analysis techniques are investigated to improve the execution time and the CIN classification accuracy of the baseline algorithms. For an experimental data set of 117 digitized histology images, execution time was improved by 32.32 seconds without loss of exact-grade CIN classification accuracy (80.34% vs. 79.49% previously reported) for this same data set.
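The segment-then-fuse scheme described above can be illustrated with a short sketch: split the epithelium image into ten vertical segments, classify each segment, and fuse the per-segment labels into one image-level grade. `classify_segment` stands in for the per-segment classifier, and majority voting is used here only as a stand-in for the authors' fusion rule.

```python
# Illustrative sketch of segment-wise CIN classification with a simple
# majority-vote fusion (the authors' fusion method may differ).
import numpy as np
from collections import Counter

def classify_image(epithelium_img, classify_segment, n_segments=10):
    """epithelium_img: 2-D or 3-D image array with the epithelium running along columns."""
    segments = np.array_split(epithelium_img, n_segments, axis=1)
    labels = [classify_segment(seg) for seg in segments]    # e.g. "Normal", "CIN1", ...
    return Counter(labels).most_common(1)[0][0]              # majority vote
```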

