High-speed brightfield 3-D imaging and 3-D image analysis of thick and overlapped cell clusters in cytological preparations

Author(s):  
Robert W. Mackin

This paper presents two advances toward the automated three-dimensional (3-D) analysis of thick and heavily overlapped regions in cytological preparations such as cervical/vaginal smears. First, a high-speed 3-D brightfield microscope has been developed, allowing the acquisition of image data at speeds approaching 30 optical slices per second. Second, algorithms have been developed to detect and segment nuclei despite the extremely high image variability and low contrast typical of such regions. The analysis of such regions is inherently a 3-D problem that cannot be solved reliably with conventional 2-D imaging and image analysis methods. High-speed 3-D imaging of the specimen is accomplished by moving the specimen axially relative to the objective lens of a standard microscope (Zeiss) at a speed of 30 steps per second, with a step size adjustable from 0.2 to 5 μm. The specimen is mounted on a computer-controlled piezoelectric microstage (Burleigh PZS-100, 68 μm displacement). At each step, an optical slice is acquired using a CCD camera (SONY XC-11/71 IP, Dalsa CA-D1-0256, and CA-D2-0512 have been used) connected to a 4-node array processor system based on the Intel i860 chip.
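The step-and-grab acquisition loop described above can be sketched as follows. The `move_stage` and `grab_frame` callables are hypothetical stand-ins for the piezo-stage and CCD/frame-grabber interfaces, which are hardware-specific and not specified in the abstract:

```python
import numpy as np

def acquire_stack(n_slices, step_um, move_stage, grab_frame):
    """Acquire an axial image stack: step the stage, grab one optical
    slice per step, and assemble the slices into a 3-D volume.

    move_stage(z_um) and grab_frame() are placeholders for the
    piezo-stage and camera drivers (assumed interfaces, not real APIs).
    """
    slices = []
    for i in range(n_slices):
        move_stage(i * step_um)      # axial position of this optical slice
        slices.append(grab_frame())  # one 2-D slice from the CCD camera
    return np.stack(slices)          # shape (n_slices, height, width)

# Simulated hardware: a 64x64 sensor returning one pre-generated frame per call.
frames = iter(np.random.rand(20, 64, 64))
volume = acquire_stack(20, 0.5,
                       move_stage=lambda z_um: None,
                       grab_frame=lambda: next(frames))
```

At 30 steps per second a 20-slice stack like this one would take under a second to acquire, which is what makes in-focus 3-D scanning of many fields per slide practical.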

1984 ◽  
Vol 247 (3) ◽  
pp. E412-E419 ◽  
Author(s):  
L. S. Hibbard ◽  
R. A. Hawkins

Quantitative autoradiography is a powerful method for studying brain function by the determination of blood flow, glucose utilization, or transport of essential nutrients. Autoradiographic images contain vast amounts of potentially useful information, but conventional analyses can practically sample the data at only a small number of points arbitrarily chosen by the experimenter to represent discrete brain structures. To use image data more fully, computer methods for its acquisition, storage, quantitative analysis, and display are required. We have developed a system of computer programs that performs these tasks and has the following features: 1) editing and analysis of single images using interactive graphics, 2) an automatic image alignment algorithm that places images in register with one another using only the mathematical properties of the images themselves, 3) the calculation of mean images from equivalent images in different experimental serial image sets, 4) the calculation of difference images (e.g., experiment-minus-control) with the option to display only differences estimated to be statistically significant, and 5) the display of serial image metabolic maps reconstructed in three dimensions using a high-speed computer graphics system.
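The difference-image step (feature 4) can be sketched as a per-pixel two-sample comparison. The pixelwise t statistic used here is an illustrative stand-in for the paper's significance estimate, which is not specified in the abstract:

```python
import numpy as np

def significant_difference(exp_stack, ctl_stack, t_crit=2.0):
    """Pixelwise (experiment - control) mean difference, displayed only
    where a two-sample t statistic exceeds t_crit; zero elsewhere.

    exp_stack, ctl_stack: arrays of shape (n_images, H, W) holding the
    equivalent serial images from each experimental group.
    """
    n1, n2 = len(exp_stack), len(ctl_stack)
    m1, m2 = exp_stack.mean(axis=0), ctl_stack.mean(axis=0)
    v1, v2 = exp_stack.var(axis=0, ddof=1), ctl_stack.var(axis=0, ddof=1)
    se = np.sqrt(v1 / n1 + v2 / n2)            # per-pixel standard error
    t = np.divide(m1 - m2, se, out=np.zeros_like(se), where=se > 0)
    return np.where(np.abs(t) > t_crit, m1 - m2, 0.0)
```

Masking the difference image this way suppresses pixel noise and leaves only regions where the metabolic change is unlikely to be a sampling artifact.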


2013 ◽  
Vol 325-326 ◽  
pp. 1571-1575
Author(s):  
Fang Wang ◽  
Zong Wei Yang ◽  
De Ren Kong ◽  
Yun Fei Jia

Shadowgraphy is an important method for obtaining the flight characteristics of high-speed objects, such as attitude and speed. Extracting the contour information of an object and the coordinates of its feature points from a shadowgraph is a precondition for such analysis. Digital shadowgraph systems composed of a CCD camera and a pulsed laser source are now widely used, but corresponding image processing methods are still lacking, so the selection of an effective processing method that ensures both efficient and accurate interpretation of the image data is an urgent need. According to the features of shadowgraphs, this paper proposes a processing method that extracts the contour of a high-speed object by adaptive threshold segmentation following median filtering; the method is verified with OpenCV in a VC environment, and the feature points are identified. The results indicate that with this method the contours of high-speed objects can be detected reliably and that, combined with the relevant algorithms, the pixel coordinates of feature points such as the center of mass can be recognized accurately.
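The pipeline (median filter, then adaptive threshold, then center-of-mass of the segmented silhouette) can be sketched in plain numpy; in OpenCV the corresponding calls would be `cv2.medianBlur`, `cv2.adaptiveThreshold`, and moments of `cv2.findContours` output. The window sizes and offset below are illustrative choices, not values from the paper:

```python
import numpy as np

def median3(img):
    """3x3 median filter (edges handled by edge padding)."""
    p = np.pad(img, 1, mode="edge")
    H, W = img.shape
    windows = np.stack([p[i:i + H, j:j + W] for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

def adaptive_threshold(img, block=15, c=5.0):
    """Mark pixels darker than their local mean minus c.

    In a shadowgraph the object is the dark silhouette, so the test is
    img < local_mean - c (the inverse of the usual bright-object case).
    """
    p = np.pad(img, block // 2, mode="edge")
    H, W = img.shape
    local_mean = np.stack([p[i:i + H, j:j + W]
                           for i in range(block) for j in range(block)]).mean(axis=0)
    return img < local_mean - c

def center_of_mass(mask):
    """Pixel coordinates (x, y) of the centroid of the segmented object."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

# Synthetic shadowgraph: dark 10x10 silhouette on a bright field.
img = np.full((40, 40), 200.0)
img[10:20, 15:25] = 50.0
mask = adaptive_threshold(median3(img))
cx, cy = center_of_mass(mask)
```

Because the threshold adapts to the local mean, the segmentation tolerates the uneven illumination typical of pulsed-laser shadowgraphs better than a single global threshold would.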


Author(s):  
Mohamed B. Trabia ◽  
William Culbreth ◽  
Satishkumar Subramanian ◽  
Tsuyoshi Tajima

Superconducting niobium cavities are important components of linear accelerators. Buffered chemical polishing (BCP) of the inner surface of the cavity is a standard procedure to improve its performance. The quality of BCP, however, has not been well optimized in terms of the uniformity of surface smoothness. A finite element computational fluid dynamics (CFD) model was developed to simulate the chemical etching process inside the cavity. The analysis confirmed the observation of other researchers that the sections closer to the axis of the cavity receive more etching than other regions. A baffle was used by LANL personnel to direct the flow of the etching fluid toward the walls of the cavity. A new baffle design was obtained using optimization techniques. The redesigned baffle significantly improves the performance of the etching process. To verify these results, an experimental setup for flow visualization was created. The setup consists of a high-speed, high-resolution CCD camera positioned by a computer-controlled traversing mechanism. A dye-injecting arrangement is used for tracking the fluid path. Experimental results are in general agreement with computational findings.


eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Michael N Economo ◽  
Nathan G Clack ◽  
Luke D Lavis ◽  
Charles R Gerfen ◽  
Karel Svoboda ◽  
...  

The structure of axonal arbors controls how signals from individual neurons are routed within the mammalian brain. However, the arbors of very few long-range projection neurons have been reconstructed in their entirety, as axons with diameters as small as 100 nm arborize in target regions dispersed over many millimeters of tissue. We introduce a platform for high-resolution, three-dimensional fluorescence imaging of complete tissue volumes that enables the visualization and reconstruction of long-range axonal arbors. This platform relies on a high-speed two-photon microscope integrated with a tissue vibratome and a suite of computational tools for large-scale image data. We demonstrate the power of this approach by reconstructing the axonal arbors of multiple neurons in the motor cortex across a single mouse brain.


Author(s):  
A.F. de Jong ◽  
H. Coppoolse ◽  
U. Lücken ◽  
M.K. Kundmann ◽  
A.J. Gubbens ◽  
...  

Energy-filtered transmission electron microscopy (EFTEM) has many uses in the life sciences [1]. These include improved contrast for imaging unstained, cryo, or thick samples; improved diffraction for electron crystallography; and elemental mapping and analysis. We have developed a new system for biological EFTEM that combines advanced electron-optical performance with a high degree of automation. The system is based on the Philips CM series of microscopes and the Gatan post-column imaging filter (GIF). One combination particularly suited to the life sciences is that of the CM 120-BioTWIN and the GIF 100: the CM 120-BioFilter. The CM 120-BioTWIN is equipped with a high-contrast objective lens for biological samples. Its specially designed cold trap, together with low-dose software, supports full cryo-microscopy. The GIF 100 is corrected for second-order aberrations in both images and spectra. It produces images that are isochromatic to within 1.5 eV at 120 keV and distorted by less than 2% over 1k x 1k pixels. All the elements of the filter are computer controlled. Images and spectra are detected by a TV camera or a multi-scan CCD camera, both of which are incorporated in the filter. All filter and camera functions are controlled from Digital Micrograph running on an Apple Power Macintosh.


1999 ◽  
Vol 5 (S2) ◽  
pp. 524-525
Author(s):  
B. Roysam ◽  
A. Can ◽  
H. Shen ◽  
K. Al-Kofahi ◽  
J.N. Turner

This presentation will describe a common core set of widely applicable image analysis techniques for the automated quantitative analysis of volumetric microscope image data. Volumetric (as distinct from stereoscopic) three-dimensional (3-D) microscopy is a rapidly maturing field offering the ability to image thick (compared to the depth of field) specimens using a variety of instrumentation techniques, producing arrays of brightness values in three spatial dimensions. Also well developed are methods to correct the acquired images for a variety of physical effects, including blur and attenuation. Commonly, what is of interest is the best possible visualization of thick specimens. The next step, increasingly being considered in view of growing computational resources and progress in image analysis techniques, seeks to quantify many of the processes and effects being studied. In some mainstream fields, such quantitation is essential. For instance, various assays for substance testing in the pharmaceutical and chemical industries involve quantitative end points. As an illustration, the Draize assay for ocular irritancy testing of drugs and biochemical products for human use requires counting of live and dead cells that stain differently. Another example is the mouse lymphoma test, which requires 3-D counting of bacterial colonies. Neurobiological assays require morphometry, as well as quantification of changes in neurons as a function of time and various applied stimuli such as drugs, heat, and radiation. Angiogenesis assays require quantification of changes in vascular morphometry. Computerized image analysis is a powerful tool for extracting quantitative data from 3-D images for statistical analysis.
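The 3-D counting task mentioned for the mouse lymphoma test reduces to connected-component labeling of a thresholded volume. A minimal sketch using 6-connectivity (the abstract does not specify the connectivity or threshold rule; `scipy.ndimage.label` would be the usual library route):

```python
import numpy as np
from collections import deque

def count_objects_3d(volume, threshold):
    """Count connected foreground components in a 3-D volume using
    6-connectivity, e.g. stained colonies above an intensity threshold."""
    fg = volume > threshold
    seen = np.zeros(fg.shape, dtype=bool)
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    count = 0
    for start in zip(*np.nonzero(fg)):
        if seen[start]:
            continue
        count += 1                      # new component: flood-fill it
        seen[start] = True
        queue = deque([start])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in steps:
                nb = (z + dz, y + dy, x + dx)
                if (all(0 <= nb[i] < fg.shape[i] for i in range(3))
                        and fg[nb] and not seen[nb]):
                    seen[nb] = True
                    queue.append(nb)
    return count
```

Counting in 3-D rather than per-slice avoids the classic stereological error of counting one colony once per slice it intersects.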


2011 ◽  
Vol 71-78 ◽  
pp. 4269-4273
Author(s):  
Jie Jia ◽  
Jian Yong Lai ◽  
Gen Hua Zhang ◽  
Huan Ling

To deal with the large amounts of data and the complex computation involved in high-speed image acquisition, an FPGA-based image acquisition and preprocessing system is designed in this paper. In order to obtain continuous and complete image data streams, the design implements the acquisition of the CCD camera video signal and de-interlacing with a ping-pong cache. A fast median filtering algorithm is used for image preprocessing, and the preprocessed image data are finally displayed on a CRT. Experiments indicate that the design meets the requirements of image sampling quality while satisfying the real-time demand.
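The ping-pong cache alternates two field buffers so that acquisition can fill one while the other is read out and woven into a full frame. A minimal software analogue of the scheme (the FPGA would realize the two buffers in dual-port RAM; the field ordering below is an assumption):

```python
import numpy as np

def pingpong_deinterlace(fields):
    """De-interlace a stream of fields using two alternating buffers.

    `fields` is an iterable of interlaced fields in the order
    even-lines, odd-lines, even-lines, ... Each pair of fields is
    woven into one progressive frame.
    """
    frames = []
    buffers = [None, None]          # the two "ping-pong" field buffers
    for i, field in enumerate(fields):
        buffers[i % 2] = field      # write side fills one buffer
        if i % 2 == 1:              # read side: both fields now present
            even, odd = buffers
            frame = np.empty((even.shape[0] * 2, even.shape[1]), even.dtype)
            frame[0::2] = even      # weave even-numbered lines
            frame[1::2] = odd       # weave odd-numbered lines
            frames.append(frame)
    return frames
```

In hardware the benefit is that the write and read sides never touch the same buffer in the same field period, so the output stream stays continuous with no dropped lines.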


1979 ◽  
Vol 27 (1) ◽  
pp. 604-612 ◽  
Author(s):  
W Abmayr ◽  
G Burger ◽  
H J Soost

Two methods for high-resolution cell image data acquisition are applied routinely. Cells are scanned either by a computer-controlled fast scanning microscope photometer (SMP) or by a TV camera. The software system for digital image analysis was completely revised and implemented on the PR 330 minicomputer. The system contains codes for primary cell data acquisition, segmentation of cells, cell feature extraction, and statistical cell analysis. With this system, SMP- and TV-scanned cell data bases of PAP-stained cells in vaginal smears, grouped into several classes, have been built up. Each data base contains 34 primary features and 20 feature combinations for each cell. A linear discriminant analysis is applied routinely for cell classification. The present state of the system and its operation are described, cell features and classification results are shown, and future steps for a prescreening strategy are discussed.
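The linear discriminant step can be sketched for the two-class case with plain numpy (Fisher's criterion; the paper's 34-feature, multi-class setting uses the same machinery, and the ridge term below is an added numerical safeguard, not part of the classical formulation):

```python
import numpy as np

def fisher_lda(X0, X1):
    """Fit a two-class Fisher linear discriminant.

    X0, X1: (n_samples, n_features) feature arrays for the two cell
    classes. Returns (w, b) so that a cell x is assigned class 1 when
    x @ w > b, i.e. the threshold lies midway between projected means.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    ridge = 1e-6 * np.eye(len(m0))          # keeps Sw invertible
    w = np.linalg.solve(Sw + ridge, m1 - m0)
    b = w @ (m0 + m1) / 2.0
    return w, b
```

The discriminant direction maximizes between-class separation relative to within-class scatter, which is why it remains a standard baseline for cell classification from handcrafted features.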


2016 ◽  
Author(s):  
Toshifumi Mukunoki ◽  
Yoshihisa Miyata ◽  
Kazuaki Mikami ◽  
Erika Shiota

Abstract. The development of micro-focused X-ray CT devices enables digital imaging analysis at the pore scale. Applications have been diverse, for instance in soil mechanics, geotechnical and geoenvironmental engineering, petroleum engineering, and agricultural engineering. In particular, imaging of the pore space of porous media has contributed three-dimensional image data to numerical simulations of single- and multi-phase flow and of contaminant transport through the pore structure. The results obtained are affected by the pore diameter, so it is necessary to verify the image pre-processing used for image analysis and to validate the pore diameters obtained from the CT image data. Moreover, it is meaningful to derive the parameters within a representative element volume (REV), and significant to define the dimension of the REV. This paper describes the underlying method of image processing and analysis and discusses the physical properties of Toyoura sand for the verification of the image analysis based on the definition of the REV. Based on the verification results, pore diameter analysis can be conducted and validated by comparing the experimental work and the image analysis. The pore diameter was deduced from Laplace's law and a water retention test for the drainage process. The reference results and the perforated pore diameters obtained by the method proposed originally in this study, called the voxel-percolation method (VPM), are compared. The paper describes the limitations of the REV, the definition of the pore diameter, and the effectiveness of the VPM for the assessment of pore diameter.
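The Laplace's-law step converts a measured matric suction into an equivalent pore diameter. A minimal sketch, assuming a cylindrical-pore capillary model with water at roughly 20 °C (surface tension σ ≈ 0.0728 N/m) and a contact angle of 0°; the paper's actual fluid parameters are not given in the abstract:

```python
import math

def pore_diameter_from_suction(suction_pa, sigma=0.0728, contact_angle_deg=0.0):
    """Equivalent pore diameter (m) from matric suction (Pa).

    Capillary form of Laplace's law for a cylindrical pore:
        suction = 4 * sigma * cos(theta) / d
    solved for the diameter d.
    """
    return 4.0 * sigma * math.cos(math.radians(contact_angle_deg)) / suction_pa

# Example: a suction of 2912 Pa corresponds to a 100 um pore diameter.
d = pore_diameter_from_suction(2912.0)
```

Each point on the drainage branch of the water retention curve thus maps to the largest pore diameter still drained at that suction, which is what the image-derived (VPM) diameters are compared against.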
