Multithreaded two-pass connected components labelling and particle analysis in ImageJ

2021
Vol 8 (3)
Author(s):  
Michael Doube

Sequential region labelling, also known as connected components labelling, is a standard image segmentation problem that joins contiguous foreground pixels into blobs. Despite its long development history and widespread use across diverse domains such as bone biology, materials science and geology, connected components labelling can still form a bottleneck in image processing pipelines. Here, I describe a multithreaded implementation of classical two-pass sequential region labelling and introduce an efficient collision resolution step, ‘bucket fountain’. Code was validated on test images and against commercial software (Avizo). It was performance tested on images from 2 MB (161 particles) to 6.5 GB (437,508 particles) to determine whether theoretical linear scaling (O(n)) had been achieved, and on 1–40 CPU threads to measure speed improvements due to multithreading. The new implementation achieves linear scaling (b = 0.905–1.052, time ∝ pixels^b; R² = 0.985–0.996), which improves with increasing thread number up to 8–16 threads, suggesting that it is memory bandwidth limited. This new implementation of sequential region labelling reduces the time required from hours to a few tens of seconds for images of several GB, and is limited only by hardware scale. It is available open source and free of charge in BoneJ.
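
The two-pass scheme itself is compact. Below is a minimal, single-threaded Python sketch of classical two-pass labelling with union-find collision resolution (4-connectivity, 2D). The multithreaded chunking and the ‘bucket fountain’ merge described in the paper are not reproduced here, and all names are illustrative rather than BoneJ's API.

```python
import numpy as np

def find(parent, x):
    # Path-halving find: follow parents to the root label.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def two_pass_label(img):
    """Classical two-pass connected components labelling (4-connectivity).

    img: 2D numpy array, nonzero = foreground.
    Returns an int array of component labels (0 = background).
    """
    h, w = img.shape
    labels = np.zeros((h, w), dtype=np.int64)
    parent = [0]          # index 0 is the background label
    next_label = 1

    # First pass: assign provisional labels, record label collisions.
    for y in range(h):
        for x in range(w):
            if not img[y, x]:
                continue
            left = labels[y, x - 1] if x > 0 else 0
            up = labels[y - 1, x] if y > 0 else 0
            if left == 0 and up == 0:
                labels[y, x] = next_label
                parent.append(next_label)
                next_label += 1
            elif left and up and left != up:
                # Collision: merge the two equivalence classes.
                a, b = find(parent, left), find(parent, up)
                root = min(a, b)
                parent[a] = parent[b] = root
                labels[y, x] = root
            else:
                labels[y, x] = max(left, up)

    # Second pass: replace each provisional label by its root.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(parent, labels[y, x])
    return labels
```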


Author(s):  
Javier Prades
Baldomero Imbernón
Carlos Reaño
Jorge Peña-García
Jose Pedro Cerón-Carrasco
...  

A full understanding of the dynamics of molecular systems at the atomic scale is of great relevance in chemistry, physics, materials science, and drug discovery, to name just a few fields. Molecular dynamics (MD) is a widely used computer tool for simulating the dynamical behavior of molecules. However, the computational horsepower required by MD simulations is too high to obtain conclusive results in real-world scenarios. This is mainly due to two factors: (1) the long execution time required by each MD simulation (usually on the nanosecond to microsecond scale, and beyond) and (2) the large number of simulations required in drug discovery to study the interactions between a large library of compounds and a given protein target. To deal with the former, graphics processing units (GPUs) have come onto the scene. The latter has traditionally been approached by launching large numbers of simulations in computing clusters that may contain several GPUs on each node. However, each GPU is typically bound to a single MD instance at a time, which translates into low GPU occupancy ratios and therefore low throughput. In this work, we propose a strategy to increase the overall throughput of MD simulations by increasing GPU occupancy through virtualized GPUs. We use the remote CUDA (rCUDA) middleware as a tool to decouple GPUs from CPUs, thus enabling multi-tenancy of the virtual GPUs. As a working test in the drug discovery field, we studied the binding process of a novel flavonol to DNA with the GROningen MAchine for Chemical Simulations (GROMACS) MD package. Our results show that rCUDA provides a 1.21× speed-up over the CUDA counterpart while requiring a similar power budget.
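
The throughput idea is easy to mimic at the job level: run several MD instances concurrently against the same (virtualized) GPU. The sketch below assumes rCUDA-style environment variables (names follow rCUDA's documentation but should be verified against the installed version) and a hypothetical server name; `gmx mdrun` is the standard GROMACS entry point.

```python
import os
import subprocess

# Assumed rCUDA-style environment: every MD instance sees the same remote
# GPU exported by an rCUDA server. Variable names follow rCUDA's docs but
# should be checked for the installed version; the server name is made up.
env = os.environ.copy()
env["RCUDA_DEVICE_COUNT"] = "1"
env["RCUDA_DEVICE_0"] = "gpu-server:0"

# Launch several GROMACS runs concurrently so the virtual GPU stays busy.
runs = ["lig_a", "lig_b", "lig_c", "lig_d"]  # hypothetical systems
procs = [subprocess.Popen(["gmx", "mdrun", "-deffnm", name], env=env)
         for name in runs]
for p in procs:
    p.wait()
```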


MRS Bulletin
1996
Vol 21 (2)
pp. 17-19
Author(s):  
Arthur F. Voter

Atomistic simulations are playing an increasingly prominent role in materials science. From relatively conventional studies of point and planar defects to large-scale simulations of fracture and machining, atomistic simulations offer a microscopic view of the physics that cannot be obtained from experiment. Predictions resulting from this atomic-level understanding are proving increasingly accurate and useful. Consequently, the field of atomistic simulation is gaining ground as an indispensable partner in materials research, a trend that can only continue. Each year, computers gain roughly a factor of two in speed. With the same effort one can then simulate a system with twice as many atoms or integrate a molecular-dynamics trajectory for twice as long. Perhaps even more important, however, are the theoretical advances occurring in the description of the atomic interactions, the so-called “interatomic potential” function. The interatomic potential underpins any atomistic simulation. The accuracy of the potential dictates the quality of the simulation results, and its functional complexity determines the amount of computer time required. Recent developments that fit more physics into a compact potential form are increasing the accuracy available per simulation dollar. This issue of MRS Bulletin offers an introductory survey of interatomic potentials in use today, as well as the types of problems to which they can be applied. This is by no means a comprehensive review. It would be impractical here to attempt to present all the potentials that have been developed in recent years. Rather, this collection of articles focuses on a few important forms of potential spanning the major classes of materials bonding: covalent, metallic, and ionic.
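
To make “interatomic potential” concrete: the Lennard-Jones pair form below is the simplest member of the family such articles survey; the covalent, metallic, and ionic potentials discussed in the issue add many-body physics on top of pair terms like this. Lennard-Jones is used here purely as a minimal illustration, not as one of the issue's featured potentials.

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    # Pair energy U(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6].
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def total_energy(positions, epsilon=1.0, sigma=1.0):
    # Reference O(N^2) sum over all unique pairs; production MD codes
    # use cutoffs and neighbour lists to bring this towards O(N).
    n = len(positions)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            e += lennard_jones(r, epsilon, sigma)
    return e

# Dimer at the potential minimum r = 2^(1/6)*sigma has energy -epsilon:
atoms = [np.zeros(3), np.array([2.0 ** (1 / 6), 0.0, 0.0])]
print(total_energy(atoms))  # ≈ -1.0
```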


2011
Vol 20 (3)
pp. 163
Author(s):  
Charles Kervrann

Bayesian statistical theory is a convenient way of taking a priori information into consideration when inference is made from images. In Bayesian image segmentation, the a priori distribution should capture the knowledge about objects. Taking inspiration from Alvarez et al. (1999), we design a prior density that penalizes the area of homogeneous parts in images. The segmentation problem is then formulated as the estimation of the set of curves that maximizes the posterior distribution. In this paper, we explore a posterior distribution model whose maximal mode is given by a subset of level curves, that is, the boundaries of image level sets. For completeness, we present a stepwise greedy algorithm for computing partitions with connected components.
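
In symbols, and with generic notation assumed here rather than taken from the paper, the formulation reads:

```latex
% I: observed image; C: set of closed curves; \Omega_k: the homogeneous
% regions they bound; \lambda > 0: prior weight. MAP segmentation with
% an area-penalizing prior (generic notation, not the paper's):
\hat{C} = \arg\max_{C} \, p(C \mid I)
        = \arg\max_{C} \, p(I \mid C)\, p(C),
\qquad
p(C) \propto \exp\!\Big(-\lambda \sum_{k} |\Omega_k|\Big).
```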


2014
Vol 1 (4)
pp. 604-617
Author(s):  
Lin-Wang Wang

Recent developments in large-scale materials science simulations, especially under the divide-and-conquer method, are reviewed. The pros and cons of the divide-and-conquer method are discussed. It is argued that the divide-and-conquer method, such as the linear-scaling 3D fragment (LS3DF) method, is an ideal approach for taking advantage of the heterogeneous architectures of modern-day supercomputers, despite its relatively large prefactor among linear-scaling methods. Some developments in graphics processing unit (GPU) electronic structure calculations are also reviewed. Accelerators like GPUs could be an essential part of future exascale supercomputing.
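
The divide-and-conquer pattern can be sketched schematically: solve fragments independently (which parallelizes trivially across nodes or GPUs), then correct for overlaps so shared atoms are not double-counted. This is only a cartoon with placeholder energies; the actual LS3DF bookkeeping over overlapping fragment classes is more elaborate.

```python
from concurrent.futures import ProcessPoolExecutor

def solve_fragment(atoms):
    # Stand-in for an independent electronic-structure solve on one
    # fragment; returns a placeholder energy so the sketch runs.
    return -1.0 * len(atoms)

def dc_total_energy(fragments, overlaps):
    # Fragments are solved independently (embarrassingly parallel),
    # then overlap-region energies are subtracted (schematic only).
    with ProcessPoolExecutor() as pool:
        frag_e = sum(pool.map(solve_fragment, fragments))
        over_e = sum(pool.map(solve_fragment, overlaps))
    return frag_e - over_e

if __name__ == "__main__":
    fragments = [list(range(10)), list(range(8, 20))]  # toy fragments
    overlaps = [list(range(8, 10))]                    # shared atoms
    print(dc_total_energy(fragments, overlaps))
```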


Author(s):  
Charles Turnbill
Delbert E. Philpott

The advent of the scanning electron microscope (SEM) has renewed interest in preparing specimens by avoiding the forces of surface tension. The present method of freeze drying by Boyde and Barger (1969) and Small and Marszalek (1969) does prevent surface tension, but ice crystal formation and the time required to pump the specimen to dryness have discouraged us. We believe an attractive alternative to freeze drying is the critical point method originated by Anderson (1951) for electron microscopy. He avoided surface tension effects during drying by first exchanging the specimen water with alcohol, then amyl acetate, and then with carbon dioxide. He then selected a specific temperature (36.5°C) and pressure (72 atm) at which carbon dioxide would pass from the liquid to the gaseous phase without the effect of surface tension. This combination of temperature and pressure is known as the "critical point" of the liquid.


Author(s):  
C. Colliex
P. Trebbia

The physical foundations for the use of electron energy loss spectroscopy for analytical purposes now seem rather well established and have been extensively discussed in recent publications. In this brief review we intend only to mention the most recent developments in this field that have come to our knowledge. We also draw some lines of discussion to define more clearly the limits of this analytical technique in materials science problems. The spectral information carried in both the low (0 < ΔE < 100 eV) and high (>100 eV) energy regions of the loss spectrum is capable of providing quantitative results. Spectrometers have therefore been designed to work with all kinds of electron microscopes and to cover large energy ranges for the detection of inelastically scattered electrons (for instance, the L-edge of molybdenum at 2500 eV has been measured by van Zuylen with primary electrons of 80 kV). It is rather easy to fit post-specimen magnetic optics on a STEM, but Crewe has recently underlined that great care should be devoted to optimizing the collecting power and the energy resolution of the whole system.


Author(s):  
Hannes Lichte
Edgar Voelkl

The object wave o(x,y) = a(x,y)exp(iφ(x,y)) at the exit face of the specimen is described by two real functions, i.e. amplitude a(x,y) and phase φ(x,y). Instead of o(x,y), however, conventional transmission electron microscopy records only the real intensity I(x,y) of the image wave b(x,y), losing the image phase. In addition, referred to the object wave, b(x,y) is heavily distorted by the aberrations of the microscope, giving rise to loss of resolution. For strong objects, a unique interpretation of the micrograph in terms of amplitude and phase of the object is not possible. According to Gabor, holography helps in that it records the image wave completely, by both amplitude and phase. Subsequently, by means of a numerical reconstruction procedure, b(x,y) is deconvoluted from the aberrations to retrieve o(x,y). Likewise, the Fourier spectrum of the object wave is at hand. Without the restrictions sketched above, the investigation of the object can be performed by different reconstruction procedures on one hologram. The holograms were taken by means of a Philips EM420-FEG with an electron biprism at 100 kV.
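
The numerical reconstruction step can be illustrated in a few lines: Fourier-transform the hologram, isolate one sideband, re-centre it to remove the carrier, and inverse-transform to recover amplitude and phase. The NumPy sketch below (carrier position and mask radius are assumed inputs) stops before the further deconvolution of the microscope's wave aberrations that the text describes.

```python
import numpy as np

def reconstruct_sideband(hologram, carrier, radius):
    """Recover amplitude and phase of the image wave b(x,y) from an
    off-axis hologram by isolating one sideband in Fourier space.

    hologram : 2D real array
    carrier  : (ky, kx) offset of the sideband centre, in FFT pixels
    radius   : mask radius around the sideband
    """
    F = np.fft.fftshift(np.fft.fft2(hologram))
    h, w = hologram.shape
    cy, cx = h // 2 + carrier[0], w // 2 + carrier[1]

    # Circular mask selecting one sideband only.
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

    # Re-centre the sideband so the carrier frequency is removed.
    sideband = np.roll(F * mask, (h // 2 - cy, w // 2 - cx), axis=(0, 1))
    wave = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.abs(wave), np.angle(wave)
```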


Author(s):  
J. R. Porter
J. I. Goldstein
D. B. Williams

Alloy scrap metal is increasingly being used in electric arc furnace (EAF) steelmaking, and the alloying elements are also found in the resulting dust. A comprehensive characterization program of EAF dust has been undertaken in collaboration with the steel industry and AISI. Samples have been collected from the furnaces of 28 steel companies representing the broad spectrum of industry practice. The program aims to develop an understanding of the mechanisms of formation so that procedures to recover residual elements or recycle the dust can be established. The multi-phase, multi-component dust particles are amenable to individual particle analysis using modern analytical electron microscopy (AEM) methods. Particles are ultrasonically dispersed and subsequently supported on carbon-coated Formvar films on beryllium grids for microscopy. The specimens require careful treatment to prevent agglomeration during preparation, which occurs as a result of the combined effects of the fine particle size and particle magnetism. A number of approaches to inhibit agglomeration are currently being evaluated, including dispersal in easily sublimable organic solids and size fractionation by centrifugation.


Author(s):  
J.C.H. Spence
J. Mayer

The Zeiss 912 is a new fully digital, side-entry, 120 kV TEM/STEM instrument for materials science, fitted with an omega magnetic imaging energy filter. Pumping is by turbopump and ion pump. The magnetic imaging filter allows energy-filtered images or diffraction patterns to be recorded without scanning, using efficient parallel (area) detection. The energy loss intensity distribution may also be displayed on the screen and recorded by scanning it over the PMT supplied. If a CCD camera is fitted and suitable new software developed, "parallel ELS" recording results. For large fields of view, filtered images can be recorded much more efficiently than by scanning reflection electron microscopy, and the large background of inelastic scattering removed. We have therefore evaluated the 912 for REM and RHEED applications. Causes of streaking and resonance in RHEED patterns are being studied, and a more quantitative analysis of CBRED patterns may be possible. Dark-field band-gap REM imaging of surface states may also be possible.

