Large-scale dendritic spine extraction and analysis through petascale computing

2021 ◽  
Author(s):  
Gregg Wildenberg ◽  
Hanyu Li ◽  
Griffin A Badalamente ◽  
Thomas D. Uram ◽  
Nicola Ferrier ◽  
...  

The synapse is a central player in the nervous system, serving as the key structure that permits the relay of electrical and chemical signals from one neuron to another. The anatomy of the synapse contains important information about the signals it transmits and their strength. Because synapses are so small, however, electron microscopy (EM) is the only method capable of directly visualizing their morphology and remains the gold standard for studying it. Historically, EM has been limited to small fields of view, often only in 2D, but recent advances in automated serial EM (i.e. connectomics) have enabled the collection of large EM volumes that capture significant fractions of neurons and the different classes of synapses they receive (i.e. shaft, spine, soma, axon). However, even with recent advances in automatic segmentation methods, extracting neuronal and synaptic profiles from these connectomics datasets is difficult to scale over large EM volumes. Without methods that speed up automatic segmentation over large volumes, the full potential of these new EM methods to advance studies of synapse morphology will never be realized. To address this problem, we describe our work to leverage Argonne leadership-scale supercomputers for segmentation of a 0.6-terabyte dataset using state-of-the-art machine-learning-based segmentation methods on a significant fraction of Theta, the 11.69-petaFLOPS supercomputer at Argonne National Laboratory. We describe an iterative pipeline that couples human and machine feedback to produce accurate segmentation results in time frames that will make connectomics a more routine method for exploring how synapse biology changes across a range of biological conditions. Finally, we demonstrate how dendritic spines can be algorithmically extracted from the segmentation dataset for analysis of spine morphologies. Advancing this effort at large compute scale is expected to shorten the turnaround time for segmentation of individual datasets, accelerating the path to biological results and providing population-level insight into how thousands of synapses originate from different neurons; we also expect greater accuracy from the more compute-intensive algorithms these systems enable.
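
As an illustration of the final step described above, the sketch below shows one simple way spine morphometrics could be pulled out of a labeled segmentation volume: connected components in a binary "spine" mask are treated as candidate spines and summarized by volume and bounding-box extent. This is a minimal, hypothetical Python example (NumPy and SciPy), not the pipeline used in the paper; the voxel size and the mask are assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_spine_stats(spine_mask, voxel_size=(0.04, 0.004, 0.004)):
    """Label connected spine candidates in a binary mask and compute simple
    morphometrics (volume, bounding-box diagonal) for each one.

    spine_mask: boolean 3D array marking voxels classified as spine.
    voxel_size: (z, y, x) voxel dimensions in micrometers (hypothetical values).
    """
    labels, n = ndimage.label(spine_mask)                # connected components = candidate spines
    voxel_volume = float(np.prod(voxel_size))
    stats = []
    for spine_id in range(1, n + 1):
        coords = np.argwhere(labels == spine_id) * np.asarray(voxel_size)
        extent = coords.max(axis=0) - coords.min(axis=0)
        stats.append({
            "id": spine_id,
            "volume_um3": coords.shape[0] * voxel_volume,
            "length_um": float(np.linalg.norm(extent)),  # crude proxy for spine length
        })
    return stats

# Tiny synthetic example: two disconnected blobs standing in for spines.
mask = np.zeros((8, 8, 8), dtype=bool)
mask[1:3, 1:3, 1:3] = True
mask[5:8, 5:7, 5:7] = True
print(extract_spine_stats(mask))
```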

Author(s):  
Gengbin Zheng ◽  
Abhinav Bhatelé ◽  
Esteban Meneses ◽  
Laxmikant V. Kalé

Large parallel machines with hundreds of thousands of processors are becoming more prevalent. Ensuring good load balance is critical for scaling certain classes of parallel applications, even on thousands of processors. Centralized load balancing algorithms suffer from scalability problems, especially on machines with a relatively small amount of memory. Fully distributed load balancing algorithms, on the other hand, tend to take longer to arrive at good solutions. In this paper, we present an automatic dynamic hierarchical load balancing method that overcomes the scalability challenges of centralized schemes and the longer running times of traditional distributed schemes. Our solution creates multiple levels of load balancing domains which form a tree. This hierarchical method is demonstrated within a measurement-based load balancing framework in Charm++. We discuss techniques to deal with the scalability challenges of load balancing at very large scale. We present performance data for the hierarchical load balancing method on up to 16,384 cores of Ranger (at the Texas Advanced Computing Center) and 65,536 cores of Intrepid (the Blue Gene/P at Argonne National Laboratory) for a synthetic benchmark. We also demonstrate the successful deployment of the method in a scientific application, NAMD, with results on Intrepid.
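
To make the idea of tree-structured load balancing domains concrete, here is a minimal two-level sketch: tasks are first assigned to processor groups using only aggregate information, and each group then balances its own tasks locally with a greedy scheme. This is an illustrative Python sketch under assumed inputs, not the Charm++ implementation; all function and variable names are hypothetical.

```python
from collections import defaultdict

def greedy_assign(task_loads, n_bins):
    """Greedy longest-processing-time assignment of task loads to n_bins bins."""
    bins = [0.0] * n_bins
    placement = {}
    for tid, load in sorted(task_loads.items(), key=lambda kv: -kv[1]):
        b = min(range(n_bins), key=lambda i: bins[i])    # least-loaded bin
        bins[b] += load
        placement[tid] = b
    return placement, bins

def hierarchical_balance(task_loads, group_size, n_procs):
    """Two-level hierarchical balance: distribute tasks across groups first,
    then let each group balance its own tasks across its local processors."""
    n_groups = n_procs // group_size
    to_group, _ = greedy_assign(task_loads, n_groups)    # level 1: coarse, few domains
    by_group = defaultdict(dict)
    for tid, g in to_group.items():
        by_group[g][tid] = task_loads[tid]
    placement = {}
    for g, local in by_group.items():                    # level 2: independent per group
        local_place, _ = greedy_assign(local, group_size)
        for tid, p in local_place.items():
            placement[tid] = g * group_size + p          # global processor id
    return placement

# Example: 12 tasks with uneven loads onto 8 processors in groups of 4.
loads = {t: (t % 5) + 1.0 for t in range(12)}
print(hierarchical_balance(loads, group_size=4, n_procs=8))
```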


2020 ◽  
Vol 2020 ◽  
pp. 1-11 ◽  
Author(s):  
Sibo Li ◽  
Roberto Paoli ◽  
Michael D’Mello

Compressible density-based solvers are widely used in OpenFOAM, and the parallel scalability of these solvers is crucial for large-scale simulations. In this paper, we report our experiences with the scalability of OpenFOAM's native rhoCentralFoam solver and show the degree to which a small number of modifications can improve it. The main modification is to replace the first-order accurate Euler scheme in rhoCentralFoam with a third-order accurate, four-stage Runge–Kutta (RK4) scheme for the time integration. The scaling test we used is the transonic flow over the ONERA M6 wing, a common validation case for compressible flow solvers in aerospace and other engineering applications. Numerical experiments show that our modified solver, referred to as rhoCentralRK4Foam, achieves as much as a 123.2% improvement in scalability over rhoCentralFoam for the same spatial discretization. As expected, the better time resolution of the Runge–Kutta scheme makes it more suitable for unsteady problems such as the Taylor–Green vortex decay, where the new solver reduced the overall time-to-solution by 50% relative to rhoCentralFoam for the same numerical accuracy. Finally, the improved scalability can be traced to the higher computation-to-communication ratio obtained by substituting the RK4 scheme for the Euler scheme. All numerical tests were conducted on Theta, a Cray XC40 parallel system at Argonne National Laboratory.
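
The core of the modification is swapping the time integrator. As a minimal illustration (not the OpenFOAM solver code), the Python sketch below advances a toy semi-discrete system du/dt = L(u) with both a forward-Euler update and the classical four-stage Runge–Kutta update; the right-hand side, step size, and test problem are hypothetical.

```python
import numpy as np

def rhs(u):
    """Toy right-hand side L(u) standing in for the spatially discretized operator."""
    return -2.0 * u

def euler_step(u, dt):
    """First-order forward Euler: u^{n+1} = u^n + dt * L(u^n)."""
    return u + dt * rhs(u)

def rk4_step(u, dt):
    """Classical four-stage Runge-Kutta update."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Integrate du/dt = -2u from u(0) = 1 to t = 1 and compare against exp(-2).
dt, n_steps = 0.1, 10
u_euler = u_rk = 1.0
for _ in range(n_steps):
    u_euler, u_rk = euler_step(u_euler, dt), rk4_step(u_rk, dt)
exact = np.exp(-2.0)
print(f"Euler error: {abs(u_euler - exact):.2e}, RK4 error: {abs(u_rk - exact):.2e}")
```

The multi-stage update does more arithmetic per communication of boundary data, which is consistent with the higher computation-to-communication ratio reported above.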


2014 ◽  
Vol 3 (3) ◽  
Author(s):  
Gary J. Hill

As telescope apertures increase, the challenge of scaling spectrographic astronomical instruments becomes acute. The next generation of extremely large telescopes (ELTs) strains both the availability of glass blanks for optics and the engineering needed to provide sufficient mechanical stability. While breaking the relationship between telescope diameter and instrument pupil size through adaptive optics is a clear path for small fields of view, survey instruments exploiting multiplex advantages will be pressed to find cost-effective solutions. In this review we argue that exploiting the full potential of ELTs will require the barrier posed by the cost and engineering difficulty of monolithic instruments to be broken through large-scale replication of spectrographs. The first steps in this direction have already been taken with the soon-to-be-commissioned MUSE and VIRUS instruments for the Very Large Telescope and the Hobby-Eberly Telescope, respectively. MUSE employs 24 spectrograph channels, while VIRUS has 150 channels. We compare the information-gathering power of these replicated instruments with the present state of the art in more traditional spectrographs and with instruments under development for ELTs. Design principles for replication are explored along with lessons learned, and we look forward to future technologies that could make massively replicated instruments even more compelling.


2011 ◽  
Vol 21 (01) ◽  
pp. 45-60 ◽  
Author(s):  
PAVAN BALAJI ◽  
DARIUS BUNTINAS ◽  
DAVID GOODELL ◽  
WILLIAM GROPP ◽  
TORSTEN HOEFLER ◽  
...  

Petascale parallel computers with more than a million processing cores are expected to be available in a couple of years. Although MPI is the dominant programming interface today for large-scale systems that, at the highest end, already have close to 300,000 processors, a challenging question for both researchers and users is whether MPI will scale to processor and core counts in the millions. In this paper, we examine the issue of scalability of MPI to very large systems. We first examine the MPI specification itself and discuss areas with scalability concerns and how they can be overcome. We then investigate issues that an MPI implementation must address in order to be scalable. To illustrate the issues, we ran a number of simple experiments to measure MPI memory consumption at scale, up to 131,072 processes or 80% of the IBM Blue Gene/P system at Argonne National Laboratory. Based on the results, we identified nonscalable aspects of the MPI implementation and found ways to tune it to reduce its memory footprint. We also briefly discuss issues in application scalability to large process counts and features of MPI that enable the use of other techniques to alleviate scalability limitations in applications.
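
In the same spirit as the memory-consumption experiments mentioned above, the sketch below probes how per-process memory grows as MPI resources (here, duplicated communicators) are created. It is a hypothetical illustration using mpi4py and the standard resource module, not the instrumentation used in the paper; ru_maxrss units follow Linux conventions (kilobytes).

```python
import resource
from mpi4py import MPI

def rss_kb():
    """Peak resident set size of this process, in kilobytes (Linux semantics)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

before = rss_kb()
dups = [comm.Dup() for _ in range(64)]     # duplicate communicators as a memory stressor
after = rss_kb()

# Aggregate so rank 0 reports the worst-case growth across all processes.
worst = comm.reduce(after - before, op=MPI.MAX, root=0)
if rank == 0:
    print(f"{size} ranks: max per-rank RSS growth after 64 comm dups = {worst} KB")

for d in dups:
    d.Free()
```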


The single most important thing we can do as a society to positively transform the lives of children and prevent social, emotional, and behavioral problems and child maltreatment is to increase the knowledge, skills, and confidence of parents in the task of raising children at a whole-of-population level. This book provides an in-depth description of a comprehensive population-based approach to enhancing competent parenting known as the Triple P—Positive Parenting Program. Delivered as a multilevel system of intervention within a public health framework, Triple P represents a paradigm shift in how parenting support is provided. The Power of Positive Parenting is structured in eight sections that address every aspect of the Triple P system, including (a) the foundations and an overview of the approach; (b) how the system can be applied to a diverse range of child presentations; (c) the theoretical and practical issues involved in working with different types of parents and caregivers; (d) the importance of, and how parenting support can be provided in, a range of delivery contexts; (e) how the system can respond to and embrace cultural diversity of families everywhere; (f) the strategies needed to make large-scale, population-level implementation of the system succeed; (g) lessons learned from real-world applications of the full multilevel approach to parenting support at a population level; and (h) future directions and how further program development and innovation can be supported for this approach to reach its full potential in positively transforming the lives of all children, parents, and communities.


Author(s):  
Yuepeng Zhang ◽  
Lixuan Lu ◽  
Greg F. Naterer

Hydrogen is a clean fuel that can help to reduce greenhouse gas emissions, as its oxidation does not emit carbon dioxide (a primary greenhouse gas). Hydrogen generation has therefore attracted much recent worldwide attention. A promising method is to use heat from nuclear power plants; the advantages of nuclear heat are the capability for large-scale hydrogen generation and zero greenhouse gas emissions. Nuclear energy is expected to play an important role in hydrogen generation in the future. In this paper, reliability and probabilistic safety assessments of a conceptual nuclear-hydrogen plant are analyzed. There are two main methods to generate hydrogen from nuclear energy: (1) thermochemical processes and (2) electrochemical processes. The conceptual plant of this paper is based on a Cu-Cl thermochemical cycle developed by Atomic Energy of Canada Limited (AECL) and Argonne National Laboratory (ANL). Using a flowsheet of the hydrogen plant created with an Aspen Plus simulation by ANL, four fault trees are constructed for potential risk scenarios. From the fault tree analyses (FTA), the risk levels of the hydrogen generation plant under different accident scenarios can be calculated. Based on these results, potential problems in the Cu-Cl cycle are identified and possible solutions are recommended for future improvements.
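
To show the arithmetic behind a fault tree analysis, the sketch below evaluates a top-event probability from basic-event probabilities combined through AND/OR gates, assuming independent events. The tree structure and the numbers are purely hypothetical and are not taken from the AECL/ANL flowsheet.

```python
from math import prod

def gate_and(probs):
    """AND gate: all inputs must occur (independent events)."""
    return prod(probs)

def gate_or(probs):
    """OR gate: at least one input occurs (independent events)."""
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical mini fault tree for a Cu-Cl loop upset (illustrative numbers only):
#   top = (pump failure OR valve stuck) AND (heat-supply loss OR control fault)
p_pump, p_valve, p_heat, p_ctrl = 1e-3, 5e-4, 2e-4, 1e-4

branch_flow = gate_or([p_pump, p_valve])
branch_heat = gate_or([p_heat, p_ctrl])
p_top = gate_and([branch_flow, branch_heat])

print(f"Top-event probability: {p_top:.3e}")
```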


PLoS ONE ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. e0231511
Author(s):  
Alia Zander ◽  
Tatjana Paunesku ◽  
Gayle E. Woloschak

The Department of Energy conducted ten large-scale neutron irradiation experiments at Argonne National Laboratory between 1972 and 1989. Using a new approach that relies on experimental controls to determine whether cross-comparison between experiments is appropriate, we amalgamated data on neutron exposures and discovered that fractionation significantly improved overall survival. A more detailed investigation showed that fractionation had a significant impact on the death hazard only for animals that died from solid tumors and did not significantly affect any other cause of death. Additionally, we compared the effects of sex, age at first irradiation, and radiation fractionation in neutron-irradiated versus cobalt-60 gamma-irradiated mice and found that solid tumors were the most common cause of death in neutron-irradiated mice, while lymphomas were the dominant cause of death in gamma-irradiated mice. Most animals in this study were irradiated before 150 days of age, but a subset of mice was first exposed to gamma or neutron irradiation after 500 days of age. Advanced age played a significant role in decreasing the death hazard for neutron-irradiated mice, but not for gamma-irradiated mice. Mice that were 500 days old before their first exposure to neutrons began dying later than either sham-irradiated or gamma-irradiated mice.
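
The hazard comparisons described here are the kind of analysis a Cox proportional-hazards model supports. Below is a minimal, hypothetical sketch in Python using the lifelines package on synthetic survival data; the column names, effect sizes, and data are invented for illustration and are not drawn from the Argonne archive.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 400
fractionated = rng.integers(0, 2, n)                 # 1 = fractionated exposure
age_at_first = rng.choice([100, 500], n)             # days at first irradiation
# Synthetic survival times: fractionation and late first exposure lengthen survival.
baseline = rng.exponential(600, n)
duration = baseline * (1.2 ** fractionated) * (1.1 ** (age_at_first == 500))
event = np.ones(n, dtype=int)                        # assume all deaths are observed

df = pd.DataFrame({
    "duration": duration,
    "event": event,
    "fractionated": fractionated,
    "first_exposure_late": (age_at_first == 500).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.hazard_ratios_)   # hazard ratio < 1 indicates a protective effect
```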


2019 ◽  
Vol 21 (1) ◽  
pp. 226-238 ◽  
Author(s):  
Zongyu Yue ◽  
Michele Battistoni ◽  
Sibendu Som

This article presents a computational fluid dynamics study of the Engine Combustion Network Spray G, focusing on the transient characteristics of the spray during the start of injection and the impact of nozzle geometry details that derive from the manufacturing process. The large-eddy-simulation method, coupled with the volume-of-fluid method, was used to model the high-speed turbulent two-phase flow. A moving-needle boundary condition was applied to capture the internal flow boundary condition accurately. The injector geometry was measured with micron-level resolution using X-ray tomographic imaging at the Advanced Photon Source at Argonne National Laboratory, providing detailed machining tolerances, manufacturing defects, and a realistic rough surface. For comparison, a nominal geometry and a modified geometry incorporating the measured large-scale geometric features but no surface details were also used in the simulations. Spray characteristics such as mass flow rate, injection velocity, and Sauter mean diameter were analyzed. Significantly distinct spray characteristics in terms of injection velocity, spray morphology, and primary breakup mechanism were predicted with the different nozzle geometries, which is mainly attributable to the realistic surface finish and manufacturing defects. Compared to the ideally smooth nozzle boundary, the measured high-resolution geometry predicts a lower injection velocity, a wider-spreading spray, and an overall slower breakup rate with evident injector tip wetting. This result implies that the manufacturing details of the injector, which are usually ignored in fuel injection studies, have a significant impact on the spray development process and should be taken into account for design optimization.
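
Among the quantities listed, the Sauter mean diameter has a simple closed form, D32 = Σd³ / Σd² over the droplet population. The Python sketch below computes it for a synthetic droplet-size sample; the distribution parameters are invented for illustration and are not results from the simulations above.

```python
import numpy as np

def sauter_mean_diameter(diameters):
    """Sauter mean diameter D32 = sum(d^3) / sum(d^2): the diameter of a sphere
    with the same volume-to-surface ratio as the whole droplet population."""
    d = np.asarray(diameters, dtype=float)
    return (d ** 3).sum() / (d ** 2).sum()

# Synthetic droplet-size sample (micrometers) with a log-normal spread.
rng = np.random.default_rng(1)
drops = rng.lognormal(mean=np.log(8.0), sigma=0.4, size=10_000)
print(f"Arithmetic mean: {drops.mean():.2f} um, SMD (D32): {sauter_mean_diameter(drops):.2f} um")
```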



Author(s):  
Charles W. Allen ◽  
Robert C. Birtcher

The uranium silicides, including U3Si, are under study as candidate low-enrichment nuclear fuels. Ion beam simulations of the in-reactor behavior of such materials are performed because energetic heavy ions can produce in hours a damage structure similar to one that would require years to produce in actual reactor tests. This contribution treats one aspect of the microstructural behavior of U3Si under high-energy electron irradiation and low-dose energetic heavy ion irradiation and is based on in situ experiments performed at the HVEM-Tandem User Facility at Argonne National Laboratory. This Facility interfaces a 2 MV Tandem ion accelerator and a 0.6 MV ion implanter to a 1.2 MeV AEI high voltage electron microscope, which allows a wide variety of in situ ion beam experiments to be performed with simultaneous irradiation and electron microscopy or diffraction. At elevated temperatures, U3Si exhibits the ordered AuCu3 structure. On cooling below 1058 K, the intermetallic transforms, evidently martensitically, to a body-centered tetragonal structure (alternatively, the structure may be described as face-centered tetragonal, which would be fcc except for a 1 pct tetragonal distortion). Mechanical twinning accompanies the transformation; however, electron diffraction patterns from twinned and non-twinned martensite plates could not be distinguished.

