
2014 ◽  
pp. 121-130
Author(s):  
Gengbin Zheng ◽  
Abhinav Bhatelé ◽  
Esteban Meneses ◽  
Laxmikant V. Kalé

Large parallel machines with hundreds of thousands of processors are becoming more prevalent. Ensuring good load balance is critical for scaling certain classes of parallel applications on even thousands of processors. Centralized load balancing algorithms suffer from scalability problems, especially on machines with a relatively small amount of memory. Fully distributed load balancing algorithms, on the other hand, tend to take longer to arrive at good solutions. In this paper, we present an automatic dynamic hierarchical load balancing method that overcomes the scalability challenges of centralized schemes and longer running times of traditional distributed schemes. Our solution overcomes these issues by creating multiple levels of load balancing domains which form a tree. This hierarchical method is demonstrated within a measurement-based load balancing framework in Charm++. We discuss techniques to deal with scalability challenges of load balancing at very large scale. We present performance data of the hierarchical load balancing method on up to 16,384 cores of Ranger (at the Texas Advanced Computing Center) and 65,536 cores of Intrepid (the Blue Gene/P at Argonne National Laboratory) for a synthetic benchmark. We also demonstrate the successful deployment of the method in a scientific application, NAMD, with results on Intrepid.
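The tree-of-domains idea can be sketched in a few lines: tasks are first split coarsely across top-level domains, and each domain then balances only its own tasks on its own processors, so no single processor ever handles the global task list. The group size, cost model, and greedy within-domain balancer below are illustrative assumptions, not the Charm++ implementation.

```python
def greedy_balance(task_loads, n_procs):
    """Assign each task to the currently least-loaded processor
    (balancing within a single domain); returns per-processor loads."""
    procs = [0.0] * n_procs
    for load in sorted(task_loads, reverse=True):
        i = procs.index(min(procs))
        procs[i] += load
    return procs

def hierarchical_balance(task_loads, n_procs, group_size=4):
    """Two-level sketch: deal tasks across processor groups so each
    group receives a similar aggregate load, then let each group
    balance its own tasks independently."""
    n_groups = max(1, n_procs // group_size)
    groups = [[] for _ in range(n_groups)]
    totals = [0.0] * n_groups
    # Coarse step: assign each task (largest first) to the group with
    # the smallest running total.
    for load in sorted(task_loads, reverse=True):
        g = totals.index(min(totals))
        groups[g].append(load)
        totals[g] += load
    # Fine step: each group balances internally on its own processors.
    result = []
    for g in range(n_groups):
        result.extend(greedy_balance(groups[g], group_size))
    return result
```

A real hierarchical balancer would recurse over more tree levels and exchange only aggregate load summaries between levels, which is what keeps memory use per processor bounded.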


2015 ◽  
Vol 18 (3) ◽  
pp. 808-830 ◽  
Author(s):  
Dhairya Malhotra ◽  
George Biros

We describe our implementation of a parallel fast multipole method (FMM) for evaluating potentials for discrete and continuous source distributions. The first requires summation over the source points; the second requires integration over a continuous source density. Both problems have O(N^2) complexity when computed directly; however, they can be accelerated to O(N) time using the FMM. Our PVFMM software library uses the kernel-independent FMM, which allows us to compute potentials for a wide range of elliptic kernels. Our method is high order, adaptive, and scalable. In this paper, we discuss several algorithmic improvements and performance optimizations, including cache locality, vectorization, shared-memory parallelism, and the use of coprocessors. Our distributed-memory implementation uses a space-filling curve for partitioning data and a hypercube communication scheme. We present convergence results for the Laplace, Stokes, and Helmholtz (low-wavenumber) kernels for both the particle and volume FMM. We measure the efficiency of our method in terms of CPU cycles per unknown for different accuracies and different kernels. We also demonstrate the scalability of our implementation up to several thousand processor cores on the Stampede platform at the Texas Advanced Computing Center.
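The direct O(N^2) particle summation that the FMM accelerates is easy to write down; the naive all-pairs loop below uses the Laplace kernel 1/(4*pi*r), one of the kernels mentioned in the abstract, as a reference computation (not PVFMM code).

```python
import math

def direct_laplace_potential(sources, charges, targets):
    """Naive evaluation of the Laplace potential
    phi(t) = sum_j q_j / (4*pi*|t - s_j|).
    This all-pairs double loop costs O(N*M) work; it is exactly
    the cost that the FMM reduces to linear time."""
    potentials = []
    for tx, ty, tz in targets:
        phi = 0.0
        for (sx, sy, sz), q in zip(sources, charges):
            r = math.sqrt((tx - sx)**2 + (ty - sy)**2 + (tz - sz)**2)
            if r > 0.0:  # skip the singular self-interaction
                phi += q / (4.0 * math.pi * r)
        potentials.append(phi)
    return potentials
```

A kernel-independent FMM reproduces these values to a chosen accuracy while touching each source and target only O(1) times, which is why the paper reports cost in CPU cycles per unknown.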


2011 ◽  
Vol 6 (2) ◽  
pp. 253-264
Author(s):  
David Walling ◽  
Maria Esteva

The Texas Advanced Computing Center and the Institute for Classical Archaeology at the University of Texas at Austin developed a method that uses iRODS rules and a Jython script to automate the extraction of metadata from digital archaeological data. The first step was to create a record-keeping system to classify the data. The record-keeping system employs file and directory hierarchy naming conventions designed specifically to maintain the relationships between the data objects and to map the archaeological documentation process. The metadata implicit in the record-keeping system is automatically extracted upon ingest, combined with additional sources of metadata, and stored alongside the data in the iRODS preservation environment. This method enables a more organized workflow for the researchers, helps them archive their data close to the moment of data creation, and avoids error-prone manual metadata input. We describe the types of metadata extracted and provide technical details of the extraction process and of the storage of the data and metadata.
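The core idea of extracting metadata implicit in a naming convention can be sketched as follows. The actual iRODS rules and directory scheme are not reproduced here; the project/site/trench hierarchy and field names below are assumed purely for illustration.

```python
# Hypothetical sketch: map each level of an assumed directory
# hierarchy (project/site/trench/filename) to a metadata field, so
# metadata is captured automatically at ingest time rather than
# entered by hand.

def extract_metadata(path):
    """Split a relative path and pair directory levels with the
    metadata fields the naming convention encodes."""
    fields = ["project", "site", "trench"]
    parts = path.strip("/").split("/")
    metadata = dict(zip(fields, parts[:-1]))
    metadata["filename"] = parts[-1]
    return metadata
```

In the deployed system such a mapping would be triggered by an iRODS ingest rule and merged with metadata from other sources before storage.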


2018 ◽  
Vol 2 ◽  
pp. e25525
Author(s):  
Teresa Mayfield ◽  
Mariel Campbell ◽  
Kyndall Hildebrandt ◽  
Carla Cicero ◽  
Dusty McDonald ◽  
...  

Arctos (https://arctosdb.org), an online collection management information system, was developed in 1999 to manage museum specimen data and to make those data publicly available. The portal (arctos.database.museum) now serves data on over 3.5 million cataloged specimens from more than 130 collections throughout North America, in an instance at the Texas Advanced Computing Center. Arctos is also a community of museum professionals that collaborates on museum best practices and works together to improve Arctos data richness and functionality for online museum data streaming. In 2017, three large Arctos genomics collections at the Museum of Southwestern Biology (MSB), the Museum of Vertebrate Zoology, Berkeley (MVZ), and the University of Alaska Museum of the North (UAM) received support from GGBN to create a pipeline for publishing data from Arctos to the GGBN portal. Modifications to Arctos included standardization of controlled vocabulary for tissues; changes to the data structure and code tables with regard to permit information, container history, part attributes, and sample quality; implementation of interfaces and protocols for parent-child relationships between tissues, tissue subsamples, and DNA extracts; and coordination with the Darwin Core (DwC) community to ensure that all GGBN data standards and formatting are included in the standard DwC export in order to finalize the pipeline to GGBN. The addition of these three primary Arctos biorepositories to the GGBN network will add over 750,000 tissue and DNA records representing over 11,000 species and 667 families. These voucher-based archives represent primarily vertebrate taxa, with growing collections of arthropods and endoparasites, and incipient collections of microbiome and environmental samples associated with online media and linked to GenBank and other external databases.
The high-quality data in Arctos complement and significantly extend existing GGBN holdings, and the establishment of an Arctos-GGBN pipeline will also facilitate future collaboration between more Arctos collections and GGBN.
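The parent-child relationships between tissues, tissue subsamples, and DNA extracts mentioned above amount to a simple linked tree of sample records. The sketch below is hypothetical; the class, field names, and identifier format are illustrative, not the Arctos schema.

```python
# Hypothetical model of part derivation: each derived part keeps a
# reference to its parent, so the chain back to the voucher-linked
# tissue can always be recovered.

class Part:
    def __init__(self, part_id, part_type, parent=None):
        self.part_id = part_id
        self.part_type = part_type
        self.parent = parent

    def lineage(self):
        """Return part ids from the original tissue down to this part."""
        chain = [self.part_id]
        node = self.parent
        while node is not None:
            chain.append(node.part_id)
            node = node.parent
        return list(reversed(chain))

tissue = Part("MSB:Mamm:1234/tissue", "tissue")
sub = Part("MSB:Mamm:1234/sub1", "tissue subsample", parent=tissue)
extract = Part("MSB:Mamm:1234/dna1", "DNA extract", parent=sub)
```

Tracking this lineage is what lets a genomic record published to GGBN point back to its physical voucher and container history.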


2021 ◽  
Vol 53 (2) ◽  
Author(s):  
Yogesh C. Bangar ◽  
Ankit Magotra ◽  
B. S. Malik ◽  
Z. S. Malik ◽  
A. S. Yadav


Author(s):  
Marco Amorim ◽  
Sara Ferreira ◽  
António Couto

In an era of information and advanced computing power, emergency medical services (EMS) still rely on rudimentary vehicle dispatching and reallocation rules. In many countries, road conditions such as traffic or roadblocks, exact vehicle positions, and demand predictions are valuable information that is not considered when locating and dispatching emergency vehicles. Within this context, this paper investigates different EMS vehicle dispatching rules, comparing them using various metrics and frameworks. An intelligent dispatching algorithm is proposed, and survival metrics are introduced to compare the new concepts with the classic ones. This work shows that the closest-idle-vehicle rule (the classic dispatching rule) is far from optimal; even random dispatching of vehicles can outperform it. The proposed intelligent algorithm performs best in all tested situations where resources are adequate. If resources are scarce, especially during peaks in demand, dispatching delays occur and degrade the system's performance; in this case, no conclusion could be drawn as to which rule is the best option. Nevertheless, the results draw attention to the need for research focused on managing dispatch delays by prioritizing the waiting calls that inflict the highest penalty on system performance. Finally, the authors conclude that the use of real traffic information introduces a considerable gain in EMS response performance.
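The classic closest-idle-vehicle rule that the paper critiques is simple to state in code. The straight-line distance and the flat list of vehicle states below are simplifying assumptions for illustration; the paper's point is that this rule ignores traffic, demand prediction, and survival outcomes.

```python
import math

def closest_idle_dispatch(call_xy, vehicles):
    """Classic rule: send the nearest idle vehicle to the call.
    vehicles is a list of (id, (x, y), is_idle) tuples; returns the
    chosen vehicle id, or None if every vehicle is busy (i.e., the
    call must wait -- a dispatch delay)."""
    best_id, best_dist = None, float("inf")
    for vid, (x, y), is_idle in vehicles:
        if not is_idle:
            continue
        d = math.hypot(x - call_xy[0], y - call_xy[1])
        if d < best_dist:
            best_id, best_dist = vid, d
    return best_id
```

An "intelligent" dispatcher in the paper's sense would replace the distance criterion with a score that folds in predicted travel time under current traffic and the expected survival benefit of serving each waiting call.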


The international conference is organized jointly by the Dorodnicyn Computing Center of the Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences and the Peoples' Friendship University of Russia. The talks presented at the conference address topical problems of computer algebra, the discipline whose algorithms focus on the exact solution of mathematical and applied problems by computer.
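"Exact solution" here means computing without rounding error, in contrast to floating-point arithmetic. A minimal illustration, using Python's built-in exact rational numbers rather than a full computer-algebra system:

```python
from fractions import Fraction

# Floating point rounds: 0.1 + 0.2 is not exactly 0.3.
float_exact = (0.1 + 0.2 == 0.3)

# Exact rational arithmetic -- the kind of arithmetic computer-algebra
# systems build on -- has no rounding at all.
rational_exact = (Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))
```

Computer-algebra systems extend this exactness from numbers to symbolic objects such as polynomials and matrices.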

