Parallel implementation of large-scale structural optimization

1998 ◽  
Vol 16 (2-3) ◽  
pp. 176-185 ◽  
Author(s):  
S. L. Padula ◽  
S. C. Stone

2013 ◽  
Vol 1535 ◽  
Author(s):  
Svetozara I. Petrova

We consider the modeling and simulation of multiscale phenomena that arise in finding the optimal shape design of microcellular composite materials with heterogeneous microstructures. The paper focuses on the solution of the resulting partial differential equation (PDE) constrained structural optimization problem and on the development of efficient multiscale numerical algorithms for reducing the computational complexity. The modeling strategy is applied in materials science to microstructural ceramic materials of multiple constituents. Our multiscale method is based on an efficient combination of both macroscopic and microscopic models. The homogenization technique, based on the concept of strong separation of scales and the asymptotic expansion of the unknown displacements, is applied to extract the macroscopic information from the microscale model.

In the framework of the all-at-once approach, we seek a proper combination of the iterative procedure for the nonlinear problem arising from the first-order necessary optimality conditions, also known as the Karush-Kuhn-Tucker (KKT) conditions, with efficient large-scale solvers for the stress-strain constitutive equation. We use path-following predictor-corrector schemes based on Newton's method and fast multigrid (MG) solution techniques. The performance of two preconditioners, incomplete Cholesky (IC) and algebraic multigrid (AMG), for the resulting homogenized state equation is studied, and a comparative analysis of both preconditioners in terms of iteration counts and computing times is presented and discussed. Our interest focuses on the parallel implementation of the preconditioning techniques and on the use of BoomerAMG, part of the free software library Hypre developed at the Center for Applied Scientific Computing (CASC), Lawrence Livermore National Laboratory (LLNL).
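A minimal sketch of the preconditioner comparison described above, assuming a serial 2-D Poisson matrix in place of the paper's homogenized state equation, PyAMG's smoothed aggregation in place of BoomerAMG/Hypre, and SciPy's incomplete LU as a stand-in for incomplete Cholesky (SciPy ships no IC factorization):

import numpy as np
import pyamg
from scipy.sparse.linalg import cg, spilu, LinearOperator

# Model SPD system; a placeholder for the homogenized state equation.
A = pyamg.gallery.poisson((200, 200), format='csr')
b = np.random.default_rng(0).standard_normal(A.shape[0])

def solve_and_report(M, label):
    iters = [0]
    def cb(xk):
        iters[0] += 1
    x, info = cg(A, b, M=M, callback=cb)
    print(f"{label}: {iters[0]} CG iterations, converged={info == 0}")

# AMG preconditioning: one V-cycle of smoothed aggregation per CG step.
ml = pyamg.smoothed_aggregation_solver(A)
solve_and_report(ml.aspreconditioner(cycle='V'), "AMG")

# Incomplete-factorization preconditioning. ILU of an SPD matrix is not
# symmetric in general, so this is only a pragmatic stand-in for the
# symmetric IC preconditioner studied in the paper.
ilu = spilu(A.tocsc(), drop_tol=1e-4)
solve_and_report(LinearOperator(A.shape, ilu.solve), "ILU")

For Poisson-like problems, AMG preconditioning typically yields nearly mesh-independent iteration counts, whereas incomplete-factorization counts grow as the problem is refined, which is why the choice matters at large scale.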


2021 ◽  
Vol 13 (2) ◽  
pp. 176
Author(s):  
Peng Zheng ◽  
Zebin Wu ◽  
Jin Sun ◽  
Yi Zhang ◽  
Yaoqin Zhu ◽  
...  

As the volume of remotely sensed data grows significantly, content-based image retrieval (CBIR) becomes increasingly important, especially on cloud computing platforms that facilitate processing and storing big data in a parallel and distributed way. This paper proposes a novel parallel CBIR system for a hyperspectral image (HSI) repository on cloud computing platforms, guided by unmixed spectral information, i.e., endmembers and their associated fractional abundances, to retrieve hyperspectral scenes. However, existing unmixing methods suffer an extremely high computational burden when extracting meta-data from large-scale HSI data. To address this limitation, we implement a distributed unmixing method that operates in parallel on cloud computing platforms to accelerate the unmixing processing flow. In addition, we implement a globally standardized distributed HSI repository equipped with a large spectral library in a software-as-a-service mode, providing users with HSI storage, management, and retrieval services through web interfaces. Furthermore, the parallel implementation of unmixing is incorporated into the CBIR system to establish the parallel unmixing-based content retrieval system. The performance of the proposed parallel CBIR system was verified in terms of both unmixing efficiency and accuracy.
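The unmixing stage that produces the retrieval meta-data can be sketched as a per-pixel nonnegative least-squares fit against a fixed endmember matrix, which is embarrassingly parallel across pixels. The following minimal sketch uses Python's multiprocessing on synthetic data as a stand-in for the paper's cloud-based distributed implementation; the endmember matrix E and the image itself are assumptions for illustration:

import numpy as np
from multiprocessing import Pool
from scipy.optimize import nnls

# Synthetic scene: B bands, P endmembers, N pixels (all assumptions).
rng = np.random.default_rng(1)
B, P, N = 50, 4, 10_000
E = rng.random((B, P))                          # endmember signatures
abund = rng.dirichlet(np.ones(P), size=N)       # ground-truth abundances
pixels = abund @ E.T + 0.01 * rng.standard_normal((N, B))

def unmix(pixel):
    # Nonnegative least-squares abundance estimate; the sum-to-one
    # constraint used in fully constrained unmixing is omitted here.
    a, _ = nnls(E, pixel)
    return a

if __name__ == "__main__":
    # Pixels are independent, so the map below is exactly what a cloud
    # framework would shard across worker nodes.
    with Pool() as pool:
        abundances = np.array(pool.map(unmix, pixels, chunksize=256))
    print(abundances.shape)                     # (N, P)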


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Sai Kiranmayee Samudrala ◽  
Jaroslaw Zola ◽  
Srinivas Aluru ◽  
Baskar Ganapathysubramanian

Dimensionality reduction refers to a set of mathematical techniques used to reduce the complexity of high-dimensional data while preserving selected properties. Improvements in simulation strategies and experimental data-collection methods are producing a deluge of heterogeneous, high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to the datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify the key components underlying spectral dimensionality reduction techniques and propose their efficient parallel implementation. We show that the resulting framework can process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate the applicability of our framework, we perform dimensionality reduction of 75,000 images representing morphology evolution during the manufacturing of organic solar cells, in order to identify how processing parameters affect that evolution.
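The key components shared by most spectral methods are a pairwise-similarity or nearest-neighbor computation, assembly of a sparse graph operator, and a large sparse eigenproblem. The serial sketch below illustrates that core with a Laplacian-eigenmaps-style embedding on synthetic data; it is an assumption chosen for brevity, not the authors' parallel framework:

import numpy as np
from scipy.sparse import csgraph
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

# Synthetic high-dimensional points (stand-in for simulation snapshots).
rng = np.random.default_rng(2)
X = rng.standard_normal((2_000, 100))

# Stage 1: sparse k-nearest-neighbor graph. The all-pairs distance
# work here dominates cost and is what gets distributed at scale.
W = kneighbors_graph(X, n_neighbors=10, mode='connectivity')
W = 0.5 * (W + W.T)                     # symmetrize

# Stage 2: normalized graph Laplacian and its smallest eigenpairs
# (shift-invert targets eigenvalues near zero); the trivial constant
# eigenvector is dropped, leaving a 3-D spectral embedding.
L = csgraph.laplacian(W, normed=True)
vals, vecs = eigsh(L, k=4, sigma=-1e-3, which='LM')
embedding = vecs[:, 1:4]
print(embedding.shape)                  # (2000, 3)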


2017 ◽  
Vol 686 ◽  
pp. 103-110 ◽  
Author(s):  
Genhua Wu ◽  
Yan Sun ◽  
Xia Wu ◽  
Run Chen ◽  
Yan Wang

2014 ◽  
Author(s):  
Jason W Sahl ◽  
Greg Caporaso ◽  
David A Rasko ◽  
Paul S Keim

Background. As whole-genome sequence data from bacterial isolates becomes cheaper to generate, computational methods are needed to correlate sequence data with biological observations. Here we present the large-scale BLAST score ratio (LS-BSR) pipeline, which rapidly compares the genetic content of hundreds to thousands of bacterial genomes and returns a matrix that describes the relatedness of all coding sequences (CDSs) in all genomes surveyed. This matrix can be easily parsed to identify genetic relationships between bacterial genomes. Although pipelines have been published that group peptides by sequence similarity, no other software performs the large-scale, flexible, full-genome comparative analyses carried out by LS-BSR.

Results. To demonstrate the utility of the method, the LS-BSR pipeline was tested on 96 Escherichia coli and Shigella genomes; the pipeline ran in 163 minutes using 16 processors, a greater than 7-fold speedup over a single processor. The BSR values for each CDS, which indicate a relative level of relatedness, were then mapped onto each genome on an independent core-genome single nucleotide polymorphism (SNP) based phylogeny. These comparisons were used to identify clade-specific CDS markers and to validate the LS-BSR pipeline against molecular markers that delineate classical E. coli pathogenic variant (pathovar) designations. Scalability tests demonstrated that the LS-BSR pipeline can process 1,000 E. coli genomes in ~60 h using 16 processors.

Conclusions. LS-BSR is an open-source, parallel implementation of the BSR algorithm, enabling rapid comparison of the genetic content of large numbers of genomes. The results of the pipeline can be used to identify specific markers between user-defined phylogenetic groups, and to identify the loss and/or acquisition of genetic information between bacterial isolates. Taxa-specific genetic markers can then be translated into clinical diagnostics, or used to identify broadly conserved putative therapeutic candidates.
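The BSR statistic itself is straightforward: each CDS's alignment bit score against a target genome is divided by the CDS's self-alignment bit score, giving a value near 1.0 for conserved sequences and near 0.0 for absent or highly diverged ones. Below is a minimal sketch of the matrix-building step, assuming the bit scores have already been parsed from BLAST (or a comparable aligner) output; the dictionaries and identifiers are hypothetical placeholders:

import numpy as np

def bsr_matrix(self_scores, cross_scores, cds_ids, genome_ids):
    """Build a CDS-by-genome matrix of BLAST score ratios.

    self_scores:  {cds_id: self-alignment bit score}
    cross_scores: {(cds_id, genome_id): best bit score vs that genome}
    Missing (cds, genome) pairs are treated as no hit (score 0.0).
    """
    m = np.zeros((len(cds_ids), len(genome_ids)))
    for i, cds in enumerate(cds_ids):
        denom = self_scores[cds]    # maximum attainable score
        for j, genome in enumerate(genome_ids):
            m[i, j] = cross_scores.get((cds, genome), 0.0) / denom
    return m

# Toy example with two CDSs and two genomes (hypothetical values).
self_scores = {"cdsA": 200.0, "cdsB": 150.0}
cross_scores = {("cdsA", "g1"): 198.0, ("cdsA", "g2"): 60.0,
                ("cdsB", "g1"): 149.0}
print(bsr_matrix(self_scores, cross_scores, ["cdsA", "cdsB"], ["g1", "g2"]))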

