A thread‐block‐wise computational framework for large‐scale hierarchical continuum‐discrete modeling of granular media

2020 ◽  
Vol 122 (2) ◽  
pp. 579-608 ◽  
Author(s):  
Shiwei Zhao ◽  
Jidong Zhao ◽  
Weijian Liang
2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Mohammadreza Yaghoobi ◽  
Krzysztof S. Stopka ◽  
Aaditya Lakshmanan ◽  
Veera Sundararaghavan ◽  
John E. Allison ◽  
...  

Abstract: The PRISMS-Fatigue open-source framework for simulation-based analysis of microstructural influences on fatigue resistance for polycrystalline metals and alloys is presented here. The framework uses the crystal plasticity finite element method as its microstructure analysis tool and provides a highly efficient, scalable, flexible, and easy-to-use ICME community platform. The PRISMS-Fatigue framework is linked to different open-source software to instantiate microstructures, compute the material response, and assess fatigue indicator parameters. The performance of PRISMS-Fatigue is benchmarked against a similar framework implemented using ABAQUS. Results indicate that the multilevel parallelism scheme of PRISMS-Fatigue is more efficient and scalable than ABAQUS for large-scale fatigue simulations. The performance and flexibility of this framework are demonstrated with various examples that assess the driving force for fatigue crack formation in microstructures with different crystallographic textures, grain morphologies, and grain numbers, and under different multiaxial strain states, strain magnitudes, and boundary conditions.
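As one concrete illustration of a fatigue indicator parameter (FIP) of the kind such frameworks compute, below is a minimal post-processing sketch of a Fatemi-Socie-type FIP evaluated per slip system. The array names, values, and constants are illustrative assumptions, not PRISMS-Fatigue's actual output format or API.

```python
import numpy as np

def fatemi_socie_fip(delta_gamma, sigma_n_max, sigma_y, k=0.5):
    """Fatemi-Socie-type fatigue indicator parameter per slip system.

    FIP = (delta_gamma / 2) * (1 + k * sigma_n_max / sigma_y)

    delta_gamma : cyclic plastic shear strain range on each slip system
    sigma_n_max : peak stress normal to each slip plane over the cycle
    sigma_y     : yield strength used for normalization
    k           : normal-stress sensitivity constant (material dependent)
    """
    return 0.5 * delta_gamma * (1.0 + k * sigma_n_max / sigma_y)

# Illustrative values for one element with 12 FCC slip systems.
rng = np.random.default_rng(0)
delta_gamma = rng.uniform(0.0, 2e-3, 12)   # plastic shear strain ranges
sigma_n_max = rng.uniform(0.0, 300.0, 12)  # normal stresses, MPa
fips = fatemi_socie_fip(delta_gamma, sigma_n_max, sigma_y=250.0)
print("critical slip system:", fips.argmax(), "FIP =", fips.max())
```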


2021 ◽  
Author(s):  
Maxwell Adam Levinson ◽  
Justin Niestroy ◽  
Sadnan Al Manir ◽  
Karen Fairchild ◽  
Douglas E. Lake ◽  
...  

Abstract: Results of computational analyses require transparent disclosure of their supporting resources, while the analyses themselves can often be very large in scale and involve multiple processing steps separated in time. Evidence for the correctness of any analysis should include not only a textual description, but also a formal record of the computations which produced the result, including accessible data and software with runtime parameters, environment, and personnel involved. This article describes FAIRSCAPE, a reusable computational framework, enabling simplified access to modern scalable cloud-based components. FAIRSCAPE fully implements the FAIR data principles and extends them to provide fully FAIR Evidence, including machine-interpretable provenance of datasets, software and computations, as metadata for all computed results. The FAIRSCAPE microservices framework creates a complete Evidence Graph for every computational result, including persistent identifiers with metadata resolvable to the software, computations, and datasets used in the computation, and stores a URI to the root of the graph in the result’s metadata. An ontology for Evidence Graphs, EVI (https://w3id.org/EVI), supports inferential reasoning over the evidence. FAIRSCAPE can run nested or disjoint workflows and preserves provenance across them. It can run Apache Spark jobs, scripts, workflows, or user-supplied containers. All objects are assigned persistent IDs, including software. All results are annotated with FAIR metadata using the evidence graph model for access, validation, reproducibility, and re-use of archived data and software.
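To make the evidence-graph idea concrete, here is a hedged sketch of what a machine-interpretable provenance record for one computation might look like. Only the EVI namespace (https://w3id.org/EVI) comes from the abstract; the identifiers, record layout, and property names below are illustrative assumptions, not FAIRSCAPE's actual schema.

```python
import json

# Illustrative JSON-LD-style records; IDs and property names are
# hypothetical, only the EVI namespace is taken from the abstract.
computation = {
    "@context": {"evi": "https://w3id.org/EVI#"},
    "@id": "ark:99999/example-computation",        # hypothetical PID
    "@type": "evi:Computation",
    "evi:usedSoftware": {"@id": "ark:99999/example-script"},
    "evi:usedDataset": [{"@id": "ark:99999/example-input"}],
    "evi:generated": {"@id": "ark:99999/example-result"},
}
result_metadata = {
    "@id": "ark:99999/example-result",
    "@type": "evi:Dataset",
    # URI pointing back into the evidence graph, mirroring the pattern
    # the abstract describes (graph root stored in the result's metadata).
    "evi:wasGeneratedBy": computation["@id"],
}
print(json.dumps(computation, indent=2))
print(json.dumps(result_metadata, indent=2))
```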


2021 ◽  
pp. 004728752110247 ◽  
Author(s):  
Vinh Bui ◽  
Ali Reza Alaei ◽  
Huy Quan Vu ◽  
Gang Li ◽  
Rob Law

Understanding and being able to measure, analyze, compare, and contrast the image of a tourism destination, also known as tourism destination image (TDI), is critical in tourism management and destination marketing. Although various methodologies have been developed, a consistent, reliable, and scalable method for measuring TDI is still unavailable. This study aims to address the challenge by proposing a framework for a holistic measure of TDI in four dimensions: popularity, sentiment, time, and location. A structural model for TDI measurement that covers various aspects of a tourism destination is developed. TDI is then measured by a comprehensive computational framework that can analyze complex textual and visual data on a large scale. A case study using more than 30,000 images and 10,000 comments in relation to three tourism destinations in Australia demonstrates the effectiveness of the proposed framework.
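As a rough illustration of the four-dimensional measurement, the sketch below aggregates hypothetical scored posts into popularity and sentiment slices by time and location. The record layout and scores are invented for illustration and are not the paper's framework.

```python
import pandas as pd

# Hypothetical post-sentiment-analysis records: one row per image/comment,
# already scored. Field names and values are illustrative assumptions.
posts = pd.DataFrame({
    "destination": ["Sydney", "Sydney", "Cairns", "Cairns"],
    "year":        [2018, 2019, 2018, 2019],
    "location":    ["harbour", "harbour", "reef", "reef"],
    "sentiment":   [0.8, 0.6, 0.9, 0.7],   # e.g. classifier polarity score
})

# Popularity as post volume, sentiment as mean score, both sliced by
# time and location -- one simple reading of the four TDI dimensions.
tdi = posts.groupby(["destination", "year", "location"]).agg(
    popularity=("sentiment", "size"),
    sentiment=("sentiment", "mean"),
)
print(tdi)
```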


2019 ◽  
Author(s):  
Anna Danese ◽  
Maria L. Richter ◽  
David S. Fischer ◽  
Fabian J. Theis ◽  
Maria Colomé-Tatché

Abstract: Epigenetic single-cell measurements reveal a layer of regulatory information not accessible to single-cell transcriptomics; however, single-cell omics analysis tools mainly focus on gene expression data. To address this issue, we present epiScanpy, a computational framework for the analysis of single-cell DNA methylation and single-cell ATAC-seq data. EpiScanpy makes the many existing RNA-seq workflows from scanpy available to large-scale single-cell data from other -omics modalities. We introduce and compare multiple feature space constructions for epigenetic data and show the feasibility of common clustering, dimension reduction and trajectory learning techniques. We benchmark epiScanpy by interrogating different single-cell mouse brain atlases of DNA methylation, ATAC-seq and transcriptomics. We find that differentially methylated and differentially open markers between cell clusters enrich transcriptome-based cell type labels by orthogonal epigenetic information.
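Below is a minimal sketch of the scanpy-style workflow the abstract refers to, run on a synthetic binary cell-by-peak matrix. epiScanpy mirrors scanpy's AnnData-based API (its epi.pp/epi.tl modules parallel scanpy's pp/tl); the plain-scanpy calls here are a stand-in, not epiScanpy's exact function names.

```python
import numpy as np
import anndata as ad
import scanpy as sc

# Synthetic binary ATAC-like matrix: 500 cells x 2000 peaks, ~5% open.
rng = np.random.default_rng(0)
adata = ad.AnnData((rng.random((500, 2000)) < 0.05).astype(np.float32))

sc.pp.filter_cells(adata, min_genes=20)   # drop near-empty cells
sc.tl.pca(adata, n_comps=30)              # reduce the feature space
sc.pp.neighbors(adata)                    # kNN graph on the PCA space
sc.tl.leiden(adata)                       # clustering (needs leidenalg)
sc.tl.umap(adata)                         # 2-D embedding for plotting
print(adata.obs["leiden"].value_counts())
```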


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Agrim Gupta ◽  
Silvio Savarese ◽  
Surya Ganguli ◽  
Li Fei-Fei

Abstract: The intertwined processes of learning and evolution in complex environmental niches have resulted in a remarkable diversity of morphological forms. Moreover, many aspects of animal intelligence are deeply embodied in these evolved morphologies. However, the principles governing relations between environmental complexity, evolved morphology, and the learnability of intelligent control remain elusive, because performing large-scale in silico experiments on evolution and learning is challenging. Here, we introduce Deep Evolutionary Reinforcement Learning (DERL): a computational framework which can evolve diverse agent morphologies to learn challenging locomotion and manipulation tasks in complex environments. Leveraging DERL, we demonstrate several relations between environmental complexity, morphological intelligence and the learnability of control. First, environmental complexity fosters the evolution of morphological intelligence as quantified by the ability of a morphology to facilitate the learning of novel tasks. Second, we demonstrate a morphological Baldwin effect, i.e., in our simulations evolution rapidly selects morphologies that learn faster, thereby enabling behaviors learned late in the lifetime of early ancestors to be expressed early in their descendants' lifetimes. Third, we suggest a mechanistic basis for the above relationships through the evolution of morphologies that are more physically stable and energy efficient, and can therefore facilitate learning and control.
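The nested optimization the abstract describes, an outer evolutionary loop that selects on fitness measured after inner lifetime learning, can be sketched in toy form. Everything below is illustrative: DERL evolves embodied agents in physics simulation, not parameter vectors, and its selection and learning algorithms are far richer than this.

```python
import numpy as np

# Toy outer-evolution / inner-learning loop in the spirit of the abstract:
# fitness is evaluated *after* lifetime learning, so morphologies that are
# easier to learn with win out (the Baldwin-effect setup).
rng = np.random.default_rng(0)

def lifetime_learning(morphology, steps=20, lr=0.1):
    """Inner loop: adapt a controller to the body; return final fitness."""
    controller = np.zeros_like(morphology)
    for _ in range(steps):
        controller += lr * (morphology - controller)  # toy 'learning' rule
    return -np.sum((morphology - controller) ** 2)    # 0 is best

population = [rng.normal(size=8) for _ in range(32)]
for generation in range(10):
    fitness = [lifetime_learning(m) for m in population]
    # Tournament selection on post-learning fitness, then mutate the body.
    winners = [population[max(rng.choice(32, 4), key=lambda i: fitness[i])]
               for _ in range(32)]
    population = [w + rng.normal(scale=0.05, size=8) for w in winners]
print("best post-learning fitness (last generation):", max(fitness))
```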


2020 ◽  
Author(s):  
Philipp Eichheimer ◽  
Marcel Thielmann ◽  
Wakana Fujita ◽  
Gregor J. Golabek ◽  
Michihiko Nakamura ◽  
...  

Abstract. Fluid flow on different scales is of interest for several Earth science disciplines, such as petrophysics, hydrogeology and volcanology. To parameterize fluid flow in large-scale numerical simulations (e.g. of groundwater and volcanic systems), flow properties on the microscale need to be considered. For this purpose, experimental and numerical investigations of flow through porous media over a wide range of porosities are necessary. In the present study we sinter glass bead media with various porosities. The microstructure, namely effective porosity and effective specific surface, is investigated using image processing. We determine flow properties such as hydraulic tortuosity and permeability using both experimental measurements and numerical simulations. By fitting microstructural and flow properties to porosity, we obtain a modified Kozeny-Carman equation for isotropic low-porosity media that can be used to compute permeability in large-scale numerical models. To verify the modified Kozeny-Carman equation, we compare it to the computed and measured permeability values.
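For orientation, one common textbook form of the Kozeny-Carman relation that such porosity fits start from is shown below; the paper's modified version for isotropic low-porosity media will differ in its fitted constants and exponents, which the abstract does not give.

```latex
% One common form of the Kozeny-Carman relation:
%   k    : permeability            \phi : (effective) porosity
%   S    : specific surface area   \tau : hydraulic tortuosity
%   c    : Kozeny constant (geometry dependent)
k \;=\; \frac{\phi^{3}}{c\,\tau^{2}\,S^{2}\,\left(1-\phi\right)^{2}}
```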


2018 ◽  
Vol 35 (3) ◽  
pp. 380-388 ◽  
Author(s):  
Wei Zheng ◽  
Qi Mao ◽  
Robert J Genco ◽  
Jean Wactawski-Wende ◽  
Michael Buck ◽  
...  

Abstract

Motivation: The rapid development of sequencing technology has led to an explosive accumulation of genomic data. Clustering is often the first step to be performed in sequence analysis. However, existing methods scale poorly with respect to the unprecedented growth of input data size. As high-performance computing systems become widely accessible, it is highly desirable that a clustering method can easily scale to handle large-scale sequence datasets by leveraging the power of parallel computing.

Results: In this paper, we introduce SLAD (Separation via Landmark-based Active Divisive clustering), a generic computational framework that can be used to parallelize various de novo operational taxonomic unit (OTU) picking methods and comes with theoretical guarantees on both accuracy and efficiency. The proposed framework was implemented on Apache Spark, which allows for easy and efficient utilization of parallel computing resources. Experiments performed on various datasets demonstrated that SLAD can significantly speed up a number of popular de novo OTU picking methods while maintaining the same level of accuracy. In particular, the experiment on the Earth Microbiome Project dataset (∼2.2B reads, 437 GB) demonstrated the excellent scalability of the proposed method.

Availability and implementation: Open-source software for the proposed method is freely available at https://www.acsu.buffalo.edu/~yijunsun/lab/SLAD.html.

Supplementary information: Supplementary data are available at Bioinformatics online.
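Below is a minimal PySpark sketch of one landmark-based divisive split, the core operation a framework like SLAD parallelizes. The k-mer Jaccard distance, the landmark count, and the recursion policy here are placeholder assumptions, not SLAD's actual algorithmic choices.

```python
from pyspark import SparkContext

# One landmark-based split of a read set, in the spirit of the abstract.
def kmer_profile(seq, k=3):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def distance(a, b):
    pa, pb = kmer_profile(a), kmer_profile(b)
    return 1.0 - len(pa & pb) / max(len(pa | pb), 1)  # Jaccard distance

sc = SparkContext("local[*]", "slad-sketch")
reads = sc.parallelize(["ACGTACGT", "ACGTACGA", "TTTTGGGG", "TTTTGGGC"])

landmarks = reads.takeSample(False, 2, seed=0)  # pick landmark sequences
assigned = reads.map(lambda s: (min(range(len(landmarks)),
                                    key=lambda i: distance(s, landmarks[i])),
                                s))
for branch, seqs in assigned.groupByKey().mapValues(list).collect():
    print(branch, seqs)  # each branch would then be split recursively
sc.stop()
```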


2019 ◽  
Vol 16 (1) ◽  
pp. 60-68 ◽  
Author(s):  
Jorge C. Navarro-Muñoz ◽  
Nelly Selem-Mojica ◽  
Michael W. Mullowney ◽  
Satria A. Kautsar ◽  
James H. Tryon ◽  
...  
