Variable-Gain Constraint Stabilization for General Multibody Systems with Applications

2004 ◽  
Vol 10 (9) ◽  
pp. 1335-1357
Author(s):  
Takashi Nagata

This paper presents a general and efficient formulation applicable to a vast variety of rigid and flexible multibody systems. It is based on a variable-gain error correction with scaling and adaptive control of the convergence parameter. The methodology has the following distinctive features. (i) All types of holonomic and non-holonomic equality constraints, as well as a class of inequalities, can be treated in a plain and unified manner. (ii) Stability of the constraints is assured. (iii) The formulation has an order-N computational cost in terms of both the constrained and unconstrained degrees of freedom, regardless of the system topology. (iv) Unlike traditional recursive order-N algorithms, it is quite amenable to parallel computation. (v) Because virtually no matrix operations are involved, it can be implemented as very simple general-purpose simulation programs. Exploiting these advantages, the algorithm has been realized as a C++ code supporting distributed processing through the Message-Passing Interface (MPI). The versatility, dynamical validity, and efficiency of the approach are demonstrated through numerical studies of several particular systems, including a crawler and a flexible space structure.
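A toy sketch may help fix the intuition behind gain-based constraint error correction. The Baumgarte-style constant gains below are illustrative stand-ins, not Nagata's variable-gain, scaled scheme (whose gains are adapted during the run); the example keeps a Cartesian point mass on a rigid-rod constraint:

```python
# Constant-gain stabilization of a single holonomic constraint -- an
# illustrative stand-in for the paper's variable-gain, scaled scheme.
# Point mass on a rigid rod: phi = x^2 + y^2 - l^2 = 0.
import numpy as np

l, g, m = 1.0, 9.81, 1.0
alpha, beta = 5.0, 5.0                  # stabilization gains (illustrative)

def deriv(state):
    x, y, vx, vy = state
    phi = x**2 + y**2 - l**2            # constraint violation
    dphi = 2.0 * (x*vx + y*vy)          # violation rate
    G = np.array([2.0*x, 2.0*y])        # constraint Jacobian d(phi)/dq
    f = np.array([0.0, -m*g])           # applied force (gravity)
    # Enforce phi'' + 2*alpha*phi' + beta^2*phi = 0 instead of phi'' = 0:
    rhs = -2.0*(vx**2 + vy**2) - 2.0*alpha*dphi - beta**2*phi
    lam = (rhs - G @ (f/m)) / (G @ (G/m))   # constraint multiplier
    ax, ay = (f + lam*G) / m
    return np.array([vx, vy, ax, ay])

state = np.array([1.01, 0.0, 0.0, 0.0])  # start slightly off the constraint
dt = 1.0e-3
for _ in range(5000):                    # explicit Euler, 5 s of simulation
    state = state + dt * deriv(state)
print("constraint drift:", state[0]**2 + state[1]**2 - l**2)
```

With the correction active the initial violation decays instead of accumulating; setting alpha = beta = 0 recovers the unstabilized formulation, where discretization error makes the drift grow.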

Author(s):  
Gilbert Gede ◽  
Dale L. Peterson ◽  
Angadh S. Nanjangud ◽  
Jason K. Moore ◽  
Mont Hubbard

Symbolic equations of motion (EOMs) for multibody systems are desirable for simulation, stability analyses, control system design, and parameter studies. Despite this, the majority of engineering software designed to analyze multibody systems is numeric in nature (or presents a purely numeric user interface). To our knowledge, none of the existing software packages are 1) fully symbolic, 2) open source, and 3) implemented in a popular, general-purpose, high-level programming language. In response, we extended SymPy (an existing computer algebra system implemented in Python) with functionality for the derivation of symbolic EOMs for constrained multibody systems with many degrees of freedom. We present the design and implementation of the software and cover the basic usage and workflow for solving and analyzing problems. The intended audience is the academic research community, graduate and advanced undergraduate students, and those in industry analyzing multibody systems. We demonstrate the software by deriving the EOMs of an N-link pendulum, show its capabilities for LaTeX output, and show how it integrates with other Python scientific libraries, allowing for numerical simulation, publication-quality plotting, animation, and online notebooks designed for sharing results. This software fills a unique role in dynamics and is attractive to academics and industry because of its BSD open source license, which permits open source or commercial use of the code.
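As a flavor of the workflow described, here is a minimal sketch using sympy.physics.mechanics to derive the EOM of a single pendulum with Kane's method; the symbol names and the single-link reduction of the N-link example are ours:

```python
# Minimal sketch: EOM of a single pendulum via Kane's method in
# sympy.physics.mechanics. Names are illustrative choices.
import sympy as sm
import sympy.physics.mechanics as me

q = me.dynamicsymbols('q')          # generalized coordinate (rod angle)
u = me.dynamicsymbols('u')          # generalized speed
m, l, g, t = sm.symbols('m l g t')  # mass, length, gravity, time

N = me.ReferenceFrame('N')                 # inertial frame
A = N.orientnew('A', 'Axis', (q, N.z))     # frame fixed to the rod
A.set_ang_vel(N, u * N.z)

O = me.Point('O')                          # pivot, fixed in N
O.set_vel(N, 0)
P = O.locatenew('P', l * A.x)              # bob location
P.v2pt_theory(O, N, A)                     # two-point velocity theorem

bob = me.Particle('bob', P, m)
forces = [(P, m * g * N.x)]                # gravity along N.x

kane = me.KanesMethod(N, q_ind=[q], u_ind=[u], kd_eqs=[q.diff(t) - u])
fr, frstar = kane.kanes_equations([bob], forces)
print(sm.simplify(fr + frstar))  # expected: -g*l*m*sin(q) - l**2*m*u'
```

The symbolic result can then be lambdified for numerical integration or exported as LaTeX, which is the integration path the abstract describes.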


2020 ◽  
Vol 245 ◽  
pp. 09016
Author(s):  
Maria Alandes Pradillo ◽  
Nils Høimyr ◽  
Pablo Llopis Sanmillan ◽  
Markus Tapani Jylhänkangas

The CERN IT department has maintained several High Performance Computing (HPC) services over the past five years. While the bulk of the computing facilities at CERN run Linux, a Windows cluster was dedicated to engineering simulations and analysis related to accelerator technology development. The Windows cluster consisted of machines with powerful CPUs, large memory, and a low-latency interconnect. The Linux cluster resources are accessible through HTCondor and are used for general-purpose parallel but single-node jobs, providing computing power to the CERN experiments and departments for tasks such as physics event reconstruction, data analysis, and simulation. For HPC workloads that require multi-node parallel environments for Message Passing Interface (MPI) based programs, there is another Linux-based HPC service comprising several clusters running under the Slurm batch system and consisting of powerful hardware with low-latency interconnects. In 2018, it was decided to consolidate compute-intensive jobs on Linux to make better use of the existing resources. Moreover, this was also in line with the CERN IT strategy of reducing dependencies on Microsoft products. This paper focuses on the migration of Ansys [1], COMSOL [2], and CST [3] users from Windows HPC to Linux clusters. Ansys, COMSOL, and CST are three engineering applications used at CERN in different domains, such as multiphysics simulations and electromagnetic field problems. Users of these applications are in different departments, with different needs and levels of expertise; in most cases, they have no prior knowledge of Linux. The paper presents the technical strategy for allowing the engineering users to submit their simulations to the appropriate Linux cluster, depending on their simulation requirements. We also describe the technical solution for integrating their Windows workstations so that they can submit to Linux clusters. Finally, we discuss the challenges and lessons learnt during the migration.
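As an illustration of such a submission path (not CERN's actual tooling: the partition name, resource numbers, and solver command are invented placeholders), a wrapper might generate a Slurm batch script and hand it to sbatch:

```python
# Hypothetical sketch of a submission wrapper: generate a Slurm batch
# script for a multi-node MPI solver run and submit it with sbatch.
# Partition name and solver command below are illustrative placeholders.
import subprocess
import tempfile

def submit_mpi_job(solver_cmd, nodes=2, ntasks=64, partition="hpc"):
    script = f"""#!/bin/bash
#SBATCH --nodes={nodes}
#SBATCH --ntasks={ntasks}
#SBATCH --partition={partition}
srun {solver_cmd}
"""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        path = f.name
    # sbatch prints e.g. "Submitted batch job 123456" on success
    out = subprocess.run(["sbatch", path], capture_output=True,
                         text=True, check=True)
    return out.stdout.strip()

# COMSOL's documented batch mode, with an illustrative input file:
print(submit_mpi_job("comsol batch -inputfile model.mph"))
```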


Geophysics ◽  
2019 ◽  
Vol 84 (4) ◽  
pp. T313-T333 ◽  
Author(s):  
Leonardo Zepeda-Núñez ◽  
Adrien Scheuer ◽  
Russell J. Hewett ◽  
Laurent Demanet

We have developed a fast solver for the 3D Helmholtz equation in heterogeneous, constant-density, acoustic media in the high-frequency regime. The solver is based on the method of polarized traces, a layered domain-decomposition method in which the subdomains are connected via transmission conditions prescribed by the discrete Green's representation formula, and artificial reflections are avoided by enforcing nonreflecting boundary conditions between layers. The method of polarized traces allows us to consider only unknowns at the layer interfaces, reducing the overall cost and memory footprint of the solver. We determine that polarizing the wavefields in this manner yields an efficient preconditioner for the reduced system, whose rate of convergence is independent of the problem frequency. The resulting preconditioned system is solved iteratively using the generalized minimal residual method (GMRES), where we never assemble the reduced system or preconditioner; rather, we apply them by solving the Helmholtz equation locally within the subdomains. The method is parallelized using the Message Passing Interface and coupled with a distributed linear algebra library and pipelining to obtain an empirical on-line runtime O(max(1, R/L) N log N), where N is the total number of degrees of freedom, L is the number of subdomains, and R is the number of right-hand sides (RHS). This scaling is favorable for regimes in which the number of sources (distinct RHS) is large, for example, enabling large-scale implementations of frequency-domain full-waveform inversion.
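The outer iterative structure, though not the polarized-traces preconditioner itself, can be sketched on a toy problem: discretize a small 1D constant-wavenumber Helmholtz equation and solve it with GMRES. All sizes and parameters below are illustrative:

```python
# Toy sketch of the outer loop only: a 1D Helmholtz problem u'' + k^2 u = f
# with Dirichlet boundaries, solved by (unpreconditioned) GMRES. The paper's
# preconditioner is applied matrix-free via local subdomain solves; here we
# use plain full GMRES on a deliberately small system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k = 200, 10.0                 # grid points and wavenumber (illustrative)
h = 1.0 / (n + 1)                # grid spacing
main = (-2.0 / h**2 + k**2) * np.ones(n)
off = (1.0 / h**2) * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format='csc')

b = np.zeros(n)
b[n // 2] = 1.0 / h              # point source at the domain center

u, info = spla.gmres(A, b, restart=n, maxiter=n)
print('converged' if info == 0 else f'gmres info = {info}')
```

In the high-frequency, 3D setting this plain approach stagnates, which is exactly the gap the frequency-independent polarized-traces preconditioner is designed to close.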


Author(s):  
K. Bhargavi ◽  
Sathish Babu B.

GPUs (Graphics Processing Units) were mainly used to speed up computation-intensive, high-performance computing applications, and several tools and technologies are available for general-purpose computation on them. This chapter primarily discusses GPU parallelism, applications, and likely challenges, and also highlights some of the GPU computing platforms, including CUDA, OpenCL (Open Computing Language), OpenMPC (OpenMP extended for CUDA), MPI (Message Passing Interface), OpenACC (Open Accelerators), DirectCompute, and C++ AMP (C++ Accelerated Massive Parallelism). Each of these platforms is discussed briefly, along with its advantages and disadvantages.
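A minimal sketch of the data-parallel model these platforms share, written here with Numba's CUDA bindings for Python (Numba is our choice for brevity; the chapter itself surveys the platforms above):

```python
# Data-parallel vector addition on the GPU: each thread handles one
# element, the defining pattern of the CUDA-style platforms surveyed.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)              # global thread index
    if i < out.shape[0]:          # guard against out-of-range threads
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads = 256                              # threads per block
blocks = (n + threads - 1) // threads      # enough blocks to cover n
vector_add[blocks, threads](a, b, out)     # Numba copies arrays to/from GPU
print(np.allclose(out, a + b))
```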


2020 ◽  
Vol 15 ◽  
Author(s):  
Weiwen Zhang ◽  
Long Wang ◽  
Theint Theint Aye ◽  
Juniarto Samsudin ◽  
Yongqing Zhu

Background: Genotype imputation as a service is developed to enable researchers to estimate genotypes on haplotyped data without performing whole-genome sequencing. However, genotype imputation is computation-intensive, so it remains a challenge to satisfy the high-performance requirements of genome-wide association studies (GWAS).
Objective: In this paper, we propose a high-performance computing solution for genotype imputation on supercomputers to enhance its execution performance.
Method: We design and implement a multi-level parallelization that includes job-level, process-level, and thread-level parallelization, enabled by job scheduling management, the Message Passing Interface (MPI), and OpenMP, respectively. It involves job distribution, chunk partition and execution, parallelized iteration for imputation, and data concatenation. Thanks to this multi-level design, we can exploit multi-machine/multi-core architectures to improve the performance of genotype imputation.
Results: Experimental results show that our proposed method outperforms the Hadoop-based implementation of genotype imputation. Moreover, experiments on supercomputers show that it significantly shortens execution time, improving the performance of genotype imputation.
Conclusion: The proposed multi-level parallelization, when deployed as imputation as a service, will facilitate genotype imputation by bioinformatics researchers in Singapore and enhance association studies.
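A hedged sketch of the process-level (MPI) layer of such a scheme, using mpi4py; impute_chunk is a hypothetical stand-in for the real imputation tool, and the chunking is illustrative:

```python
# Process-level parallelism sketch: rank 0 partitions genomic windows
# into chunks, each MPI rank imputes its chunk, rank 0 concatenates.
# Run with e.g.: mpiexec -n 4 python impute_mpi.py
from mpi4py import MPI

def impute_chunk(chunk):
    # Placeholder: the real pipeline would invoke the imputation
    # program on each genomic window here.
    return [f"imputed({w})" for w in chunk]

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    windows = list(range(16))                         # windows to impute
    chunks = [windows[i::size] for i in range(size)]  # round-robin split
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)               # distribute chunks
results = comm.gather(impute_chunk(chunk), root=0) # concatenation step

if rank == 0:
    flat = [r for part in results for r in part]
    print(len(flat), "windows imputed")
```

In the full scheme this sits between the job-level layer (the batch scheduler launching independent jobs) and the thread-level layer (OpenMP within each process).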


Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2284
Author(s):  
Krzysztof Przystupa ◽  
Mykola Beshley ◽  
Olena Hordiichuk-Bublivska ◽  
Marian Kyryk ◽  
Halyna Beshley ◽  
...  

The problem of analyzing large amounts of user data to determine users' preferences and, based on these data, provide recommendations for new products is important. Depending on the correctness and timeliness of the recommendations, significant profits or losses can result. In companies, the analysis of data on users of services is carried out by dedicated recommendation systems. However, with a large number of users, the data to be processed become very big, which complicates the work of recommendation systems. For efficient data analysis in commercial systems, the Singular Value Decomposition (SVD) method can be used for intelligent analysis of the information. For large amounts of processed information, we propose to use distributed systems. This approach reduces the time for data processing and for delivering recommendations to users. For the experimental study, we implemented the distributed SVD method using the Message Passing Interface, Hadoop, and Spark technologies, and obtained results showing reduced data-processing times with distributed systems compared to non-distributed ones.
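The core SVD step being distributed can be illustrated on a toy rating matrix; the data and truncation rank are ours, and this plain NumPy version is single-machine:

```python
# SVD-based recommendation sketch: factor a toy user-item rating matrix,
# truncate to rank k, and read predicted scores off the reconstruction.
import numpy as np

R = np.array([[5, 4, 0, 1],     # rows: users, cols: items, 0 = unrated
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                                      # keep the two dominant factors
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

user = 0
unrated = np.where(R[user] == 0)[0]
best = unrated[np.argmax(R_hat[user, unrated])]
print(f"recommend item {best} to user {user} "
      f"(predicted score {R_hat[user, best]:.2f})")
```

The distributed versions partition this factorization across workers (via MPI, Hadoop, or Spark), which is where the reported speedups come from.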


1996 ◽  
Vol 22 (6) ◽  
pp. 789-828 ◽  
Author(s):  
William Gropp ◽  
Ewing Lusk ◽  
Nathan Doss ◽  
Anthony Skjellum

1987 ◽  
Vol 109 (1) ◽  
pp. 65-69 ◽  
Author(s):  
K. W. Matta

A technique for selecting the dynamic degrees of freedom (DDOF) of large, complex structures for dynamic analysis is described, and the formulation of Ritz basis vectors for static condensation and component mode synthesis is presented. Generally, the selection of DDOF is left to the judgment of engineers. For large, complex structures, however, there is a danger of poor or improper selection of DDOF. An improper selection may result in singularity of the eigenvalue problem, or in missing some of the lower frequencies. The technique can be used to select DDOF to reduce the size of large eigenproblems, and to select DDOF that eliminate the singularities of the assembled eigenproblem in component mode synthesis. The execution of this technique is discussed in this paper, and examples are given of its use in conjunction with the general-purpose finite element computer program GENSAM [1].
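A generic sketch of static (Guyan) condensation, the reduction the selected DDOF feed into, may clarify their role; the selection technique itself is the paper's contribution and is not reproduced here, and the matrices below are toy data:

```python
# Static (Guyan) condensation: condense stiffness K and mass M onto the
# chosen master (dynamic) DOFs, with slave DOFs following statically.
import numpy as np
import scipy.linalg

def guyan_reduce(K, M, masters):
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    Kss = K[np.ix_(slaves, slaves)]
    Ksm = K[np.ix_(slaves, masters)]
    # Transformation u = T @ u_m: identity on masters, static recovery
    # of slaves via -Kss^{-1} Ksm.
    T = np.zeros((n, len(masters)))
    T[masters, range(len(masters))] = 1.0
    T[np.ix_(slaves, range(len(masters)))] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T

# 4-DOF spring chain; keep DOFs 0 and 2 as the dynamic DOFs.
K = np.array([[ 2, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
M = np.eye(4)
Kr, Mr = guyan_reduce(K, M, masters=[0, 2])
print(scipy.linalg.eigh(Kr, Mr, eigvals_only=True))  # reduced eigenproblem
```

A poor choice of masters makes Kss (or the reduced problem) ill-conditioned or singular, which is precisely the failure mode the paper's selection technique guards against.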


2013 ◽  
Vol 718-720 ◽  
pp. 1645-1650
Author(s):  
Gen Yin Cheng ◽  
Sheng Chen Yu ◽  
Zhi Yong Wei ◽  
Shao Jie Chen ◽  
You Cheng

The commonly used commercial simulation packages SYSNOISE and ANSYS run on a single machine (they cannot run directly on a parallel machine) when the finite element and boundary element methods are used to simulate muffler performance, and because of the large amount of numerical computation it can take more than ten days, sometimes even twenty, to work out an exact solution. To reduce the cost of the numerical simulation, we built a high-performance parallel machine out of 32 commodity computers and transformed the finite element and boundary element simulation software into a program that runs under the MPI (Message Passing Interface) parallel environment. Data from the simulation experiments demonstrate that the numerical results are good, and the computing speed of the high-performance parallel machine is 25 to 30 times that of a single microcomputer.

