A close-encounter method for simulating the dynamics of planetesimals

2020 ◽  
Vol 644 ◽  
pp. A14
Author(s):  
Sebastian Lorek ◽  
Anders Johansen

The dynamics of planetesimals plays an important role in planet formation because their velocity distribution sets the growth rate to larger bodies. When planetesimals form in the gaseous environment of protoplanetary discs, their orbits are nearly circular and planar due to the effect of gas drag. However, mutual close encounters of the planetesimals increase eccentricities and inclinations until an equilibrium between stirring and damping is reached. After disc dissipation there is no gas that can damp the motion, and mutual close encounters as well as encounters with planets stir the orbits again. The large number of planetesimals in protoplanetary discs makes it difficult to simulate their dynamics by means of direct N-body simulations of planet formation. Therefore, we developed a novel method for the dynamical evolution of planetesimals that is based on following close encounters between planetesimal-mass bodies and gravitational stirring by planet-mass bodies. To separate the orbital motion from the close encounters we employ a Hamiltonian splitting scheme, as used in symplectic N-body integrators. Close encounters are identified using a cell algorithm with linear scaling in the number of bodies. A grouping algorithm creates small groups of interacting bodies which are integrated separately. Our method can simulate a large number of planetesimals interacting through gravity and collisions at low computational cost. The typical computational time is of the order of minutes or hours, up to a few days for more complex simulations, compared to several hours or even weeks for the same setup with a full N-body integration. The dynamical evolution of the bodies is sufficiently well reproduced.
This will make it possible to study the growth of planetesimals through collisions and pebble accretion coupled to their dynamics for a much larger number of bodies than previously accessible with full N-body simulations.
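The linear-scaling encounter search can be illustrated with a generic uniform-grid cell algorithm: bodies are binned into cells of the search-radius size, and only the 27 surrounding cells are checked per body. This is a minimal sketch, not the authors' implementation; the choice of cell size equal to the encounter radius is an assumption.

```python
from collections import defaultdict
from itertools import product

def encounter_pairs(positions, r_search):
    """Find candidate close-encounter pairs with a uniform cell grid.

    positions: list of (x, y, z) tuples; r_search: encounter radius.
    For roughly uniform densities the cost scales linearly with the
    number of bodies, unlike an all-pairs O(N^2) check.
    """
    cells = defaultdict(list)
    for i, p in enumerate(positions):
        key = tuple(int(c // r_search) for c in p)
        cells[key].append(i)

    pairs = set()
    for key, members in cells.items():
        # check this cell and its 26 neighbours
        for offset in product((-1, 0, 1), repeat=3):
            nkey = tuple(k + o for k, o in zip(key, offset))
            for i in members:
                for j in cells.get(nkey, ()):
                    if i < j:
                        d2 = sum((a - b) ** 2 for a, b in
                                 zip(positions[i], positions[j]))
                        if d2 < r_search ** 2:
                            pairs.add((i, j))
    return pairs
```

The pairs found this way would then feed the grouping step, where each small group of interacting bodies is integrated separately from the orbital motion.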

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 645
Author(s):  
Muhammad Farooq ◽  
Sehrish Sarfraz ◽  
Christophe Chesneau ◽  
Mahmood Ul Hassan ◽  
Muhammad Ali Raza ◽  
...  

Expectiles have gained considerable attention in recent years due to wide applications in many areas. In this study, the k-nearest neighbours approach, together with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. Firstly, the effect of various distance measures on ex-kNN in terms of test error and computational time is evaluated. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to the minimum test error, whereas the Euclidean, Canberra, and Average of (L1, L∞) measures lead to a low computational cost. Secondly, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life examples. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and comparable performance with ex-svm regarding test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
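The core idea can be sketched as the asymmetric-least-squares fixed point computed over the k nearest neighbours: the tau-expectile is the weighted mean that weights observations above it by tau and those below by 1 - tau. The function name and the simple fixed-point iteration below are illustrative, not the ex-kNN package API.

```python
def knn_expectile(x_train, y_train, x0, k=3, tau=0.5, iters=50):
    """Estimate the tau-expectile at x0 from the k nearest neighbours
    (Euclidean distance) via the asymmetric-least-squares fixed point."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    idx = sorted(range(len(x_train)), key=lambda i: dist(x_train[i], x0))[:k]
    ys = [y_train[i] for i in idx]

    e = sum(ys) / len(ys)  # start from the mean (the tau = 0.5 expectile)
    for _ in range(iters):
        # observations above the current estimate get weight tau,
        # observations below get weight 1 - tau
        w = [tau if y > e else 1 - tau for y in ys]
        e = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    return e
```

With tau = 0.5 this reduces to the ordinary kNN mean; larger tau shifts the estimate toward the upper tail of the neighbours' responses.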


2004 ◽  
Vol 126 (2) ◽  
pp. 268-276 ◽  
Author(s):  
Paolo Boncinelli ◽  
Filippo Rubechini ◽  
Andrea Arnone ◽  
Massimiliano Cecconi ◽  
Carlo Cortese

A numerical model was included in a three-dimensional viscous solver to account for real gas effects in the compressible Reynolds averaged Navier-Stokes (RANS) equations. The behavior of real gases is reproduced by using gas property tables. The method consists of a local fitting of gas data to provide the thermodynamic property required by the solver in each solution step. This approach presents several characteristics which make it attractive as a design tool for industrial applications. First of all, the implementation of the method in the solver is simple and straightforward, since it does not require relevant changes in the solver structure. Moreover, it is based on a low-computational-cost algorithm, which prevents a considerable increase in the overall computational time. Finally, the approach is completely general, since it allows one to handle any type of gas, gas mixture or steam over a wide operative range. In this work a detailed description of the model is provided. In addition, some examples are presented in which the model is applied to the thermo-fluid-dynamic analysis of industrial turbomachines.
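A local fit of tabulated gas data can be sketched as a bilinear interpolation that touches only the four table entries surrounding the query point, which keeps the per-step cost low. The paper's actual fitting procedure may differ; this is a generic illustration.

```python
import bisect

def local_fit(p_grid, t_grid, table, p, t):
    """Bilinear local fit of a tabulated gas property at (p, t).

    table[i][j] holds the property at (p_grid[i], t_grid[j]); only the
    four surrounding entries are read per query, so a flow solver can
    call this every solution step without a large cost increase.
    """
    i = min(max(bisect.bisect_right(p_grid, p) - 1, 0), len(p_grid) - 2)
    j = min(max(bisect.bisect_right(t_grid, t) - 1, 0), len(t_grid) - 2)
    fp = (p - p_grid[i]) / (p_grid[i + 1] - p_grid[i])
    ft = (t - t_grid[j]) / (t_grid[j + 1] - t_grid[j])
    return ((1 - fp) * (1 - ft) * table[i][j]
            + fp * (1 - ft) * table[i + 1][j]
            + (1 - fp) * ft * table[i][j + 1]
            + fp * ft * table[i + 1][j + 1])
```

Because the table lookup is independent of the gas, the same routine handles any gas, gas mixture, or steam for which property tables exist.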


Author(s):  
Christopher Chahine ◽  
Joerg R. Seume ◽  
Tom Verstraete

Aerodynamic turbomachinery component design is a very complex task. Although modern CFD solvers allow for a detailed investigation of the flow, the interaction between design changes and the three-dimensional flow field is highly complex and difficult to understand. Thus, a trial-and-error approach is often applied, and a design relies heavily on the experience of the designer and on empirical correlations. Moreover, the simultaneous satisfaction of aerodynamic and mechanical requirements often leads to tedious iterations between the different disciplines. Modern optimization algorithms can support the designer in finding high-performing designs. However, many optimization methods require performance evaluations of a large number of different geometries. In the context of turbomachinery design, this often involves computationally expensive Computational Fluid Dynamics and Computational Structural Mechanics calculations. Thus, in order to reduce the total computational time, optimization algorithms are often coupled with approximation techniques, referred to as metamodels in the literature. Metamodels approximate the performance of a design at a very low computational cost and thus allow a time-efficient automatic optimization. However, experience from past optimizations shows that metamodel predictions are often unreliable and can even result in designs which violate the imposed constraints. In the present work, the impact of the inaccuracy of a metamodel on the design optimization of a radial compressor impeller is investigated, and it is examined whether an optimization without a metamodel delivers better results. A multidisciplinary, multiobjective optimization system based on a Differential Evolution algorithm, developed at the von Karman Institute for Fluid Dynamics, is applied.
The results show that the metamodel can be used efficiently to explore the design space at a low computational cost and to guide the search towards a global optimum. However, better-performing designs can be found when excluding the metamodel from the optimization, although completely avoiding the metamodel incurs a very high computational cost. Based on the results obtained in the present work, a method is proposed which combines the advantages of both approaches by first using the metamodel as a rapid exploration tool and then switching to the accurate optimization without a metamodel for further exploitation of the design space.
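The combination of a cheap metamodel with an expensive evaluation can be sketched as a Differential Evolution loop (DE/rand/1/bin) in which the metamodel pre-screens trial designs, and the expensive function is only called when an improvement is predicted. This is a generic illustration under that assumption, not the von Karman Institute optimization system.

```python
import random

def de_with_surrogate(f_expensive, surrogate, bounds, pop_size=10,
                      gens=30, F=0.7, CR=0.9, seed=0):
    """Differential Evolution where a cheap surrogate pre-screens trial
    vectors: f_expensive is only called when the surrogate predicts an
    improvement over the current population member."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f_expensive(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # mutation + binomial crossover
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if rng.random() < CR else pop[i][d] for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if surrogate(trial) < cost[i]:       # cheap pre-screen
                c_trial = f_expensive(trial)     # expensive verification
                if c_trial < cost[i]:
                    pop[i], cost[i] = trial, c_trial
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

In a real design loop the surrogate would be retrained on the accumulating expensive evaluations; that refinement is omitted here for brevity.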


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 577
Author(s):  
Manuel Alcázar Vargas ◽  
Javier Pérez Fernández ◽  
Juan M. Velasco García ◽  
Juan A. Cabrera Carrillo ◽  
Juan J. Castillo Aguilar

The performance of vehicle safety systems depends very much on the accuracy of the signals coming from vehicle sensors. Among them, the wheel speed is of vital importance. This paper describes a new method to obtain the wheel speed by using Sin-Cos encoders. The methodology is based on the use of the Savitzky–Golay filters to optimally determine the coefficients of the polynomials that best fit the measured signals and their time derivatives. The whole process requires a low computational cost, which makes it suitable for real-time applications. This way it is possible to provide the safety system with an accurate measurement of both the angular speed and acceleration of the wheels. The proposed method has been compared to other conventional approaches. The results obtained in simulations and real tests show the superior performance of the proposed method, particularly for medium and low wheel angular speeds.
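The derivative-from-polynomial-fit idea can be sketched with fixed Savitzky-Golay weights (window of 5 samples, quadratic fit). The paper applies such filters to the measured Sin-Cos signals; this generic sketch differentiates any uniformly sampled signal and is not the authors' full pipeline.

```python
def savgol_derivative(y, dt):
    """First time-derivative of a uniformly sampled signal via a
    Savitzky-Golay filter (window 5, quadratic least-squares fit).
    Returns the derivative at the window centres: len(y) - 4 values."""
    # closed-form least-squares derivative weights for this window,
    # to be divided by 10 and by the sample spacing
    c = (-2.0, -1.0, 0.0, 1.0, 2.0)
    out = []
    for i in range(2, len(y) - 2):
        out.append(sum(ci * y[i + k - 2] for k, ci in enumerate(c)) / (10 * dt))
    return out
```

Because the weights come from a local polynomial fit, the derivative is exact for polynomials up to the fit degree and strongly attenuates measurement noise, which is what makes the approach attractive for real-time wheel-speed estimation.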


2022 ◽  
Author(s):  
Marcus Becker ◽  
Bastian Ritter ◽  
Bart Doekemeijer ◽  
Daan van der Hoek ◽  
Ulrich Konigorski ◽  
...  

Abstract. In this paper a new version of the FLOw Redirection and Induction Dynamics (FLORIDyn) model is presented. The new model uses the three-dimensional parametric Gaussian FLORIS model and can provide dynamic wind farm simulations at low computational cost under heterogeneous and changing wind conditions. Both FLORIS and FLORIDyn are parametric models which can be used to simulate wind farms, evaluate controller performance, and serve as control-oriented models. One central element in which they differ is their representation of flow dynamics: FLORIS neglects these and provides a computationally very cheap approximation of the mean wind farm flow. FLORIDyn defines a framework which utilizes this low computational cost of FLORIS to simulate basic wake dynamics: this is achieved by creating so-called Observation Points (OPs) at the rotor plane at each time step, which inherit the turbine state. In this work, we develop the initial FLORIDyn framework further in multiple respects. The underlying FLORIS wake model is replaced by a Gaussian wake model. The distribution and characteristics of the OPs are adapted to account for the new parametric model, but also to take complex flow conditions into account. To achieve this, a mathematical approach is developed to combine the parametric model with the changing, heterogeneous wind conditions and link them with each OP. We also present a computationally lightweight wind field model to allow for a simulation environment in which heterogeneous flow conditions are possible. FLORIDyn is compared to SOWFA simulations in three- and nine-turbine cases under static and changing environmental conditions. The results show a good agreement with the timing of the impact of upstream state changes on downstream turbines. They also show a good agreement in terms of how wakes are displaced by wind direction changes and when the resulting velocity deficit is experienced by downstream turbines.
A good fit of the mean generated power is ensured by the underlying FLORIS model. In the three-turbine case, FLORIDyn simulates 4 s of simulation time in 24.49 ms of computational time. The resulting new FLORIDyn model proves to be a computationally attractive and capable tool for model-based dynamic wind farm control.
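The underlying parametric wake description can be illustrated with a minimal Gaussian wake deficit: the deficit decays radially as a Gaussian whose width grows downstream. The width-growth law and parameter values below are illustrative assumptions, not FLORIS's calibrated formulation.

```python
import math

def gaussian_wake_deficit(x, r, ct, d_rotor, k_wake=0.05):
    """Normalised velocity deficit of a simple Gaussian wake model at
    downstream distance x and radial offset r behind a turbine with
    rotor diameter d_rotor and thrust coefficient ct."""
    if x <= 0:
        return 0.0  # no wake upstream of the rotor
    sigma = k_wake * x + d_rotor / math.sqrt(8.0)  # linear wake expansion
    c = 1.0 - math.sqrt(max(0.0, 1.0 - ct / (8.0 * (sigma / d_rotor) ** 2)))
    return c * math.exp(-r ** 2 / (2.0 * sigma ** 2))
```

In a FLORIDyn-style framework, each Observation Point would carry the turbine state it inherited at creation and evaluate such a parametric deficit locally as it convects downstream, which is what allows dynamic behaviour at near-static-model cost.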


2011 ◽  
Vol 341-342 ◽  
pp. 478-483
Author(s):  
Wan Qi Li ◽  
Heng Wang ◽  
Che Nian ◽  
Huang Wei ◽  
Hong Yao You

A novel method of minimizing the embedding impact is proposed in this paper. Optimal embedding is achieved using network flow algorithms by considering the modifications on the cover image as flows of pixels among different states. This method is not an independent steganographic scheme; rather, it minimizes the embedding impact after the embedding process and is compatible with the majority of embedding techniques. Because it operates after the embedding process, many optimization problems, such as the minimization of a globally interactive distortion function, that are intractable during embedding can be solved at relatively low computational cost by rectifying the modifications on the cover image afterwards. A distortion function based on the Kullback-Leibler divergence is provided as a concrete example to illustrate the basic idea of this method.
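A Kullback-Leibler distortion term can be sketched as the divergence between cover and stego statistics; here pixel-value histograms are used as the statistics, which is an assumption for illustration since the paper defines its own distortion function.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two histograms.

    Both histograms are normalised internally; eps guards against
    empty bins in q. Zero-count bins in p contribute nothing.
    """
    sp, sq = sum(p), sum(q)
    return sum((pi / sp) * math.log((pi / sp + eps) / (qi / sq + eps))
               for pi, qi in zip(p, q) if pi > 0)
```

In the minimization step, such a distortion would serve as the cost that the network-flow rectification drives down while keeping the embedded payload intact.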


2020 ◽  
Author(s):  
Samuel O. Silva ◽  
Bruno O. Goulart ◽  
Maria Júlia M. Schettini ◽  
Carolina Xavier ◽  
João Gabriel Silva

The use of modeling and the application of complex networks in several areas of knowledge have become an important tool for understanding different phenomena, among them the structures and dissemination of information on social media. In this sense, vertex rankings of a network can be applied to the detection of influential nodes and possible foci of information diffusion. However, computing the position of the vertices in some of these rankings may require a high computational cost. This paper presents a comparative study of six ranking metrics applied to different social media networks. The comparison is made using rank correlation coefficients. In addition, a study is presented on the computational time spent by each ranking. Results show that the degree (Grau) ranking metric has the greatest correlation with the other metrics and a low computational cost, making it an efficient choice for detecting influential nodes when little time is available for this activity.
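The comparison can be sketched in plain Python: rank vertices by degree (the "Grau" metric) and measure agreement between two rankings with the Spearman rank correlation coefficient. Both helpers below are illustrative, not the paper's tooling.

```python
def degree_ranking(edges):
    """Rank vertices by degree, highest first (ties broken by label)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sorted(deg, key=lambda n: (-deg[n], n))

def spearman(rank_a, rank_b):
    """Spearman rank correlation between two rankings of the same nodes
    (no-ties formula: 1 - 6 * sum(d^2) / (n * (n^2 - 1)))."""
    pos_a = {n: i for i, n in enumerate(rank_a)}
    pos_b = {n: i for i, n in enumerate(rank_b)}
    n = len(rank_a)
    d2 = sum((pos_a[v] - pos_b[v]) ** 2 for v in rank_a)
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))
```

Degree ranking only requires one pass over the edge list, which is why it is so much cheaper than metrics that need global graph traversals.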



Materials ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4316
Author(s):  
Diaa Emad ◽  
Mohamed A. Fanni ◽  
Abdelfatah M. Mohamed ◽  
Shigeo Yoshida

The large number of interdigitated electrodes (IDEs) in a macro fiber composite (MFC) piezoelectric actuator dictates using a very fine finite element (FE) mesh that requires extremely large computational costs, especially with a large number of actuators. The situation becomes infeasible if repeated finite element simulations are required, as in control tasks. In this paper, an efficient technique is proposed for modeling MFC using a finite element method. The proposed technique replaces the MFC actuator with an equivalent simple monolithic piezoceramic actuator using two electrodes only, which dramatically reduces the computational costs. The proposed technique was proven theoretically since it generates the same electric field, strain, and displacement as the physical MFC. Then, it was validated with the detailed FE model using the actual number of IDEs, as well as with experimental tests using triaxial rosette strain gauges. The computational costs for the simplified model compared with the detailed model were dramatically reduced by about 74% for memory usage, 99% for result file size, and 98.6% for computational time. Furthermore, the experimental results successfully verified the proposed technique with good consistency. To show the effectiveness of the proposed technique, it was used to simulate a morphing wing covered almost entirely by MFCs with low computational cost.
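The field-equivalence idea can be illustrated with a back-of-the-envelope calculation: if the MFC's field is the applied voltage over the interdigitated electrode spacing, a two-electrode patch driven through its thickness needs a rescaled voltage to reproduce the same field magnitude. The formula and numbers are illustrative assumptions, not the paper's full equivalence, which also matches strain and displacement.

```python
def equivalent_voltage(v_mfc, electrode_spacing, thickness):
    """Voltage for a two-electrode monolithic patch that reproduces the
    MFC's electric field magnitude E = V / spacing (illustrative only)."""
    e_field = v_mfc / electrode_spacing   # field between adjacent IDEs
    return e_field * thickness            # same field across the thickness
```

For instance, with a hypothetical 500 V drive, 0.5 mm electrode spacing, and 0.3 mm patch thickness, the equivalent two-electrode patch would be driven at 300 V.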


Author(s):  
Tu Huynh-Kha ◽  
Thuong Le-Tien ◽  
Synh Ha ◽  
Khoa Huynh-Van

This research work develops a new method to detect forgery in images by combining the Wavelet transform and modified Zernike Moments (MZMs), in which the features are defined from more pixels than in traditional Zernike Moments. The tested image is first converted to grayscale, and a one-level Discrete Wavelet Transform (DWT) is applied to halve the image size in both dimensions. The approximation sub-band (LL), which is used for processing, is then divided into overlapping blocks, and modified Zernike moments are calculated in each block as feature vectors. The more pixels are considered, the more informative the extracted features. Lexicographical sorting and correlation-coefficient computation on the feature vectors are the next steps to find similar blocks. The purpose of applying the DWT to reduce the dimension of the image before using Zernike moments with updated coefficients is to improve the computational time and increase detection accuracy. Copied or duplicated parts are detected as traces of copy-move forgery based on a threshold on the correlation coefficients and confirmed by a Euclidean-distance constraint on block positions. Comparison results between the proposed method and related ones demonstrate the feasibility and efficiency of the proposed algorithm.
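The block-matching pipeline can be sketched with raw pixel vectors standing in for the modified Zernike moments: overlapping blocks become feature vectors, lexicographical sorting groups identical features, and a minimum spatial offset filters out trivially overlapping matches. This is a bare-bones illustration of the pipeline shape, not the paper's feature extraction.

```python
def find_copy_move(image, block=3):
    """Copy-move candidate detection sketch on a 2D list of pixels.

    Overlapping block-by-block feature vectors (raw pixels here, as a
    stand-in for modified Zernike moments) are lexicographically sorted;
    adjacent identical features at a sufficient spatial offset are
    reported as duplicate-region candidates.
    """
    h, w = len(image), len(image[0])
    feats = []
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            vec = tuple(image[i + di][j + dj]
                        for di in range(block) for dj in range(block))
            feats.append((vec, (i, j)))
    feats.sort()  # lexicographical sorting brings identical blocks together
    matches = []
    for (v1, p1), (v2, p2) in zip(feats, feats[1:]):
        if v1 == v2:
            # require a minimum offset so a block never matches itself
            if abs(p1[0] - p2[0]) + abs(p1[1] - p2[1]) >= block:
                matches.append((p1, p2))
    return matches
```

Robust features such as Zernike moments replace the raw-pixel vectors in practice, so that matches survive the smoothing and re-compression that typically accompany a forgery.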

