Numerical Aspects of Particle-in-Cell Simulations for Plasma-Motion Modeling of Electric Thrusters

Aerospace, 2021, Vol. 8 (5), pp. 138
Author(s):  
Giuseppe Gallo ◽  
Adriano Isoldi ◽  
Dario Del Gatto ◽  
Raffaele Savino ◽  
Amedeo Capozzoli ◽  
...  

The present work focuses on a detailed description of an in-house particle-in-cell (PIC) code developed by the authors, whose main aim is to perform highly accurate plasma simulations on an off-the-shelf computing platform in a relatively short computational time, despite the large number of macro-particles employed in the computation. A smart strategy to set up the code is proposed; in particular, parallel calculation on the GPU is explored as a possible way to reduce the computing time. An application to a Hall-effect thruster is shown to validate the PIC numerical model and to highlight the strengths of introducing highly accurate schemes for the electric-field interpolation and the time integration of the macro-particle trajectories. A further application to a helicon double-layer thruster is presented, in which the PIC code is used as a fast tool to analyze the performance of this specific class of electric thrusters.
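To make the field-gather and trajectory-integration steps concrete, below is a minimal sketch (not the authors' code) of a first-order field interpolation and leapfrog push on a uniform 1-D grid; the grid spacing dx, time step dt, and charge-to-mass ratio qm are illustrative assumptions, and the paper's higher-accuracy schemes would replace both steps.

```python
# Minimal illustrative PIC sketch: linear gather of the electric field to
# macro-particle positions, then a leapfrog position/velocity update.
# Assumes all particles remain strictly inside the grid.
import numpy as np

def gather_field(E_grid, x, dx):
    """Linearly interpolate the grid field E_grid to particle positions x."""
    idx = np.floor(x / dx).astype(int)   # cell index of each particle
    w = x / dx - idx                     # fractional distance within the cell
    return (1.0 - w) * E_grid[idx] + w * E_grid[idx + 1]

def leapfrog_push(x, v, E_grid, dx, dt, qm):
    """Advance velocities with the interpolated field, then positions."""
    v = v + qm * gather_field(E_grid, x, dx) * dt
    x = x + v * dt
    return x, v
```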

2019, Vol. 85 (2)
Author(s):  
G. Gallina ◽  
M. Magarotto ◽  
M. Manente ◽  
Daniele Pavarin

EDI (enhanced biDimensional pIc) is a two-dimensional (2-D) electrostatic/magnetostatic particle-in-cell (PIC) code designed to optimize plasma-based systems. The code is built on an unstructured mesh of triangles, allowing for arbitrary geometries. The PIC core comprises a Boris leapfrog scheme that can manage multiple species. Particle tracking locates particles in the mesh using a fast and simple priority-sorting algorithm. A magnetic field with arbitrary topology can be imposed to study magnetized particle dynamics. The electrostatic fields are then computed by solving Poisson's equation with a finite-element-method solver. The latter is an external solver that has been suitably modified for integration into EDI. The major advantage of directly incorporating an external solver into the EDI structure is its strong flexibility: different physical problems (electrostatic, magnetostatic, etc.) can be coupled together. EDI is written in C, which allows the rapid development of new modules. Considerable development effort went into optimizing the linking efficiency in order to minimize computational time. Finally, EDI is multiplatform (Linux, Mac OS X) software.
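As an illustration of the Boris leapfrog scheme named above, here is a minimal non-relativistic Boris velocity update in Python (our sketch, not EDI's C implementation); E and B are assumed to be the fields already interpolated to the particle position.

```python
import numpy as np

def boris_push(v, E, B, dt, qm):
    """One Boris velocity update; v, E, B are 3-vectors, qm = q/m."""
    v_minus = v + 0.5 * qm * dt * E            # first half electric kick
    t = 0.5 * qm * dt * B                      # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)   # intermediate rotation
    v_plus = v_minus + np.cross(v_prime, s)    # complete magnetic rotation
    return v_plus + 0.5 * qm * dt * E          # second half electric kick
```

The scheme splits the electric acceleration into two half-steps around an exact-magnitude rotation about B, which is what preserves the gyro-motion over long runs.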


Author(s):  
Reza Alebrahim ◽  
Pawel Packo ◽  
Mirco Zaccariotto ◽  
Ugo Galvanetto

In this study, methods to mitigate anomalous wave propagation in 2-D Bond-Based Peridynamics (PD) are presented. As in classical non-local models, an irregular wave-transmission phenomenon occurs at high frequencies. This feature of the dynamic behavior of PD limits its potential applications. A minimization method based on weighted-residual point collocation is introduced to substantially extend the frequency range of wave-motion modeling. The optimization problem, developed through inverse analysis, is set up by comparing exact and numerical dispersion curves and minimizing the error in the frequency-wavenumber domain. A significant improvement in wave-propagation simulation using Bond-Based PD is observed.
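The dispersion-matching idea can be illustrated with a simplified 1-D analogue (our assumption, not the paper's exact formulation): choose nonlocal stencil weights so that the discrete dispersion relation matches the exact one at a set of collocation wavenumbers, solved as a linear least-squares problem.

```python
# Hedged 1-D sketch: pick weights c_j so that the discrete dispersion
#   w^2(k) = sum_j c_j * (1 - cos(j*k*dx))
# matches the exact relation w = c*k (squared) at collocation wavenumbers.
import numpy as np

c, dx, m = 1.0, 0.01, 4                    # wave speed, spacing, horizon in cells
k = np.linspace(0.01, np.pi / dx, 200)     # collocation points up to Nyquist
A = 1.0 - np.cos(np.outer(k, np.arange(1, m + 1)) * dx)   # design matrix
target = (c * k) ** 2                      # exact dispersion, squared
weights, *_ = np.linalg.lstsq(A, target, rcond=None)      # least-squares fit
print(weights)
```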


Author(s):  
Ajay Jasra ◽  
Maria De Iorio ◽  
Marc Chadeau-Hyam

In this paper, we consider a simulation technique for stochastic trees. One of the most important tasks in computational genetics is the calculation and subsequent maximization of the likelihood function associated with such models, typically via importance sampling and sequential Monte Carlo techniques. The approach proceeds by simulating the tree backward in time, from the observed data to a most recent common ancestor. In many cases, however, the computational time and the variance of the estimators are too high for standard approaches to be useful. In this paper, we propose to stop the simulation early, which yields biased estimates of the likelihood surface. The bias is investigated from a theoretical point of view. Results from simulation studies are also given to investigate the balance between loss of accuracy, savings in computing time, and variance reduction.
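A minimal sketch of the truncated-simulation idea, under our own simplifying assumptions: sequential importance sampling is stopped after a fixed number of backward-in-time steps instead of running all the way to the most recent common ancestor, trading bias for computing time. The propose and weight_increment functions are hypothetical placeholders for the model-specific proposal and importance-weight update.

```python
# Illustrative sketch (ours, not the paper's algorithm): importance sampling
# truncated after max_steps backward-in-time moves. Early stopping saves
# computing time but biases the likelihood estimator.
import numpy as np

def truncated_is_likelihood(x0, n_particles, max_steps,
                            propose, weight_increment, rng):
    states = [x0] * n_particles
    logw = np.zeros(n_particles)
    for _ in range(max_steps):            # stop early, before reaching the MRCA
        for i, s in enumerate(states):
            new_s = propose(s, rng)       # backward-in-time proposal
            logw[i] += weight_increment(s, new_s)
            states[i] = new_s
    return np.exp(logw).mean()            # biased estimate of the likelihood
```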


2021
Author(s):  
Derek Neben ◽  
Michael Weller ◽  
Evan Scott

Proceedings, 2018, Vol. 2 (22), pp. 1400
Author(s):  
Johannes Schmelcher ◽  
Max Kleine Büning ◽  
Kai Kreisköther ◽  
Dieter Gerling ◽  
Achim Kampker

Energy-efficient electric motors are attracting increased attention since they are used, for instance, in electric cars or to reduce operational costs. Owing to their high efficiency, permanent-magnet synchronous motors are used progressively more. However, the need for rare-earth magnets in such high-efficiency motors is problematic not only with regard to cost but also in socio-political and environmental respects. Therefore, increasing effort has to be put into finding the best possible design. The goals are, among others, to reduce the amount of rare-earth magnet material and to increase the efficiency. In the first part of this multipart paper, the characteristics of optimization problems in engineering and general methods for solving them are presented. In part two, different approaches to the design-optimization problem of electric motors are highlighted. The last part evaluates the different categories of optimization methods with respect to three criteria: degrees of freedom, computing time, and the required user experience. As will be seen, there is a conflict of objectives regarding these criteria. Requirements that a new optimization method must fulfil in order to resolve this conflict of objectives are also presented in the last part.


Author(s):  
Anju Gupta ◽  
R K Bathla

With so many people now carrying sensor-equipped mobile devices such as smartphones, exploiting the capabilities of these devices has become a promising route to large-scale behavioural and environmental sensing. Opportunistic sensing, a key enabler of pervasive context assessment, has been applied effectively to a wide range of applications. The sensor cloud combines cloud technology with wireless sensors, resulting in a scalable and cost-effective computing platform for real-time applications. Because a sensor's battery power is limited and the data centre's servers consume a significant amount of energy to provide storage, a sensor cloud must be energy efficient. This study provides a fog-based framework for enabling these kinds of technologies quickly and effectively. The proposed framework comprises fundamental algorithms that help set up and coordinate fog sensing jobs. It creates effective multihop routes for coordinating the relevant devices and transporting the acquired sensory data to fog sinks. Energy-efficient sensor-cloud approaches are categorized into different groups, and each technique is examined against numerous characteristics. The practicality of the framework is evaluated through a series of thorough simulations in NS3, and the contribution of each parameter for each technique is computed.
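One ingredient described above, the construction of multihop routes toward a fog sink, can be sketched as a minimum-cost path search. The graph representation, the single energy cost per link, and the assumption of symmetric links are our illustrative simplifications, not the paper's exact algorithm.

```python
# Hedged sketch: Dijkstra search outward from the fog sink over an
# energy-weighted graph, assuming symmetric link costs. Returns each
# node's minimum energy cost to reach the sink.
import heapq

def min_energy_routes(adj, sink):
    """adj: {node: [(neighbor, energy_cost), ...]}."""
    dist = {sink: 0.0}
    heap = [(0.0, sink)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```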


2021
Author(s):  
Carlo Cristiano Stabile ◽  
Marco Barbiero ◽  
Giorgio Fighera ◽  
Laura Dovera

Optimizing well locations for a green field is critical to mitigating development risks. Performing such workflows with reservoir simulations is very challenging due to the huge computational cost; proxy models can instead provide accurate estimates at a fraction of the computing time. This study presents an application of new-generation functional proxies to optimize the well locations in a real oil field with respect to the actualized oil production over all the different geological realizations. The proxies are built with Universal Trace Kriging and are functional in time, allowing oil flows to be actualized over the asset lifetime. They are trained on reservoir simulations using randomly sampled well locations. Two proxies are created, for a pessimistic model (P10) and a mid-case model (P50), to capture the geological uncertainties. The optimization step uses the Non-dominated Sorting Genetic Algorithm, with the discounted oil productions of the two proxies as objective functions. An adaptive approach was employed: optimized points found in a first optimization were used to re-train the proxy models, and a second run of optimization was performed.

The methodology was applied to a real oil reservoir to optimize the location of four vertical production wells and compared against reference locations. 111 geological realizations were available, in which one relevant uncertainty is the presence of possible compartments. The decision space, represented by the horizontal translation vectors of each well, was sampled using Plackett-Burman and Latin-Hypercube designs. A first application produced a proxy with poor predictive quality; redrawing the areas to avoid overlaps and to confine the decision space of each well to one compartment improved the quality. This suggests that the proxy's predictive ability deteriorates in the presence of highly non-linear responses caused by sealing faults or by wells interchanging positions. We then followed a two-step adaptive approach: a first optimization was performed and the resulting Pareto front was validated with reservoir simulations; to further improve the proxy quality in this region of the decision space, the validated Pareto-front points were added to the initial dataset to retrain the proxy and rerun the optimization.

The final well locations were validated on all 111 realizations with reservoir simulations and resulted in an overall increase of the discounted production of about 5% compared to the reference development strategy. The adaptive approach, combined with functional proxies, proved successful in improving the workflow by purposefully enriching the training set with data points that enhance the effectiveness of the optimization step. Each optimization run relies on about 1 million proxy evaluations, which require negligible computational time; the same workflow carried out with standard reservoir simulations would have been practically unfeasible.
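The adaptive loop can be sketched generically as follows. This is our simplification: a scikit-learn Gaussian process stands in for Universal Trace Kriging, a single-objective top-k selection stands in for the multi-objective NSGA-II step, and simulate is a placeholder for the reservoir simulator.

```python
# Hedged sketch of the adaptive proxy-optimization loop: train a surrogate,
# optimize on it, validate the best candidates with the true simulator,
# enrich the training set, and retrain. Assumes a scalar objective to be
# maximized (e.g., discounted production) and X_init as a 2-D numpy array.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def adaptive_proxy_optimize(simulate, X_init, n_rounds=2,
                            n_candidates=100_000, rng=None):
    rng = rng or np.random.default_rng(0)
    X = np.asarray(X_init, dtype=float)
    y = np.array([simulate(x) for x in X])                   # initial runs
    for _ in range(n_rounds):
        proxy = GaussianProcessRegressor().fit(X, y)         # surrogate
        cand = rng.uniform(X.min(0), X.max(0),
                           size=(n_candidates, X.shape[1]))  # cheap proxy evals
        best = cand[np.argsort(proxy.predict(cand))[-10:]]   # top predictions
        y_true = np.array([simulate(x) for x in best])       # validate with simulator
        X, y = np.vstack([X, best]), np.concatenate([y, y_true])  # enrich + retrain
    return X, y
```

The key design point mirrored from the paper is that only the validated candidates go back into the training set, so the proxy is refined exactly where the optimizer is exploring.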


2010, Vol. 5 (2), pp. 85-97
Author(s):  
Andrey V. Terekhov ◽  
Igor V. Timofeev ◽  
Konstantin V. Lotov

A two-dimensional particle-in-cell numerical model is developed to simulate the collective relaxation of powerful electron beams in plasmas. To increase the efficiency of parallel particle-in-cell simulations on supercomputers, the Dichotomy Algorithm is used for the inversion of the Laplace operator. The proposed model is tested on several well-known physical phenomena and is shown to adequately simulate the basic effects of beam-driven turbulence. The modulational instability is also studied in the regime where the energy of the pumping wave significantly exceeds the thermal plasma energy.
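As a simple stand-in for the Laplace-inversion step (not the Dichotomy Algorithm itself, whose point is efficient parallelization on supercomputers), the sketch below inverts the 2-D Laplacian with FFTs on a periodic grid, which is the operation such a field solver performs each PIC time step.

```python
# Illustrative stand-in: solve laplace(phi) = -rho on a periodic 2-D grid
# via FFT (units absorbed into rho). Not the parallel Dichotomy Algorithm.
import numpy as np

def solve_poisson_periodic(rho, dx):
    ny, nx = rho.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                        # avoid division by zero (mean mode)
    phi_hat = np.fft.fft2(rho) / k2
    phi_hat[0, 0] = 0.0                   # fix the undetermined constant
    return np.real(np.fft.ifft2(phi_hat))
```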


Author(s):  
A. Martinez de la Ossa ◽  
R. W. Assmann ◽  
M. Bussmann ◽  
S. Corde ◽  
J. P. Couperus Cabadağ ◽  
...  

We present a conceptual design for a hybrid accelerator in which a laser-driven plasma wakefield accelerator (LWFA) feeds a beam-driven plasma wakefield accelerator (PWFA). In this set-up, the output beams from an LWFA stage are used as input beams for a new PWFA stage. In the PWFA stage, a new witness beam of greatly increased quality can be produced and accelerated to higher energies. The feasibility and potential of this concept are shown through exemplary particle-in-cell simulations. In addition, preliminary simulation results for a proof-of-concept experiment at Helmholtz-Zentrum Dresden-Rossendorf (Germany) are shown. This article is part of the Theo Murphy meeting issue ‘Directions in particle beam-driven plasma wakefield acceleration’.

