Implementation of neutralizing fields for particle–particle simulations using like charges

2021 ◽  
Vol 87 (3) ◽  
Author(s):  
Yinjian Zhao ◽  
Chen Cui ◽  
Yanan Zhang ◽  
Yuan Hu

The particle–particle (PP) model has a growing number of applications in plasma simulations because of its high accuracy in resolving Coulomb collisions. One of the main issues restricting the practical use of the PP model is its large computational cost, which is now becoming acceptable thanks to state-of-the-art parallel computing techniques. Another issue is the singularity that occurs when two particles get too close. The most effective approach to avoiding the singularity is to simulate particles with only like charges plus a neutralizing field, such that the short-range collisions are equivalent to those obtained with unlike charges. In this paper, we introduce a way of adding the neutralizing field by using the analytical solution of the electric field in a domain filled with uniformly distributed charge, for applications with homogeneous and quasi-neutral plasmas under a reflective boundary condition. Two of the most common domain geometries, cubic and spherical, are considered. The model is verified by comparing simulation results with an analytical solution of an electron–ion temperature relaxation problem and with a corresponding simulation using unlike charges. In addition, it is found that a PP simulation using like charges can achieve a significant speed-up of 100 compared with a corresponding simulation using unlike charges, owing to the capability of using larger time steps while maintaining the same energy conservation.
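
For a rough illustration of the scheme described above (not the authors' code), the sketch below combines a direct like-charge Coulomb sum with the analytic field of a uniform neutralizing background filling a spherical domain, E_bg(r) = rho_b r / (3 eps0); the function name, argument layout, and the optional softening guard are assumptions made for the example.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def forces_like_charges_sphere(pos, q, n0, softening=0.0):
    """Pairwise Coulomb forces among like charges plus the analytic field of a
    uniform neutralizing background of opposite charge filling a sphere
    (E_bg(r) = -n0*q*r/(3*eps0), pointing toward the centre).

    pos : (N, 3) positions relative to the sphere centre [m]
    q   : charge of each (like-charged) particle [C]
    n0  : number density of the neutralizing background [1/m^3]
    """
    n = pos.shape[0]
    f = np.zeros_like(pos)
    # Direct particle-particle sum, O(N^2); with like charges the close-range
    # interaction is repulsive, so the softening guard is optional.
    for i in range(n):
        d = pos[i] - pos                          # vectors from every particle to particle i
        r2 = np.einsum('ij,ij->i', d, d) + softening**2
        r2[i] = np.inf                            # exclude self-interaction
        f[i] = q * q / (4 * np.pi * EPS0) * np.sum(d / r2[:, None]**1.5, axis=0)
    # Analytic field of the uniformly distributed neutralizing charge
    f += -q * (n0 * q) * pos / (3 * EPS0)
    return f
```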

Author(s):  
Jimmy Ming-Tai Wu ◽  
Qian Teng ◽  
Shahab Tayeb ◽  
Jerry Chun-Wei Lin

High average-utility itemset mining (HAUIM) was established to provide a fairer measure than generic high-utility itemset mining (HUIM) for revealing satisfactory and interesting patterns. In practical applications, the database changes dynamically as insertion/deletion operations are performed on it. Several works were designed to handle the insertion process, but fewer studies have focused on processing the deletion process for knowledge maintenance. In this paper, we develop a PRE-HAUI-DEL algorithm that utilizes the pre-large concept on HAUIM for handling transaction deletion in dynamic databases. The pre-large concept serves as a buffer on HAUIM that reduces the number of database scans when the database is updated, particularly under transaction deletion. Two upper-bound values are also established to prune unpromising candidates early, which speeds up the computation. Experimental results show that the designed PRE-HAUI-DEL algorithm performs well compared with the Apriori-like model in terms of runtime, memory, and scalability on dynamic databases.
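
As a point of reference for the measure being maintained, here is a minimal sketch of the standard average-utility computation that HAUIM algorithms (including the deletion-aware one described here) build on; the dict-based transaction layout and function names are assumptions, not the paper's data structures.

```python
def average_utility(itemset, transaction):
    """au(X, T): utility of itemset X in transaction T divided by |X|.
    transaction maps item -> (purchase quantity, unit profit)."""
    if not set(itemset) <= transaction.keys():
        return 0.0
    utility = sum(transaction[i][0] * transaction[i][1] for i in itemset)
    return utility / len(itemset)

def total_average_utility(itemset, database):
    """Sum of au(X, T) over all transactions T containing X; an itemset is a
    high average-utility itemset when this sum reaches the minimum threshold."""
    return sum(average_utility(itemset, t) for t in database)
```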


2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Hsuan-Ming Huang ◽  
Ing-Tsung Hsiao

Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. Methods. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. Conclusions. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
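
For readers unfamiliar with the building blocks, the sketch below shows a generic FISTA iteration with soft-thresholding on a simple l1-regularized least-squares model; it only illustrates the shrinkage and Nesterov-style acceleration ideas, not the paper's exact TDM-STF + OSTR pipeline, and the matrix A merely stands in for a linearized CT system model.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink |x| by t, zero out values below t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=100):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1 (illustrative stand-in
    for accelerated CS-based reconstruction with soft-threshold filtering)."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                   # data-fidelity gradient at the extrapolated point
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)  # Nesterov-style extrapolation
        x, t = x_new, t_new
    return x
```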


2018 ◽  
Vol 2018 ◽  
pp. 1-12
Author(s):  
Yun-Hua Wu ◽  
Lin-Lin Ge ◽  
Feng Wang ◽  
Bing Hua ◽  
Zhi-Ming Chen ◽  
...  

In order to satisfy the real-time requirement of spacecraft autonomous navigation using natural landmarks, a novel algorithm called CSA-SURF (chessboard segmentation algorithm and speeded up robust features) is proposed to improve the speed of the image registration process without loss of repeatability performance. It is a combination of the chessboard segmentation algorithm and SURF. Here, SURF is used to extract the features from satellite images because of its scale- and rotation-invariant properties and low computational cost. CSA is based on image segmentation technology and aims to find representative blocks, which are allocated to different tasks to speed up the image registration process. To illustrate the advantages of the proposed algorithm, PCA-SURF, which is the combination of principal component analysis and SURF, is also analyzed in this paper for comparison. Furthermore, the random sample consensus (RANSAC) algorithm is applied to eliminate false matches for further accuracy improvement. The simulation results show that the proposed strategy obtains good results, especially under scaling and rotation variation. Besides, CSA-SURF reduces extraction time by 50% and matching time by 90% without losing repeatability performance, compared with the SURF algorithm. The proposed method has been demonstrated to be an alternative way of performing image registration for spacecraft autonomous navigation using natural landmarks.
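
The feature-matching stage described above can be sketched with OpenCV as follows, assuming the opencv-contrib build that provides cv2.xfeatures2d; the chessboard segmentation itself and the 0.7 ratio-test threshold are not taken from the paper.

```python
import cv2
import numpy as np

def register_surf_ransac(img_ref, img_new, hessian_threshold=400):
    """Match SURF features between two grayscale images and fit a homography with RANSAC."""
    surf = cv2.xfeatures2d.SURF_create(hessian_threshold)
    kp1, des1 = surf.detectAndCompute(img_ref, None)
    kp2, des2 = surf.detectAndCompute(img_new, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe's ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects the remaining false matches while estimating the transform
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inlier_mask
```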


1988 ◽  
Vol 66 (6) ◽  
pp. 467-470 ◽  
Author(s):  
Sikha Bhattacharyya ◽  
R. K. Roychoudhury

The effect of ion temperature on ion-acoustic solitary waves in a two-ion plasma has been investigated using the pseudopotential approach of Sagdeev. An analytical solution for relatively small amplitudes has also been obtained. Our results have been compared, wherever possible, with the experimental results obtained by Nakamura. It is found that a finite ion temperature considerably modifies the restrictions on the Mach number obtained for cold ions.
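
For orientation, the pseudopotential (energy-integral) formulation in the simplest limit of a single cold-ion species with Boltzmann electrons, in normalized variables, reads as follows; the paper's two-ion, finite-ion-temperature case generalizes this expression, and the bounds quoted apply only to this reference limit.

$$\frac{1}{2}\left(\frac{d\phi}{dx}\right)^{2} + V(\phi) = 0,\qquad V(\phi) = 1 - e^{\phi} + M^{2}\left[1 - \sqrt{1 - \frac{2\phi}{M^{2}}}\right],$$

with compressive solitary waves existing for 1 < M < 1.6 (approximately); a finite ion temperature modifies these restrictions on the Mach number.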


Author(s):  
Franz Pichler ◽  
Gundolf Haase

A finite element code is developed in which all of the computationally expensive steps are performed on a graphics processing unit via the THRUST and PARALUTION libraries. The code focuses on the simulation of transient problems, where the repeated computations per time step dominate the computational cost. It is used to solve partial and ordinary differential equations as they arise in thermal-runaway simulations of automotive batteries. The speed-up obtained by utilizing the graphics processing unit for every critical step is compared against the single-core and multi-threaded solutions that are also supported by the chosen libraries. In this way, a high total speed-up on the graphics processing unit is achieved without the need to program a single classical Compute Unified Device Architecture kernel.
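
The offload pattern described here can be sketched in Python with CuPy as a stand-in for the Thrust/PARALUTION offload (the paper's code is C++): the point is that the repeated per-time-step work stays on the device and data crosses the host-device boundary only twice. The explicit update rule and all names are illustrative assumptions.

```python
import cupy as cp   # GPU array library, used here only as a stand-in for Thrust/PARALUTION

def transient_solve_gpu(K_host, u0_host, dt, n_steps):
    """Keep the expensive repeated per-time-step linear algebra on the GPU
    and copy data between host and device only once in each direction."""
    K = cp.asarray(K_host)        # one-time host -> device transfer of the FE matrix
    u = cp.asarray(u0_host)       # one-time transfer of the initial state
    for _ in range(n_steps):
        u = u + dt * (K @ u)      # illustrative explicit update, entirely on the device
    return cp.asnumpy(u)          # single device -> host transfer of the result
```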


2020 ◽  
Vol 493 (4) ◽  
pp. 5761-5772 ◽  
Author(s):  
Takumi Ohmura ◽  
Mami Machida ◽  
Kenji Nakamura ◽  
Yuki Kudoh ◽  
Ryoji Matsumoto

We present the results of two-temperature magnetohydrodynamic simulations of the propagation of sub-relativistic jets of active galactic nuclei. The dependence of the electron and ion temperature distributions on the fraction of electron heating, fe, at the shock front is studied for fe = 0, 0.05, and 0.2. Numerical results indicate that in sub-relativistic, rarefied jets, the jet plasma crossing the terminal shock forms a hot, two-temperature plasma in which the ion temperature is higher than the electron temperature. The two-temperature plasma expands and forms a backflow referred to as a cocoon, in which the ion temperature remains higher than the electron temperature for longer than 100 Myr. Electrons in the cocoon are continuously heated by ions through Coulomb collisions, and the electron temperature thus remains at Te > 109 K in the cocoon. X-ray emissions from the cocoon are weak because the electron number density is low. Meanwhile, X-rays are emitted from the shocked intracluster medium (ICM) surrounding the cocoon. Mixing of the jet plasma and the shocked ICM through the Kelvin–Helmholtz instability at the interface enhances X-ray emissions around the contact discontinuity between the cocoon and shocked ICM.
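
The Coulomb heating of electrons by ions mentioned above follows the standard two-temperature relaxation equations; the form below is the textbook Spitzer-type expression, written up to numerical factors and not taken from the paper.

$$\frac{dT_e}{dt} = \frac{T_i - T_e}{\tau_{ei}},\qquad \frac{dT_i}{dt} = -\frac{n_e}{n_i}\,\frac{T_i - T_e}{\tau_{ei}},\qquad \tau_{ei} \propto \frac{m_i\,T_e^{3/2}}{m_e^{1/2}\,n_i\,Z^2 e^4 \ln\Lambda},$$

so the low-density cocoon plasma equilibrates slowly, which is why the ion temperature can stay above the electron temperature for such long times.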


2021 ◽  
Vol 28 (2) ◽  
pp. 163-182
Author(s):  
José L. Simancas-García ◽  
Kemel George-González

Shannon’s sampling theorem is one of the most important results of modern signal theory. It describes the reconstruction of any band-limited signal from its uniformly spaced samples. On the other hand, although less well known, there is the discrete sampling theorem, proved by Cooley while he was working on the development of an algorithm to speed up the calculation of the discrete Fourier transform. Cooley showed that a sampled signal can be resampled by selecting a smaller number of samples, which reduces computational cost. It is then possible to reconstruct the original sampled signal by a reverse process. In principle, the two theorems are not related. However, in this paper we show that, in the context of Non-Standard Mathematical Analysis (NSA) and the hyperreal number system R, the two theorems are equivalent. The difference between them becomes a matter of scale. With the scale changes that the hyperreal number system allows, discrete variables and functions become continuous, and Shannon’s sampling theorem emerges from the discrete sampling theorem.
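
A minimal NumPy demonstration of the discrete (Cooley-type) sampling theorem described above: a length-N signal whose DFT occupies only M bins can be recovered exactly from every (N/M)-th sample by un-aliasing the short DFT. The specific sizes and the reconstruction routine are illustrative assumptions, not the paper's construction.

```python
import numpy as np

N, M = 64, 16                  # original length and number of retained samples
D = N // M                     # downsampling factor

# Build a real signal that is band-limited in the DFT sense (support < M/2 bins).
rng = np.random.default_rng(0)
X = np.zeros(N, dtype=complex)
X[:M // 2] = rng.normal(size=M // 2) + 1j * rng.normal(size=M // 2)
X[0] = X[0].real
X[-(M // 2 - 1):] = np.conj(X[1:M // 2][::-1])   # enforce conjugate symmetry
x = np.fft.ifft(X).real

# Discrete sampling theorem: keep every D-th sample ...
y = x[::D]

# ... and reconstruct by mapping the short DFT back onto the known frequency support.
Y = np.fft.fft(y)
X_rec = np.zeros(N, dtype=complex)
X_rec[:M // 2] = D * Y[:M // 2]
X_rec[-(M // 2):] = D * Y[M // 2:]
x_rec = np.fft.ifft(X_rec).real

print(np.max(np.abs(x - x_rec)))   # ~1e-15: exact up to round-off
```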


Inventions ◽  
2021 ◽  
Vol 6 (4) ◽  
pp. 70
Author(s):  
Elena Solovyeva ◽  
Ali Abdullah

In this paper, the structure of a separable convolutional neural network, consisting of an embedding layer, separable convolutional layers, a convolutional layer, and global average pooling, is presented for binary and multiclass text classification. The advantage of the proposed structure is the absence of multiple fully connected layers, which are commonly used to increase classification accuracy but raise the computational cost. The combination of low-cost separable convolutional layers and a convolutional layer is proposed to attain high accuracy and, simultaneously, to reduce the complexity of neural classifiers. The advantages are demonstrated on binary and multiclass classification of written texts using the proposed networks with sigmoid and softmax activation functions in the convolutional layer. For both binary and multiclass classification, the accuracy obtained by the separable convolutional neural networks is higher than that of several investigated types of recurrent neural networks and fully connected networks.
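
A minimal Keras sketch of the layer ordering described above (embedding, separable convolutions, a convolutional layer with softmax, then global average pooling in place of fully connected layers); the vocabulary size, embedding dimension, filter counts, and kernel sizes are assumptions, not the paper's hyperparameters, and the binary case would use a single sigmoid-activated filter instead.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE, EMB_DIM, N_CLASSES = 20000, 128, 4   # assumed sizes

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMB_DIM),
    layers.SeparableConv1D(64, 5, activation='relu', padding='same'),
    layers.SeparableConv1D(64, 5, activation='relu', padding='same'),
    layers.Conv1D(N_CLASSES, 3, activation='softmax', padding='same'),
    layers.GlobalAveragePooling1D(),   # averages per-position class scores; no dense layers
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```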


Author(s):  
Waleed Shakeel ◽  
Ming Lu

Deriving a reliable earthwork job cost estimate entails analyzing the interaction of numerous variables defined in a highly complex and dynamic system. Using simulation to plan earthwork haul jobs delivers high accuracy in cost estimating. However, given practical limitations of time and expertise, simulation remains prohibitively expensive and is rarely applied in the construction field. The development of a pragmatic tool for field applications that mimics simulation-derived results while consuming less time was thus warranted. In this research, a spreadsheet-based analytical tool was developed using data from industry benchmark databases (such as the CAT Handbook and RSMeans). Based on a case study, the proposed methodology outperformed commonly used estimating methods and compared closely with the results obtained from simulation in controlled experiments.
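
The kind of deterministic, handbook-driven calculation such a spreadsheet tool performs can be sketched as below; every parameter name and the fixed-cycle-time assumption are illustrative and not taken from the paper, which calibrates its tool against simulation instead.

```python
def haul_estimate(bank_volume_m3, bucket_m3, fill_factor,
                  load_s, haul_s, dump_s, return_s,
                  trucks, hourly_rate_per_truck, efficiency=0.83):
    """Deterministic haul-job estimate from handbook-style inputs
    (cycle times in seconds, volumes in bank cubic metres)."""
    cycle_s = load_s + haul_s + dump_s + return_s            # one truck round trip
    loads_per_hour = 3600.0 / cycle_s * efficiency           # working-time efficiency factor
    production_m3_per_hour = loads_per_hour * bucket_m3 * fill_factor * trucks
    duration_h = bank_volume_m3 / production_m3_per_hour
    cost = duration_h * hourly_rate_per_truck * trucks
    return duration_h, cost
```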


2020 ◽  
Vol 20 (6) ◽  
pp. 116-125
Author(s):  
Nikolay Shegunov ◽  
Oleg Iliev

Multilevel Monte Carlo (MLMC) attracts great interest for numerical simulations of stochastic partial differential equations (SPDEs) due to its superiority over the standard Monte Carlo (MC) approach. MLMC combines many cheap, fast simulations with a few slow and expensive ones in such a way that the variance is reduced and a significant speed-up is achieved. Simulations with MC/MLMC consist of three main components: generating random fields, solving a deterministic problem, and reducing the variance. Each part admits a different degree of parallelism. Compared with classical MC, MLMC introduces “levels” on which the sampling is done. These levels have different computational costs; thus, efficiently utilizing the parallel resources becomes a non-trivial problem. The main focus of this paper is the parallelization of the MLMC algorithm.
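
For concreteness, a plain (serial) MLMC estimator looks like the sketch below; the sampler interface is an assumption, and the paper's contribution concerns how the per-level work, whose cost grows with the level, is distributed over parallel resources.

```python
import numpy as np

def mlmc_estimate(sampler, n_samples_per_level):
    """Plain multilevel Monte Carlo estimator.

    sampler(level, n) must return n samples of the level correction
    Y_l = P_l - P_{l-1} (with Y_0 = P_0 on the coarsest level)."""
    estimate, est_variance = 0.0, 0.0
    for level, n in enumerate(n_samples_per_level):
        y = sampler(level, n)           # cheap on coarse levels, expensive on fine ones
        estimate += float(np.mean(y))   # telescoping sum of level corrections
        est_variance += float(np.var(y)) / n
    return estimate, est_variance
```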

