numerical algorithm
Recently Published Documents

Total documents: 1235 (five years: 244)
H-index: 44 (five years: 7)

2021, Vol 6 (1), pp. 9
Author(s): Mohamed M. Al-Shomrani, Mohamed A. Abdelkawy

The advection–dispersion equations have received considerable theoretical attention. The difficulty in dealing with these problems stems from the fact that no closed-form solution exists and that tackling them with local numerical methods is hard. In this research, the Riesz fractional advection–dispersion equations are studied quantitatively. The numerical methodology is based on a collocation approach combined with a simple numerical algorithm. A comprehensive theoretical formulation is provided, along with numerical examples, to demonstrate the technique's performance and accuracy.
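The abstract does not reproduce the authors' collocation scheme. As a minimal sketch of how a Riesz fractional derivative can be discretized on a uniform grid, the fractional centered-difference scheme (a standard method, not necessarily the one used in the paper) might look like:

```python
import math

def riesz_weights(alpha, K):
    """Fractional centered-difference weights g_0..g_K for the Riesz
    derivative of order alpha. g_0 = Gamma(alpha+1)/Gamma(alpha/2+1)^2;
    the ratio recurrence avoids Gamma-function poles for integer alpha."""
    g = [math.gamma(alpha + 1) / math.gamma(alpha / 2 + 1) ** 2]
    for k in range(K):
        g.append(g[-1] * (k - alpha / 2) / (alpha / 2 + k + 1))
    return g  # symmetric: g_{-k} = g_k

def riesz_derivative(u, h, alpha):
    """Approximate d^alpha u / d|x|^alpha on a uniform grid with
    spacing h, treating u as zero outside the grid."""
    n = len(u)
    g = riesz_weights(alpha, n)
    out = []
    for i in range(n):
        s = 0.0
        for k in range(-n + 1, n):
            j = i - k
            if 0 <= j < n:
                s += g[abs(k)] * u[j]
        out.append(-s / h ** alpha)
    return out
```

A useful sanity check on such weights is that for alpha = 2 the scheme collapses to the standard three-point second-difference stencil, and that the operator annihilates constants in the limit of a long stencil.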


2021, Vol ahead-of-print (ahead-of-print)
Author(s): Tamour Zubair, Muhammad Usman, Tiao Lu

Purpose: The purpose of this research is to articulate a multifaceted kind of highly unstable initial perturbation and to analyze the behavior of the plasma particles under time-fractional evolution.

Design/methodology/approach: For this purpose, the authors designed a specific geometry and interpreted it as a mathematical model using the Vlasov–Maxwell system. The suggested algorithm is based on finite-difference and spectral estimation ideas. The management of time and memory in generic code for computational purposes is also discussed.

Findings: The main purpose is to analyze the fractional behavior of the plasma particles and the capability of the suggested numerical algorithm. Due to the initial perturbations, many sudden variations occur in the formulated system. The graphical behavior shows that the SR parameter produces the strongest disruption compared to the others. Varying the fractional parameter over the defined domain reveals hidden features of the plasma particles. The designed scheme is efficient and convergent and captures the physics of the problem well.

Practical implications: Plasma materials are used in many areas of science. In this paper, the authors therefore extend the capability of the mathematical plasma model with a specific geometry, suggest a suitable numerical algorithm, and give a detailed physical analysis of the outcomes. The authors provide a new direction for studying the behavior of plasma particles under the influence of laser light.

Originality/value: Science has recently produced many advancements for studying and analyzing physical processes, which exist everywhere in the real world. Given these developments, it is no longer sufficient to study only the first-order time evolution of plasma particles. One needs to be more precise and move toward deeper descriptions, that is, macroscopic and microscopic time-evolution scales; a large gap remains in studying time evolution in this manner. The presented study is an advanced and efficient way to take the problem in these new directions. The capability of the proposed algorithm, and of the model with fractional concepts, may encourage readers to extend it to other settings.
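The abstract does not give the authors' discretization. As an illustrative sketch of time-fractional evolution (not the paper's Vlasov–Maxwell solver), the classical L1 scheme for a Caputo derivative of order 0 < alpha <= 1, applied to the scalar relaxation equation D_t^alpha u = -lam * u, can be written as:

```python
import math

def l1_caputo_step(history, dt, alpha, lam):
    """One implicit step of D_t^alpha u = -lam*u using the L1 scheme
    for the Caputo derivative, 0 < alpha <= 1. `history` holds
    u_0, ..., u_{n-1}; the new value u_n is returned."""
    n = len(history)
    c = dt ** (-alpha) / math.gamma(2 - alpha)
    # memory term: weighted sum over all past increments (the hallmark
    # of fractional time evolution -- every step sees the full history)
    mem = 0.0
    for j in range(1, n):
        b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)
        mem += b * (history[n - j] - history[n - j - 1])
    # solve c*(u_n - u_{n-1}) + c*mem = -lam*u_n for u_n
    return (c * history[-1] - c * mem) / (c + lam)
```

For alpha = 1 the memory weights vanish and the scheme reduces to backward Euler, which gives a cheap sanity check against the exponential decay exp(-lam*t). The growing memory term also illustrates the time- and memory-management issue the abstract mentions: naive implementations cost O(n) work per step.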


Author(s): Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, ...

This work is based on the seminar titled ‘Resiliency in Numerical Algorithm Design for Extreme Scale Simulations’ held March 1–6, 2020, at Schloss Dagstuhl, which was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds at the cost of an enormous amount of resources. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements?
While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in the case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically.
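No specific scheme from the seminar is reproduced here. As a minimal sketch of the application-level checkpoint/rollback idea the article discusses (the fault model and all names are illustrative; real systems checkpoint to burst buffers or remote storage rather than a local deep copy), an iterative computation with periodic in-memory checkpointing might look like:

```python
import copy
import random

def run_with_checkpoints(step, state, n_iters, interval, fault_prob, rng):
    """Run `step` n_iters times; snapshot `state` every `interval`
    iterations and roll back to the last snapshot whenever a
    (simulated) detected fault occurs."""
    checkpoint = copy.deepcopy(state)  # snapshot at iteration 0
    rollbacks = 0
    i = 0
    while i < n_iters:
        if rng.random() < fault_prob:       # simulated detected fault
            state = copy.deepcopy(checkpoint)
            i = (i // interval) * interval  # replay from last checkpoint
            rollbacks += 1
            continue
        state = step(state)
        i += 1
        if i % interval == 0:
            checkpoint = copy.deepcopy(state)
    return state, rollbacks
```

The checkpoint interval embodies the trade-off the article describes: frequent snapshots bound the amount of replayed work but add overhead, and once the mean time between failures drops below the recovery time, progress stalls regardless of the interval chosen.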


2021, Vol 2131 (5), pp. 052075
Author(s): M Khudjaev, A Rakhimov

Abstract: The topic of this research is the modeling of gas flow in wells. The subject of the study is the determination of the dynamic parameters of gas in a gas well, taking into account changes in ambient temperature and gravity. Mathematical and numerical models of gas flow in a gas well are constructed, and a numerical algorithm to determine the gas pressure in the well is built. This algorithm makes it possible to study the state of production and injection wells under varying conditions at the wellhead and at the lower end of the well. The research methods are based on the energy equation of the transported gas and the mass conservation equation, which are the basic equations of gas flow, together with methods of numerical and mathematical modeling. Numerical and mathematical models of gas flow in a gas well are obtained that account for changes in ambient temperature and gravity. A numerical algorithm and a program were built to determine the gas-dynamic characteristics of wells; the computational process is based on the “cycle in cycle” principle. Provision was made to study the state of production and injection wells under varying conditions at the wellhead and at the bottom end of the well.
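The authors' model and “cycle in cycle” algorithm are not given in the abstract. As a minimal sketch of one ingredient of such a computation, assuming an ideal gas and a prescribed linear temperature profile (both illustrative assumptions, with methane-like constants), the static pressure distribution along a gas column under gravity can be marched numerically from the wellhead:

```python
import math

# Illustrative constants: ideal gas, methane-like molar mass
R = 8.314   # J/(mol*K), universal gas constant
M = 0.016   # kg/mol, molar mass
G = 9.81    # m/s^2, gravitational acceleration

def pressure_profile(p_wellhead, depth, n, t_top, t_bottom):
    """March dp/dz = rho*g = p*M*g/(R*T(z)) downward over `depth` metres
    in n explicit Euler steps, with temperature varying linearly from
    t_top at the wellhead to t_bottom at the lower end (both in K)."""
    dz = depth / n
    p = p_wellhead
    profile = [p]
    for i in range(n):
        z = (i + 0.5) * dz                   # midpoint of the segment
        t = t_top + (t_bottom - t_top) * z / depth
        p += p * M * G / (R * t) * dz        # hydrostatic pressure gain
        profile.append(p)
    return profile
```

In the isothermal special case this marching reproduces the barometric formula p(z) = p0 * exp(M*g*z/(R*T)), which provides a convenient check before the temperature profile is switched on.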

