Long-time calculation of the thermal magnetization reversal using Metropolis Monte Carlo

2002 ◽  
Vol 242-245 ◽  
pp. 1052-1056 ◽  
Author(s):  
O.A. Chubykalo ◽  
J. Kauffman ◽  
B. Lengsfield ◽  
R. Smirnov-Rueda

2018 ◽  
Vol 4 ◽  
pp. 45
Author(s):  
Julien Gaillet ◽  
Thomas Bonaccorsi ◽  
Gilles Noguere ◽  
Guillaume Truchet

Evaluating uncertainties on nuclear parameters such as reactivity is a major issue for the design of nuclear reactors. These uncertainties mainly come from the lack of knowledge of nuclear and technological data. Today, the common method used to propagate nuclear data uncertainties is Total Monte Carlo [1], but this method suffers from long computation times. Moreover, it requires as many calculations as there are uncertainties sought. Another method for propagating nuclear data uncertainties consists of using standard perturbation theory (SPT) to calculate the sensitivity of reactivity to the desired nuclear data. In such a method, sensitivities are combined with a priori nuclear data covariance matrices such as the COMAC set developed by CEA. The goal of this work is to calculate sensitivities by SPT with the full-core diffusion code CRONOS2 in order to propagate uncertainties at the core level. In this study, COMAC nuclear data uncertainties have been propagated on the BEAVRS benchmark using a two-step APOLLO2/CRONOS2 scheme, where APOLLO2 is the lattice code used to solve the Boltzmann equation within assemblies using a high number of energy groups, and CRONOS2 is the code solving the 3D full-core diffusion equation using only four energy groups. A module implementing the SPT already exists in the APOLLO2 code, but its computational cost would be too expensive in 3D on the whole core. Consequently, an equivalent procedure has been created in the CRONOS2 code to allow full-core uncertainty propagation. The main interest of this procedure is to compute reactivity sensitivities within a reduced turnaround time for a 3D modeled core, even after fuel depletion. In addition, it gives access to all sensitivities by isotope, reaction and energy group in a single calculation. Reactivity sensitivities calculated by this procedure with four energy groups are compared to reference sensitivities calculated by the iterated fission probability (IFP) method in a Monte Carlo code.
For the purpose of these tests, dedicated covariance matrices have been created by condensing the COMAC matrix from 49 to 4 groups. In conclusion, the sensitivities calculated by CRONOS2 agree with those calculated by the IFP method, which validates the calculation procedure and allows analyses to be done quickly. In addition, the reactivity uncertainty calculated by this method is close to the values found for this type of reactor.
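The uncertainty propagation described above rests on the standard "sandwich rule": the reactivity variance is S^T C S, where S is the vector of sensitivities (by isotope, reaction and energy group) and C the a priori covariance matrix. A minimal sketch in Python, with illustrative 4-group numbers (not values from the paper):

```python
import numpy as np

def reactivity_uncertainty(S, C):
    """Return the 1-sigma reactivity uncertainty from a sensitivity
    vector S and a covariance matrix C via the sandwich rule
    var(rho) = S^T C S."""
    S = np.asarray(S, dtype=float)
    C = np.asarray(C, dtype=float)
    return float(np.sqrt(S @ C @ S))

# Illustrative 4-group example: relative sensitivities and a diagonal
# relative covariance matrix (a real COMAC block has off-diagonal terms).
S = np.array([0.10, 0.25, 0.40, 0.25])
C = np.diag([1e-4, 4e-4, 1e-4, 9e-4])
print(reactivity_uncertainty(S, C))
```

With off-diagonal covariance terms the same one-line formula applies; the condensation from 49 to 4 groups mentioned above only changes the size of S and C.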


1987 ◽  
Vol 184 ◽  
pp. 123-155 ◽  
Author(s):  
Robert Krasny

Two vortex-sheet evolution problems arising in aerodynamics are studied numerically. The approach is based on desingularizing the Cauchy principal value integral which defines the sheet's velocity. Numerical evidence is presented which indicates that the approach converges with respect to refinement in the mesh-size and the smoothing parameter. For elliptic loading, the computed roll-up is in good agreement with Kaden's asymptotic spiral at early times. Some aspects of the solution's instability to short-wavelength perturbations, for a small value of the smoothing parameter, are inferred by comparing calculations performed with different levels of computer round-off error. The tip vortices’ deformation, due to their mutual interaction, is shown in a long-time calculation. Computations for a simulated fuselage-flap configuration show a complicated process of roll-up, deformation and interaction involving the tip vortex and the inboard neighbouring vortices.
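The desingularization named above replaces the singular Biot-Savart kernel 1/(z_j - z_k) by a smoothed form with parameter delta. A minimal sketch for a discrete point-vortex approximation (the markers and circulations below are generic, not the paper's sheet data):

```python
import numpy as np

def blob_velocity(z, gamma, delta):
    """Complex velocity u + i*v induced at each marker z[j] by point
    vortices of circulation gamma[k], with the singular kernel
    1/(z_j - z_k) replaced by the smoothed form
    conj(z_j - z_k) / (|z_j - z_k|**2 + delta**2)."""
    dz = z[:, None] - z[None, :]
    np.fill_diagonal(dz, 1.0)                 # placeholder, avoids 0/0
    kernel = np.conj(dz) / (np.abs(dz) ** 2 + delta ** 2)
    np.fill_diagonal(kernel, 0.0)             # no self-induction
    w = (gamma[None, :] * kernel).sum(axis=1) / (2j * np.pi)  # u - i*v
    return np.conj(w)

# Sanity check: two equal point vortices a unit distance apart; with
# delta = 0 each advects the other at speed Gamma / (2*pi*d).
z = np.array([0.0 + 0.0j, 1.0 + 0.0j])
gamma = np.array([1.0, 1.0])
v = blob_velocity(z, gamma, delta=0.0)
```

Taking delta > 0 bounds the kernel everywhere, which is what permits the long-time roll-up computations discussed above; the abstract's convergence study then refines both the mesh and delta together.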


2019 ◽  
Vol 25 (4) ◽  
pp. 329-340 ◽  
Author(s):  
Preston Hamlin ◽  
W. John Thrasher ◽  
Walid Keyrouz ◽  
Michael Mascagni

Abstract One method of computing the electrostatic energy of a biomolecule in a solution uses a continuum representation of the solution via the Poisson–Boltzmann equation. This can be solved in many ways, and we consider a Monte Carlo method of our design that combines the Walk-on-Spheres and Walk-on-Subdomains algorithms. In the course of examining the Monte Carlo implementation of this method, an issue was discovered in the Walk-on-Subdomains portion of the algorithm which caused the algorithm to sometimes take an abnormally long time to complete. As the problem occurs when a walker repeatedly oscillates between two subdomains, it is something that could cause a large increase in runtime for any method that used a similar algorithm. This issue is described in detail and a potential solution is examined.
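The Walk-on-Spheres component of such a method takes only a few lines: from the current point, jump to a uniformly random point on the largest sphere inscribed in the domain, and stop once within epsilon of the boundary. A minimal sketch on the unit disk with Laplace boundary data (an illustrative stand-in for the Poisson–Boltzmann setting; function names are ours):

```python
import math
import random

def walk_on_spheres(x, y, g, eps=1e-3, rng=random):
    """One Walk-on-Spheres sample of the harmonic function with
    boundary data g on the unit disk, started at (x, y)."""
    while True:
        r = 1.0 - math.hypot(x, y)       # distance to the boundary circle
        if r < eps:
            n = math.hypot(x, y)
            return g(x / n, y / n)        # snap to the nearest boundary point
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)

# Sanity check: u(x, y) = x is harmonic, so the Monte Carlo average at
# (0.5, 0) should approach 0.5.
rng = random.Random(0)
est = sum(walk_on_spheres(0.5, 0.0, lambda bx, by: bx, rng=rng)
          for _ in range(4000)) / 4000
```

The pathology described in the abstract arises in the Walk-on-Subdomains part, where a walker crossing an internal interface can bounce between two subdomains; the single-domain loop above always contracts toward the boundary and so terminates quickly.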


10.14311/1041 ◽  
2008 ◽  
Vol 48 (4) ◽  
Author(s):  
K. Frydrýšek

This paper focuses on a numerical analysis of the hard rock (ore) disintegration process. The bit moves and sinks into the hard rock (mechanical contact with friction between the ore and the cutting bit) and subsequently disintegrates it. The disintegration (i.e. the stress-strain relationship, contact forces, reaction forces and fracture of the ore) is solved via the FEM (MSC.Marc/Mentat software) and the SBRA (Simulation-Based Reliability Assessment) method (Monte Carlo simulations, Anthill and Mathcad software). The ore is disintegrated by deactivating the finite elements which satisfy the fracture condition. The material of the ore (i.e. yield stress, fracture limit, Young’s modulus and Poisson’s ratio) is given by bounded histograms (i.e. stochastic inputs which better describe reality). The results (reaction forces in the cutting bit) are therefore also stochastic quantities, and they are compared with experimental measurements. Application of the SBRA method in this area is a modern and innovative trend in mechanics. However, it takes a long time to solve this problem (due to material and structural nonlinearities, the large number of elements, many iteration steps and many Monte Carlo simulations). Parallel computers were therefore used to handle the large computational needs of this problem. 
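The "bounded histogram" input idea central to SBRA can be sketched independently of the FEM: each Monte Carlo trial draws the material properties from truncated distributions and feeds one sampled set into the deterministic model. The truncation bounds and the toy linear response below are illustrative placeholders, not the paper's model (the actual inner solve is an MSC.Marc FEM run):

```python
import random

def bounded_normal(mu, sigma, lo, hi, rng):
    """Rejection-sample a normal variate truncated to [lo, hi],
    i.e. a simple bounded stochastic input."""
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

rng = random.Random(1)
trials = 2000
forces = []
for _ in range(trials):
    yield_stress = bounded_normal(300.0, 20.0, 250.0, 350.0, rng)    # MPa
    fracture_limit = bounded_normal(450.0, 30.0, 380.0, 520.0, rng)  # MPa
    # Toy stand-in for the FEM solve: reaction force as a linear
    # function of the sampled strengths (illustrative coefficients).
    forces.append(0.5 * yield_stress + 0.1 * fracture_limit)

mean_force = sum(forces) / trials
```

The output of such a loop is itself a histogram of reaction forces, which is what the abstract compares against experimental measurements; the bounds guarantee no physically impossible input ever enters a trial.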


2019 ◽  
Vol 100 (8) ◽  
Author(s):  
Alice Moutenet ◽  
Priyanka Seth ◽  
Michel Ferrero ◽  
Olivier Parcollet

2020 ◽  
Vol 117 (27) ◽  
pp. 15394-15396
Author(s):  
Timothy W. Sirk

The chordless cycle sizes of spatially embedded networks are demonstrated to follow an exponential growth law similar to random graphs if the number of nodes N_x is below a critical value N*. For covalent polymer networks, increasing the network size, as measured by the number of cross-link nodes, beyond N* results in a crossover to a new regime in which the characteristic size of the chordless cycles h* no longer increases. From this result, the onset and intensity of finite-size effects can be predicted from measurement of h* in large networks. Although such information is largely inaccessible with experiments, the agreement of simulation results from molecular dynamics, Metropolis Monte Carlo, and kinetic Monte Carlo suggests the crossover is a fundamental physical feature which is insensitive to the details of the network generation. These results show random graphs as a promising model to capture structural differences in confined physical networks.

