Comparative Analysis of Woods-Saxon and Yukawa Model Nuclear Potentials

Author(s):  
O.S.K.S. Sastri ◽  
Aditi Sharma ◽  
Swapna Gora ◽  
Richa Sharma

In this paper, we model the nuclear potential using the Woods-Saxon and Yukawa interactions as the mean field in which each nucleon experiences a central force due to the rest of the nucleons. The single-particle energy states are obtained by solving the time-independent Schrödinger equation using the matrix diagonalization method with infinite-spherical-well wave functions as the basis. The best-fit model parameters are obtained using a variational Monte Carlo algorithm in which the relative mean-squared error, termed the chi-squared value, is minimized. The universal parameters obtained using the Woods-Saxon potential match literature-reported data, giving chi-squared values of 0.066 for neutron states and 0.069 for proton states, whereas the Yukawa potential yields chi-squared values of 1.98 and 1.57 for neutron and proton states, respectively. To further assess the performance of the two interaction potentials, the model parameters have been optimized for three different groups: light nuclei (¹⁶O to ⁵⁶Ni), heavy nuclei (¹⁰⁰Sn to ²⁰⁸Pb) and all nuclei (¹⁶O to ²⁰⁸Pb). The Yukawa model performs reasonably well for light nuclei but does not give satisfactory results for the other two groups, while the Woods-Saxon potential gives satisfactory results for all magic nuclei across the periodic table.
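
The two mean-field forms and the basis-diagonalization step described above are easy to prototype. Below is a minimal Python sketch for s-wave states only, with illustrative depth, radius and diffuseness values rather than the paper's fitted parameters, and with the spin-orbit and Coulomb terms omitted:

```python
# Minimal sketch (not the authors' code): s-wave single-particle states from a
# Woods-Saxon or Yukawa mean field, diagonalized in an infinite-spherical-well basis.
import numpy as np

HBARC = 197.327   # hbar*c in MeV fm
MN = 939.0        # nucleon mass in MeV

def woods_saxon(r, V0=50.0, R0=1.25 * 208 ** (1 / 3), a=0.65):
    # Woods-Saxon mean field: flat interior, diffuse surface.
    return -V0 / (1.0 + np.exp((r - R0) / a))

def yukawa(r, V0=50.0, mu=0.7):
    # Screened Coulomb-like (Yukawa) form; diverges gently at the origin.
    return -V0 * np.exp(-mu * r) / (mu * r)

def sp_energies(V, R=20.0, nbasis=40, npts=2000):
    """Diagonalize H in the basis u_n(r) = sqrt(2/R) sin(n*pi*r/R) (l = 0)."""
    r = np.linspace(1e-6, R, npts)
    n = np.arange(1, nbasis + 1)
    u = np.sqrt(2.0 / R) * np.sin(np.outer(n, r) * np.pi / R)
    T = np.diag((HBARC * n * np.pi / R) ** 2 / (2.0 * MN))   # kinetic energies
    Vmat = np.trapz(u[:, None, :] * V(r) * u[None, :, :], r, axis=-1)
    return np.linalg.eigvalsh(T + Vmat)

print(sp_energies(woods_saxon)[:3])  # lowest s states; bound levels are negative
print(sp_energies(yukawa)[:3])
```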

2019 ◽  
Vol 52 (1) ◽  
pp. 193-200 ◽  
Author(s):  
Andrew R. J. Nelson ◽  
Stuart W. Prescott

refnx is a model-based neutron and X-ray reflectometry data analysis package written in Python. It is cross-platform and has been tested on Linux, macOS and Windows. Its graphical user interface is browser-based, through a Jupyter notebook. Model construction is modular, being composed from a series of components that each describe a subset of the interface, parameterized in terms of physically relevant parameters (volume fraction of a polymer, lipid area per molecule etc.). The model and data are used to create an objective, which is used to calculate the residuals, log-likelihood and log-prior probabilities of the system. Objectives are combined to perform co-refinement of multiple data sets and mixed-area models. Prior knowledge of parameter values is encoded as probability distribution functions or bounds on all parameters in the system. Additional prior probability terms can be defined for sets of components, over and above those available from the parameters alone. Algebraic parameter constraints are available. The software offers a choice of fitting approaches, including least-squares (global and gradient-based optimizers) and a Bayesian approach using a Markov chain Monte Carlo algorithm to investigate the posterior distribution of the model parameters. The Bayesian approach is useful for examining parameter covariances, model selection and variability in the resulting scattering length density profiles. The package is designed to facilitate reproducible research; its use in Jupyter notebooks, and subsequent distribution of those notebooks as supporting information, permits straightforward reproduction of analyses.
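
For readers unfamiliar with the workflow the abstract describes, a hedged sketch of a typical refnx session follows, based on the package's documented public API; the data file name and layer values are invented for illustration:

```python
# Hedged sketch of a refnx session: slab model -> objective -> fit -> MCMC sample.
from refnx.dataset import ReflectDataset
from refnx.reflect import SLD, ReflectModel
from refnx.analysis import Objective, CurveFitter

data = ReflectDataset("example.dat")          # hypothetical reflectivity file

air = SLD(0.0, name="air")
sio2 = SLD(3.47, name="SiO2")
si = SLD(2.07, name="Si")

# Slab model: thickness (A) and roughness (A) per layer; bounds encode priors.
sio2_layer = sio2(15.0, 3.0)
sio2_layer.thick.setp(bounds=(5.0, 30.0), vary=True)

structure = air | sio2_layer | si(0.0, 3.0)
model = ReflectModel(structure, bkg=1e-7, scale=1.0)

objective = Objective(model, data)            # residuals, log-likelihood, log-prior
fitter = CurveFitter(objective)
fitter.fit("differential_evolution")          # global least-squares optimizer
fitter.sample(400)                            # MCMC over the posterior
print(objective)
```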


2020 ◽  
Vol 17 (173) ◽  
pp. 20200886
Author(s):  
L. Mihaela Paun ◽  
Mitchel J. Colebank ◽  
Mette S. Olufsen ◽  
Nicholas A. Hill ◽  
Dirk Husmeier

This study uses Bayesian inference to quantify the uncertainty of model parameters and haemodynamic predictions in a one-dimensional pulmonary circulation model based on an integration of mouse haemodynamic and micro-computed tomography imaging data. We emphasize an often neglected, though important, source of uncertainty: in the mathematical model form due to the discrepancy between the model and reality, and in the measurements due to a wrong noise model (jointly called 'model mismatch'). We demonstrate that minimizing the mean squared error between the measured and the predicted data (the conventional method) in the presence of model mismatch leads to biased and overly confident parameter estimates and haemodynamic predictions. We show that our proposed method allowing for model mismatch, which we represent with Gaussian processes, corrects the bias. Additionally, we compare a linear and a nonlinear wall model, as well as models with different vessel stiffness relations. We use formal model selection analysis based on the Watanabe-Akaike information criterion to select the model that best predicts the pulmonary haemodynamics. Results show that the nonlinear pressure–area relationship with stiffness dependent on the unstressed radius best predicts the data measured in a control mouse.
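
The key idea of allowing for model mismatch can be illustrated compactly: instead of assuming independent noise (which reduces to a plain mean-squared-error fit), the residuals are given a Gaussian-process covariance that absorbs the systematic discrepancy. The following is an illustrative sketch under generic assumptions, not the authors' haemodynamics code:

```python
# Sketch of GP-represented model mismatch: data = physical_model(theta) + GP + noise.
import numpy as np

def physical_model(theta, t):
    # Stand-in for the 1-D haemodynamics model: a damped pressure waveform.
    a, b = theta
    return a * np.exp(-b * t) * np.cos(2 * np.pi * t)

def rbf_kernel(t1, t2, amp=1.0, ell=0.3):
    return amp**2 * np.exp(-0.5 * (t1[:, None] - t2[None, :]) ** 2 / ell**2)

def log_likelihood(theta, t, y, sigma=0.1):
    # Marginal likelihood: residuals ~ N(0, K + sigma^2 I); the GP covariance K
    # replaces the iid-noise assumption, so theta is not biased by the mismatch.
    r = y - physical_model(theta, t)
    K = rbf_kernel(t, t) + sigma**2 * np.eye(len(t))
    sign, logdet = np.linalg.slogdet(K)
    return -0.5 * (r @ np.linalg.solve(K, r) + logdet + len(t) * np.log(2 * np.pi))

t = np.linspace(0, 2, 100)
y = physical_model((1.0, 0.5), t) + 0.05 * np.sin(8 * t)  # systematic mismatch
print(log_likelihood((1.0, 0.5), t, y))
```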


2020 ◽  
Vol 2 (6) ◽  
Author(s):  
E. S. William ◽  
J. A. Obu ◽  
I. O. Akpan ◽  
E. A. Thompson ◽  
E. P. Inyang

The analytical solutions of the radial D-dimensional Schrödinger equation for the Yukawa potential plus spin-orbit and Coulomb interaction terms are presented within the framework of the Nikiforov-Uvarov method, using the Greene-Aldrich approximation to the centrifugal barrier. The energy eigenvalues obtained are employed to calculate the single-particle energy spectra of ⁵⁶Ni and ¹¹⁶Sn for distinct quantum states. We have also obtained the corresponding normalized wave functions for these magic nuclei, expressed in terms of Jacobi polynomials. In the limit of vanishing spin-orbit and Coulomb interaction terms, the energy spectrum reduces precisely to that of the pure Yukawa potential field at any arbitrary state.
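
The Greene-Aldrich scheme referenced here replaces the centrifugal term with an exponential form so that the radial equation becomes tractable by the Nikiforov-Uvarov method. In its standard form (quoted from the general literature, not from this paper), for a screening parameter α it reads

```latex
\frac{1}{r^{2}} \;\approx\; \frac{\alpha^{2}}{\left(1 - e^{-\alpha r}\right)^{2}},
\qquad \alpha r \ll 1,
```

which is accurate for short-range potentials such as the Yukawa interaction.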


2018 ◽  
Vol 218 ◽  
pp. 01007 ◽  
Author(s):  
Erwin Nashrullah ◽  
Abdul Halim

Analysing and simulating the dynamic behaviour of a home power system, as part of a community-based energy system, requires load models of either aggregate or disaggregated power use. Moreover, in the context of home energy efficiency, a specific and accurate residential load model can help system designers develop tools for reducing energy consumption effectively. In this paper, a new method for developing two types of residential polynomial load model is presented. The model parameters are computed using a median filter and least-squares estimation, implemented in MATLAB. We use the AMPds data set, which has a 1-minute sampling interval, to show the effectiveness of the proposed method. The model is evaluated through the root mean-squared error between the original data and the model output. The simulation results indicate that the proposed model is sufficient to help system designers analyse home power use.
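
The two-step estimation the abstract describes, filtering followed by a least-squares polynomial fit, can be sketched in a few lines. The window length, polynomial degree and synthetic data below are illustrative choices, not the paper's (which uses MATLAB and the real AMPds measurements):

```python
# Sketch: median filter to suppress switching spikes, then least-squares polynomial.
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(0)
t = np.arange(1440) / 60.0                      # one day at 1-minute sampling, hours
load = 400 + 150 * np.sin(2 * np.pi * t / 24)   # synthetic stand-in for AMPds data
load += rng.normal(0, 20, t.size)
load[rng.integers(0, t.size, 30)] += 800        # appliance switching spikes

smoothed = medfilt(load, kernel_size=15)        # step 1: median filtering

degree = 5                                      # step 2: least-squares polynomial
coeffs = np.polyfit(t, smoothed, degree)
model = np.polyval(coeffs, t)

rmse = np.sqrt(np.mean((load - model) ** 2))    # evaluation metric from the paper
print(f"RMSE = {rmse:.1f} W")
```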


2020 ◽  
Vol 77 (3) ◽  
pp. 439-450 ◽  
Author(s):  
Andrea M.J. Perreault ◽  
Nan Zheng ◽  
Noel G. Cadigan

Response-selective stratified sampling (RSSS) has been well studied in the statistical literature; however, the application of the resulting statistical theories and methods to a specific case of RSSS in fisheries studies, namely length-stratified age sampling (LSAS), is inadequate. We review nine estimation approaches for RSSS found in the statistical and fisheries science literature in terms of three sampling components: the first-phase length composition sample, the second-phase age composition sample, and the sampling scheme. We compare their performance in terms of relative root mean squared error (RRMSE) for von Bertalanffy (vonB) growth model parameter estimation using an extensive simulation study. We further demonstrate the methods by applying the two best-performing approaches and the most popular one to estimate the vonB model parameters for American plaice (Hippoglossoides platessoides) in NAFO Divisions 3LNO. Our simulations demonstrate that mis-specifying one or more of the three sampling components increases the RRMSEs, and this effect is magnified when the age distribution is incorrectly specified. The optimal approach for data based on LSAS is the empirical proportion approach, and we recommend this method for growth parameter estimation from LSAS data.
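
For context, the von Bertalanffy growth curve being estimated is L(a) = L∞(1 - exp(-k(a - a₀))). A naive nonlinear least-squares fit on synthetic data is sketched below; the paper's contribution is precisely that such a naive fit is biased under length-stratified sampling, and its estimators correct for the sampling design:

```python
# Hedged baseline illustration: fitting the von Bertalanffy growth curve to
# synthetic age-length pairs, ignoring the stratified sampling design.
import numpy as np
from scipy.optimize import curve_fit

def vonb(age, linf, k, a0):
    return linf * (1.0 - np.exp(-k * (age - a0)))

rng = np.random.default_rng(1)
ages = rng.uniform(1, 20, 300)
lengths = vonb(ages, 60.0, 0.2, -0.5) + rng.normal(0, 2.5, ages.size)

popt, pcov = curve_fit(vonb, ages, lengths, p0=[50.0, 0.3, 0.0])
print("Linf, k, a0 =", popt)
```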


Author(s):  
Gong Li ◽  
Jing Shi

Reliable short-term predictions of wind power production are critical for both wind farm operations and power system management, where the time scales can vary on the order of seconds, minutes, hours and days. This comprehensive study aims to quantitatively evaluate and compare the performance of different Box & Jenkins models and backpropagation (BP) neural networks in forecasting wind power production one hour ahead. The data employed are the hourly power outputs of an N.E.G. Micon 900-kilowatt wind turbine installed east of Valley City, North Dakota. For each type of Box & Jenkins model tested, the model parameters are estimated to determine the corresponding optimal model. For the BP network models, different input layer sizes, hidden layer sizes and learning rates are examined. The evaluation metrics are mean absolute error and root mean squared error. The persistence model is also employed for comparison. The results show that, in general, the best-performing Box & Jenkins and BP models both provide better forecasts than the persistence model, while the difference between the Box & Jenkins and BP models themselves is insignificant.
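
The comparison against the persistence baseline can be sketched compactly. The series, train/test split and ARMA order below are illustrative assumptions, not the study's selected model or its turbine data:

```python
# Sketch: one-hour-ahead ARIMA (Box & Jenkins) forecasts vs the persistence model,
# evaluated by rolling-origin RMSE on a synthetic hourly power series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
hours = np.arange(480)
power = 450 + 200 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 40, hours.size)

train, test = list(power[:432]), power[432:]        # hold out the last 48 hours
err_arima, err_persist = [], []
for actual in test:
    fit = ARIMA(train, order=(2, 0, 1)).fit()       # illustrative ARMA(2,1) order
    err_arima.append(fit.forecast(1)[0] - actual)   # one-hour-ahead forecast
    err_persist.append(train[-1] - actual)          # persistence: next = last
    train.append(actual)                            # roll the origin forward

rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
print("RMSE  ARIMA:", rmse(err_arima), " persistence:", rmse(err_persist))
```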


2019 ◽  
Vol 28 (06) ◽  
pp. 1950045 ◽  
Author(s):  
B. Nandana ◽  
R. Rahul ◽  
S. Mahadevan

The Q-value and half-life of elements in the alpha decay chains of [Formula: see text]117, [Formula: see text]117, [Formula: see text]116 and [Formula: see text]116 were calculated using the nuclear potential generated by a double-folding procedure and the WKB method, treating alpha decay as a tunneling problem. The nuclear potential was parameterized using the Woods–Saxon form. Using this approach, the Q-value and half-life of the next heaviest element in the alpha decay chain of element [Formula: see text]116 are predicted. It is proposed to use this method to predict the Q-values and half-lives of other heavier elements in different alpha decay chains.
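
The tunneling calculation the abstract describes can be sketched as a WKB penetrability integral. The sketch below uses a bare Coulomb-plus-Woods-Saxon barrier with invented parameters in place of the paper's double-folding potential, and a typical assault frequency; it shows the structure of the method, not the paper's numbers:

```python
# Illustrative WKB half-life estimate for alpha decay as barrier tunneling.
import numpy as np
from scipy.integrate import quad

HBARC = 197.327          # hbar*c in MeV fm
AMU = 931.494            # atomic mass unit in MeV
E2 = 1.44                # e^2 in MeV fm

def potential(r, Zd=114, V0=120.0, R0=7.5, a=0.6):
    coulomb = 2 * Zd * E2 / r                    # alpha (Z=2) on daughter Zd
    nuclear = -V0 / (1 + np.exp((r - R0) / a))   # Woods-Saxon parameterization
    return coulomb + nuclear

def wkb_half_life(Q=10.0, mu=4.0 * AMU, r_in=8.0, r_out=None):
    if r_out is None:                            # outer turning point: V_C(r) = Q
        r_out = 2 * 114 * E2 / Q
    k = lambda r: np.sqrt(max(2 * mu * (potential(r) - Q), 0.0)) / HBARC
    G, _ = quad(k, r_in, r_out)                  # WKB action integral
    P = np.exp(-2 * G)                           # barrier penetration probability
    nu = 1e21                                    # assault frequency in 1/s (typical)
    return np.log(2) / (nu * P)

print(f"T1/2 ~ {wkb_half_life():.2e} s")
```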


2019 ◽  
Vol 16 (12) ◽  
pp. 1950190
Author(s):  
Saira Waheed

In this work, we develop some interesting models of the cosmos exhibiting anisotropic properties in the extended scalar-tensor theory. First, we consider the LRS Bianchi type I (BI) geometry filled with a magnetized bulk viscous cloud of strings as the matter content. We develop analytic solutions and explore the cosmological significance of some physical measures of interest, such as the cosmic volume, directional Hubble parameter, deceleration parameter, viscosity factor, particle energy density, shear and expansion scalars, and string tension density. Moreover, modified holographic Ricci dark energy is introduced in the anisotropic scenario to discuss the dynamics of anisotropic cosmic models. To construct exact cosmic solutions, we adopt a hybrid law for the scale factor as well as viable ansätze for the scalar field and its potential. The physical viability of the model parameters is discussed through graphical analysis. The physical analysis of both models shows that our results are in agreement with current observations and hence are cosmologically viable and promising.
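
For reference, the LRS Bianchi type I line element and the hybrid scale-factor law mentioned above usually take the forms (generic conventions assumed; the paper's exact ansatz may differ)

```latex
ds^{2} = dt^{2} - A^{2}(t)\,dx^{2} - B^{2}(t)\left(dy^{2} + dz^{2}\right),
\qquad
a(t) = a_{0}\, t^{\alpha} e^{\beta t},
```

where a = (AB²)^{1/3} is the mean scale factor and the hybrid law interpolates between power-law expansion (β = 0) and de Sitter-like exponential expansion (α = 0).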


2008 ◽  
Vol 15 (1) ◽  
pp. 221-232 ◽  
Author(s):  
A. J. Cannon ◽  
W. W. Hsieh

Abstract. Robust variants of nonlinear canonical correlation analysis (NLCCA) are introduced to improve performance on datasets with low signal-to-noise ratios, for example those encountered when making seasonal climate forecasts. The neural network model architecture of standard NLCCA is kept intact, but the cost functions used to set the model parameters are replaced with more robust variants. The Pearson product-moment correlation in the double-barreled network is replaced by the biweight midcorrelation, and the mean squared error (mse) in the inverse mapping networks can be replaced by the mean absolute error (mae). Robust variants of NLCCA are demonstrated on a synthetic dataset and are used to forecast sea surface temperatures in the tropical Pacific Ocean based on the sea level pressure field. Results suggest that adoption of the biweight midcorrelation can lead to improved performance, especially when a strong, common event exists in both predictor/predictand datasets. Replacing the mse by the mae leads to improved performance on the synthetic dataset, but not on the climate dataset except at the longest lead time, which suggests that the appropriate cost function for the inverse mapping networks is more problem dependent.
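
The biweight midcorrelation that replaces the Pearson correlation has a compact standard definition. The following sketch is a generic implementation of that textbook definition, not code from the paper, and shows its robustness to a gross outlier:

```python
# Biweight midcorrelation: median/MAD-based weights down-weight outliers to zero.
import numpy as np

def biweight_midcorrelation(x, y, c=9.0):
    def weighted_dev(v):
        med = np.median(v)
        mad = np.median(np.abs(v - med))          # median absolute deviation
        u = (v - med) / (c * mad)
        w = (1 - u**2) ** 2
        w[np.abs(u) >= 1] = 0.0                   # points beyond c*MAD get weight 0
        return (v - med) * w
    a, b = weighted_dev(x), weighted_dev(y)
    return np.sum(a * b) / (np.sqrt(np.sum(a**2)) * np.sqrt(np.sum(b**2)))

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 0.8 * x + rng.normal(scale=0.5, size=200)
y[0] = 50.0                                       # gross outlier
print(biweight_midcorrelation(x, y), np.corrcoef(x, y)[0, 1])  # robust vs Pearson
```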


2015 ◽  
Vol 30 (30) ◽  
pp. 1550150 ◽  
Author(s):  
F. Saidi ◽  
M. R. Oudih ◽  
M. Fellah ◽  
N. H. Allal

The cluster decay process is studied in the WKB approximation based on the unified fission model. The cluster is considered to be emitted by tunneling through a potential barrier taken as the sum of the Coulomb potential, the centrifugal potential and the modified Woods–Saxon (MWS) nuclear potential. The results of our calculations are compared to those obtained by other theoretical models as well as experimental data. It is shown that the unified fission model with the MWS nuclear potential can be successfully used to evaluate the cluster decay half-lives of heavy nuclei.
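
In the usual notation, the barrier described in the abstract is the sum (generic form; the paper's MWS parameterization is not reproduced here)

```latex
V(r) = V_{C}(r) + \frac{\hbar^{2}\,\ell(\ell+1)}{2\mu r^{2}} + V_{\mathrm{MWS}}(r),
```

where V_C is the Coulomb potential between the emitted cluster and the daughter nucleus, the second term is the centrifugal barrier for angular momentum ℓ and reduced mass μ, and V_MWS is the modified Woods-Saxon nuclear potential; the WKB half-life then follows from the penetrability through this barrier between the classical turning points.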

