CONDITIONS OF MONOTONE APPROXIMATION OF RAMSEY CURVES AND THEIR MODIFICATIONS

Author(s):  
Daria Kurnosenko ◽  
Volodymyr Savchuk ◽  
Halyna Tuluchenko

An algorithm has been developed for approximating experimental data with the Ramsey curve and its modifications; it guarantees a monotonic increase of the approximating function on the interval [0, ∞) and the existence of a given number of inflection points. The Ramsey curve belongs to the family of logistic curves that are widely used for modeling bounded growth processes in various subject fields. The classical Ramsey curve has two parameters and a constant left asymmetry. It is also known that its three-parameter modification allows displacement along the ordinate axis. Wider practical use of the Ramsey curve, with both two and more parameters, for approximating experimental dependences is restrained by the fact that the curve frequently loses its logistic shape when fitted without additional restrictions on the relationships between its parameters. The article discusses modifications of the Ramsey curve with three and five parameters. The first and second derivatives of the studied modifications of the Ramsey function have a special structure: they are products of polynomial and exponential functions. This allows using Sturm's theorem on the number of polynomial roots in a given interval to control the shape of the approximating curve. It is shown that as the number of parameters of the modified curve grows, the number of possible combinations of restrictions on the parameter values that preserve its logistic shape increases significantly. The approximation problem in this case is solved as a sequence of constrained global optimization problems with different constraints, from which the solution with the smallest approximation error is chosen. The accuracy of the estimated Ramsey curve parameters has also been studied as a function of the accuracy of the experimental data. To simulate measurement errors, values of a normally distributed random variable with zero mathematical expectation and different standard deviations for different series of computational experiments were added to the values of a deterministic sequence. The computational experiments showed a significant sensitivity of the Ramsey function parameter values to the measurement accuracy of the experimental data.
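
As a minimal illustration of the shape-control idea (a sketch assuming SymPy, not the authors' implementation), the following counts the real roots of a hypothetical polynomial factor of a derivative on [0, ∞) with a Sturm sequence; zero roots means the derivative keeps one sign there, so the curve remains monotone.

```python
# Sketch: count real roots of a polynomial on [0, oo) via a Sturm sequence.
# The polynomial below is a hypothetical stand-in for the polynomial factor
# of a Ramsey-curve derivative, not the authors' actual expression.
import sympy as sp

x = sp.symbols('x')
p = x**3 - 3*x**2 + 4*x + 2           # hypothetical polynomial factor

chain = sp.sturm(p)                   # Sturm sequence of p

def sign_changes(values):
    signs = [sp.sign(v) for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

at_zero = [q.subs(x, 0) for q in chain]
at_inf = [sp.Poly(q, x).LC() for q in chain]  # signs as x -> +oo follow the leading coefficients

n_roots = sign_changes(at_zero) - sign_changes(at_inf)
print(n_roots)  # 0: the polynomial (hence the derivative) keeps one sign on [0, oo)
```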

2018 ◽  
Author(s):  
Saman Salike ◽  
Nirav Bhatt

Abstract
Motivation: Thermodynamic analysis of biological reaction networks requires the availability of accurate and consistent values of Gibbs free energies of reaction and formation. These Gibbs energies can be measured directly via the careful design of experiments or can be computed from curated Gibbs free energy databases. However, the computed Gibbs free energies of reaction and formation do not satisfy the thermodynamic constraints, owing to the compounding effect of measurement errors in the experimental data. The propagation of these errors can lead to false predictions of pathway feasibility and to uncertainty in the estimation of thermodynamic parameters.
Results: This work proposes a data reconciliation framework for thermodynamically consistent estimation of Gibbs free energies of reaction, formation and group contributions from experimental data. In this framework, we formulate constrained optimization problems that reduce measurement errors and their effects on the estimation of Gibbs energies such that the thermodynamic constraints are satisfied. When a subset of Gibbs free energies of formation is unavailable, it is shown that the accuracy of the resulting estimates is better than that of existing empirical prediction methods. Moreover, we show that the estimation of group contributions can also be improved using this approach. Further, we provide guidelines based on this approach for performing systematic experiments to estimate unknown Gibbs formation energies.
Availability: The MATLAB code for executing the proposed algorithm is freely available from the GitHub repository: https://github.com/samansalike/
Contact: [email protected]
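
A minimal sketch of the underlying reconciliation idea, using a hypothetical stoichiometric matrix and measurements; the paper's framework (weighted measurement errors, group contributions, unmeasured formation energies) is considerably richer than this least-squares toy.

```python
# Sketch: reconcile measured reaction Gibbs energies so that a single set of
# formation energies reproduces them via dG_r = S^T dG_f (hypothetical data).
import numpy as np

# Rows = compounds A, B, C; columns = reactions A->B, B->C, A->C.
S = np.array([[-1.0,  0.0, -1.0],
              [ 1.0, -1.0,  0.0],
              [ 0.0,  1.0,  1.0]])

dGr_meas = np.array([-12.3, -4.9, -18.0])   # inconsistent: -12.3 + (-4.9) != -18.0

# Least-squares reconciliation: estimate formation energies that best explain
# the measurements, then recompute thermodynamically consistent reaction energies.
dGf_est, *_ = np.linalg.lstsq(S.T, dGr_meas, rcond=None)
dGr_rec = S.T @ dGf_est
print(dGr_rec)  # satisfies the cycle constraint dGr_rec[0] + dGr_rec[1] == dGr_rec[2]
```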


2021 ◽  
Vol 19 (1) ◽  
pp. 284-296
Author(s):  
Hye Kyung Kim

Abstract Many mathematicians have studied degenerate versions of quite a few special polynomials and numbers since Carlitz's work (Utilitas Math. 15 (1979), 51–88). Recently, Kim et al. studied the degenerate gamma random variables, discrete degenerate random variables and two-variable degenerate Bell polynomials associated with degenerate Poisson central moments, etc. This paper is divided into two parts. In the first part, we introduce a new type of degenerate Bell polynomials associated with degenerate Poisson random variables with parameter α > 0, called the fully degenerate Bell polynomials. We derive some combinatorial identities for the fully degenerate Bell polynomials related to the n-th moment of the degenerate Poisson random variable, special numbers and polynomials. In the second part, we consider the fully degenerate Bell polynomials associated with degenerate Poisson random variables with two parameters α > 0 and β > 0, called the two-variable fully degenerate Bell polynomials. We show their connection with the degenerate Poisson central moments, special numbers and polynomials.
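
For orientation only, the standard objects behind these constructions are recalled below in a common notation from the degenerate-calculus literature; the exact definitions used in the paper may differ in notation and normalization.

```latex
% Degenerate exponential function (Carlitz), for \lambda \neq 0:
e_{\lambda}^{x}(t) = (1+\lambda t)^{x/\lambda}, \qquad
e_{\lambda}(t) = e_{\lambda}^{1}(t), \qquad
\lim_{\lambda \to 0} e_{\lambda}^{x}(t) = e^{xt}.
% Classical Bell polynomials are generated by
\sum_{n \ge 0} \operatorname{Bel}_{n}(x)\,\frac{t^{n}}{n!} = e^{x(e^{t}-1)};
% degenerate Bell-type polynomials arise by replacing the exponentials with e_\lambda.
```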


Author(s):  
Stefano Massei

Abstract Various applications in numerical linear algebra and computer science are related to selecting the r × r submatrix of maximum volume contained in a given matrix A ∈ R^{n×n}. We propose a new greedy algorithm of cost O(n) for the case where A is symmetric positive semidefinite (SPSD), and we discuss its extension to related optimization problems such as the maximum ratio of volumes. In the second part of the paper we prove that any SPSD matrix admits a cross approximation built on a principal submatrix whose approximation error is bounded by (r+1) times the error of the best rank-r approximation in the nuclear norm. In the spirit of recent work by Cortinovis and Kressner we derive some deterministic algorithms which are capable of retrieving a quasi-optimal cross approximation at cost O(n^3).
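
As an illustration of the greedy volume idea (a sketch, not the O(n) algorithm proposed in the paper), a standard diagonal-pivoted partial Cholesky selects r indices of an SPSD matrix one at a time; each step picks the largest remaining Schur-complement diagonal entry.

```python
# Sketch: greedy index selection for an SPSD matrix A via diagonal-pivoted
# partial Cholesky. Costs O(n r^2) as written; assumes numerical rank >= r.
import numpy as np

def greedy_spsd_indices(A, r):
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()   # current Schur-complement diagonal
    L = np.zeros((n, r))                  # partial Cholesky factor
    idx = []
    for k in range(r):
        i = int(np.argmax(d))             # greedy pivot: largest residual diagonal
        idx.append(i)
        col = (A[:, i] - L[:, :k] @ L[i, :k]) / np.sqrt(d[i])
        L[:, k] = col
        d = d - col**2                    # rank-one update of the residual diagonal
        d[idx] = 0.0                      # never reselect a chosen index
    return idx, L

# A[np.ix_(idx, idx)] is the greedily chosen principal ("large volume") submatrix,
# and L @ L.T is the associated low-rank (cross) approximation of A.
```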


2019 ◽  
Vol 23 (Suppl. 6) ◽  
pp. 1901-1908
Author(s):  
Mehmet Gurcan ◽  
Arzu Demirelli

The distribution of the data is very important in all parametric methods used in applied statistics. More clearly, if the experimental data fit the theoretical distribution well, the results of parametric methods will be more efficient. The adaptability of experimental data to a theoretical distribution depends on the flexibility of the theoretical distribution used. If the flexibility of the theoretical distribution is sufficient, it can easily be used for experimental data. Most theoretical distributions have shape and location parameters. However, these two parameters are not always sufficient for the distribution to adapt to the experimental data. Therefore, theoretical distributions with high flexibility are needed in parametric methods, and obtaining new theoretical distributions that provide this feature is important for the literature. In this study, a new probability distribution with high flexibility has been obtained via the Richard link function. In the introduction, important information related to growth models and the Richard growth curve is given. Subsequently, some details about the Richard distribution and the wrapped distribution are given.
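
For reference, a common parameterization of the Richard (Richards) growth curve underlying the link function is shown below; parameterizations vary across sources and the paper's own form may differ.

```latex
% A common form of the Richard (Richards) growth curve:
y(t) = \frac{A}{\bigl(1 + \nu\, e^{-k(t - t_{0})}\bigr)^{1/\nu}},
% where A is the upper asymptote, k the growth rate and \nu > 0 a shape
% parameter; \nu = 1 recovers the logistic curve.
```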


Author(s):  
Bar Light

In multiperiod stochastic optimization problems, the future optimal decision is a random variable whose distribution depends on the parameters of the optimization problem. I analyze how the expected value of this random variable changes as a function of the dynamic optimization parameters in the context of Markov decision processes. I call this analysis stochastic comparative statics. I derive both comparative statics results and stochastic comparative statics results showing how the current and future optimal decisions change in response to changes in the single-period payoff function, the discount factor, the initial state of the system, and the transition probability function. I apply my results to various models from the economics and operations research literature, including investment theory, dynamic pricing models, controlled random walks, and comparisons of stationary distributions.
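
A toy illustration (hypothetical numbers, not the paper's model) of the kind of comparative statics question studied: in a two-state Markov decision process, the optimal first-period action switches as the discount factor grows.

```python
# Sketch: value iteration for a tiny MDP at two discount factors.
import numpy as np

# P[a, s, s']: transition probabilities; R[a, s]: single-period payoffs.
P = np.array([[[0.0, 1.0], [0.0, 1.0]],   # action 0: "invest" (move to state 1)
              [[1.0, 0.0], [0.0, 1.0]]])  # action 1: "consume" (stay put)
R = np.array([[0.0, 2.0],
              [1.0, 2.0]])

def optimal_policy(beta, iters=1000):
    V = np.zeros(2)
    for _ in range(iters):                # value iteration
        Q = R + beta * (P @ V)            # Q[a, s]
        V = Q.max(axis=0)
    return Q.argmax(axis=0)               # greedy action in each state

# In state 0 the optimal action switches from "consume" to "invest"
# once the discount factor exceeds 0.5.
print(optimal_policy(0.3), optimal_policy(0.9))
```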


Author(s):  
Jing Qiu ◽  
Jiguo Yu ◽  
Shujun Lian

In this paper, we propose a new non-smooth penalty function with two parameters for nonlinear inequality-constrained optimization problems. We also propose a twice continuously differentiable function that is a smoothing approximation of the non-smooth penalty function, and we define the corresponding smoothed penalty problem. A global solution of the smoothed penalty problem is proved to be an approximate global solution of the non-smooth penalty problem. Based on the smoothed penalty function, we develop an algorithm and prove that the sequence generated by the algorithm converges to the optimal solution of the original problem.
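
A generic sketch of a smoothed-penalty loop for an inequality-constrained problem, using a softplus smoothing of max(0, g(x)); this is for illustration only and is not the authors' two-parameter penalty function or their algorithm.

```python
# Sketch: solve min f(x) s.t. g(x) <= 0 by minimizing a smoothed penalty
# function while tightening the penalty and sharpening the smoothing.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2   # objective (hypothetical)
g = lambda x: x[0] + x[1] - 2.0                   # constraint g(x) <= 0

def smoothed_penalty_solve(x0, rho=1.0, eps=1.0, outer=8):
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        # eps * softplus(t / eps) is a smooth approximation of max(0, t)
        F = lambda x: f(x) + rho * eps * np.logaddexp(0.0, g(x) / eps)
        x = minimize(F, x, method="BFGS").x
        rho *= 4.0     # tighten the penalty
        eps *= 0.25    # sharpen the smoothing towards max(0, .)
    return x

print(smoothed_penalty_solve([0.0, 0.0]))  # approaches the constrained optimum (1.5, 0.5)
```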


2019 ◽  
Vol 09 (06) ◽  
pp. 1950046
Author(s):  
C. L. Wang

Two parameters are proposed as Jonscher indices, named after A. K. Jonscher for his pioneering contribution to the universal dielectric relaxation law. The time-domain universal dielectric relaxation law is then obtained from the asymptotic behavior of the dielectric response function and the relaxation function by replacing the parameters in Mittag–Leffler functions with the Jonscher indices. Relaxation types can be easily determined from experimental discharge-current data in barium stannate titanate once their Jonscher indices have been determined.
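
A small sketch of evaluating a Mittag–Leffler-type relaxation function by truncated series; how the two indices are identified with the Jonscher indices is the paper's contribution and is not reproduced here.

```python
# Sketch: phi(t) = E_alpha(-(t/tau)^alpha) via a truncated Mittag-Leffler series.
# The series is only numerically adequate for moderate arguments.
import numpy as np
from scipy.special import gammaln

def mittag_leffler(z, alpha, beta=1.0, kmax=120):
    k = np.arange(kmax)
    # sum_k z^k / Gamma(alpha*k + beta), with magnitudes handled in log space
    terms = np.sign(z)**k * np.exp(k * np.log(np.abs(z) + 1e-300) - gammaln(alpha*k + beta))
    return np.sum(terms)

alpha, tau = 0.7, 1.0                     # hypothetical index and relaxation time
t = np.linspace(0.01, 5.0, 6)
phi = [mittag_leffler(-(ti/tau)**alpha, alpha) for ti in t]
print(np.round(phi, 4))                   # slower-than-exponential (stretched) decay
```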


Information ◽  
2019 ◽  
Vol 10 (12) ◽  
pp. 390 ◽  
Author(s):  
Ahmad Hassanat ◽  
Khalid Almohammadi ◽  
Esra’a Alkafaween ◽  
Eman Abunawas ◽  
Awni Hammouri ◽  
...  

The genetic algorithm (GA) is an artificial intelligence search method that uses the process of evolution and natural selection theory and falls under the umbrella of evolutionary computing algorithms. It is an efficient tool for solving optimization problems. The interplay among GA parameters is vital for a successful GA search. Such parameters include the mutation and crossover rates as well as the population size, all of which are important issues in GAs. However, each GA operator has its own distinct influence, and the impact of these operators depends on their probabilities; it is difficult to predefine specific ratios for each parameter, particularly for the mutation and crossover operators. This paper reviews various methods for choosing mutation and crossover ratios in GAs. Next, we define new deterministic control approaches for crossover and mutation rates, namely Dynamic Decreasing of High Mutation ratio / Dynamic Increasing of Low Crossover ratio (DHM/ILC) and Dynamic Increasing of Low Mutation / Dynamic Decreasing of High Crossover (ILM/DHC). The dynamic nature of the proposed methods allows the ratios of both the crossover and mutation operators to be changed linearly during the search progress: DHM/ILC starts with a 100% ratio for mutation and 0% for crossover, and the mutation and crossover ratios then decrease and increase, respectively, so that by the end of the search process the ratios are 0% for mutation and 100% for crossover; ILM/DHC works the same way but in reverse. The proposed approach was compared with two predefined parameter-tuning methods, namely fifty-fifty crossover/mutation ratios and the most common approach of static ratios, such as a 0.03 mutation rate and a 0.9 crossover rate. The experiments were conducted on ten Traveling Salesman Problems (TSP). The experiments showed the effectiveness of the proposed DHM/ILC when dealing with small population sizes, while the proposed ILM/DHC was found to be more effective with large population sizes. In fact, both proposed dynamic methods outperformed the predefined methods in most of the cases tested.
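
The two deterministic schedules follow directly from the description above; a sketch returning (mutation rate, crossover rate) for a given generation:

```python
# Sketch of the two linear schedules; gen is the current generation index and
# max_gen the total number of generations.
def dhm_ilc(gen, max_gen):
    """Dynamic Decreasing of High Mutation / Increasing of Low Crossover."""
    progress = gen / max_gen
    return 1.0 - progress, progress       # (mutation_rate, crossover_rate)

def ilm_dhc(gen, max_gen):
    """Dynamic Increasing of Low Mutation / Decreasing of High Crossover."""
    progress = gen / max_gen
    return progress, 1.0 - progress

# Halfway through a 200-generation run both operators sit at 50%.
print(dhm_ilc(100, 200), ilm_dhc(100, 200))
```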


2017 ◽  
Vol 36 (2) ◽  
pp. 423-441 ◽  
Author(s):  
Lizhen Shao ◽  
Fangyuan Zhao ◽  
Guangda Hu

Abstract In this article, a numerical method for the approximation of reachable sets of linear control systems is discussed. First, the continuous system is transformed into a discrete one with Runge–Kutta methods. Then, based on Benson's outer approximation algorithm for solving multiobjective optimization problems, we propose a variant of Benson's algorithm that sandwiches the reachable set of the discrete system between an inner approximation and an outer approximation. By specifying an approximation error, the quality of the approximations, measured in the Hausdorff distance, can be directly controlled. Furthermore, we use an illustrative example to demonstrate how the algorithm works. Finally, computational experiments illustrate the superior performance of our proposed algorithm compared to a recent algorithm from the literature.
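
A minimal sketch of the first step only, assuming the control input is held constant over each step (a zero-order-hold assumption); one classical Runge–Kutta (RK4) step applied to x' = Ax + Bu yields discrete-time matrices. The Benson-type sandwiching of the reachable set is not reproduced here.

```python
# Sketch: discretize x' = A x + B u with one classical RK4 step and a
# piecewise-constant input, giving x_{k+1} = A_d x_k + B_d u_k.
import numpy as np

def rk4_discretize(A, B, h):
    n = A.shape[0]
    I = np.eye(n)
    hA = h * A
    # Degree-4 truncations of the matrix exponential series produced by RK4
    A_d = I + hA + hA @ hA / 2 + hA @ hA @ hA / 6 + hA @ hA @ hA @ hA / 24
    B_d = h * (I + hA / 2 + hA @ hA / 6 + hA @ hA @ hA / 24) @ B
    return A_d, B_d

# Hypothetical double-integrator example with step size h = 0.1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(rk4_discretize(A, B, 0.1))
```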

