Application of the Smooth Approximation of the Probability Function in Some Applied Stochastic Programming Problems

Author(s):
Vitaliy Sobol,
Roman Torishnyi

2020, Vol 19 (1), pp. 180-217

In this paper we study one possible variant of smooth approximation of probability criteria in stochastic programming problems. The research applies to optimization problems for the probability function and the quantile function of a loss functional that depends on a control vector and a one-dimensional absolutely continuous random variable. The main idea of the approximation is to replace the discontinuous Heaviside function in the integral representation of the probability function with a smooth function that is continuous, differentiable, and has easily computable derivatives. An example of such a function is the distribution function of a random variable distributed according to the logistic law with zero mean and finite variance, which is a sigmoid. The parameter inversely proportional to the square root of the variance controls how closely the approximation matches the original function. This replacement yields a smooth approximation of the probability function whose derivatives with respect to the control vector and the other parameters of the problem are easily found. The article proves that the probability function approximation obtained by replacing the Heaviside function with the sigmoid converges to the original probability function, and an error estimate for this approximation is derived.
Next, approximate expressions for the derivatives of the probability function with respect to the control vector and the parameter of the function are obtained, and their convergence to the true derivatives is proved under a number of conditions on the loss functional. Using known relations between the derivatives of probability and quantile functions, approximate expressions for the derivatives of the quantile function with respect to the control vector and the probability level are obtained. Examples demonstrate how the proposed estimates can be applied to stochastic programming problems with probability-function and quantile-function criteria, including the case of a multidimensional random variable.
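The smoothing idea described in the abstract can be illustrated in a few lines of NumPy. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names and the toy loss functional are hypothetical, and the sample-mean estimate stands in for the integral representation of the probability function.

```python
import numpy as np

def sigmoid(t, theta):
    # Logistic CDF with scale 1/theta, written via tanh for numerical
    # stability; as theta grows it approaches the Heaviside step.
    return 0.5 * (1.0 + np.tanh(0.5 * theta * t))

def probability_estimate(loss, u, phi, samples, theta=100.0):
    # Smooth Monte Carlo estimate of P(loss(u, X) <= phi): the
    # indicator 1{loss <= phi} is replaced by sigmoid(phi - loss).
    return np.mean(sigmoid(phi - loss(u, samples), theta))

def probability_gradient(loss, loss_grad_u, u, phi, samples, theta=100.0):
    # Gradient of the smooth estimate with respect to u, by the chain
    # rule: d/du sigmoid(phi - loss) = -theta*s*(1-s) * dloss/du.
    s = sigmoid(phi - loss(u, samples), theta)
    w = theta * s * (1.0 - s)
    return -np.mean(w[:, None] * loss_grad_u(u, samples), axis=0)
```

For example, with the loss functional loss(u, x) = (u - x)^2 and X standard normal, `probability_estimate` at u = 0 with phi = 1 approximates P(X^2 <= 1), and `probability_gradient` approximates its derivative in u, which vanishes by symmetry.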



Informatica, 2015, Vol 26 (4), pp. 569-591
Author(s):
Valerijonas Dumskis,
Leonidas Sakalauskas

Author(s):
Jun Pei,
Zheng Zheng,
Hyunji Kim,
Lin Song,
Sarah Walworth,
...

An accurate scoring function is expected to correctly select the most stable structure from a set of pose candidates. One can hypothesize that a scoring function's ability to identify the most stable structure might be improved by emphasizing the most relevant atom pairwise interactions. However, it is hard to evaluate the relative importance of each atom pair using traditional means. With the introduction of machine learning methods, it has become possible to determine the relative importance of each atom pair present in a scoring function. In this work, we use the Random Forest (RF) method to refine a pair potential developed by our laboratory (GARF6) by identifying relevant atom pairs that optimize the performance of the potential on our given task. Our goal is to construct a machine learning (ML) model that can accurately differentiate the native ligand binding pose from candidate poses using a potential refined by RF optimization. We successfully constructed RF models on an unbalanced data set with the 'comparison' concept, and the resulting RF models were tested on CASF-2013. In a comparison of the performance of our RF models against 29 scoring functions, we found our models outperformed the other scoring functions in predicting the native pose. In addition, we used two artificially designed potential models to assess the importance of the GARF potential in the RF models: (1) a scrambled probability function set, obtained by mixing up the atom pairs and probability functions in GARF, and (2) a uniform probability function set, which shares the same peak positions as GARF but has fixed peak heights. The accuracy comparison between RF models based on the scrambled, uniform, and original GARF potentials clearly showed that the peak positions in the GARF potential are important while the well depths are not.
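The 'comparison' concept for handling the unbalanced native/decoy data set can be sketched as follows. This is a hypothetical illustration, not the authors' pipeline: the features stand in for pair-potential terms (not the actual GARF feature set), and the data are synthetic, with feature 0 deliberately made informative so the forest's feature importances single it out.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_pairs, n_features = 500, 8

# Synthetic pair-potential feature vectors for native poses and
# matched decoys; only feature 0 systematically separates them.
native = rng.normal(0.0, 1.0, (n_pairs, n_features))
decoy = native + rng.normal(0.0, 1.0, (n_pairs, n_features))
decoy[:, 0] += 2.0  # decoys score systematically worse on feature 0

# 'Comparison' examples: each training row is a native-minus-decoy
# feature difference (label 1) or its reverse (label 0), which
# balances the classes even when decoys vastly outnumber natives.
X = np.vstack([native - decoy, decoy - native])
y = np.concatenate([np.ones(n_pairs), np.zeros(n_pairs)])

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# feature_importances_ then ranks which pair terms matter most.
top_feature = int(np.argmax(rf.feature_importances_))
```

The feature-importance ranking is the mechanism by which an RF model can flag the relevant atom-pair terms in a refined potential.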

