The Convergence Rate of High-Dimensional Sample Quantiles for φ-Mixing Observation Sequences

Mathematics, 2021, Vol. 9 (6), pp. 647
Author(s): Ling Peng, Dong Han

In this paper, we obtain the convergence rate of high-dimensional sample quantiles for φ-mixing dependent observation sequences. The resulting convergence rate is shown to be faster than the one obtained via Hoeffding-type inequalities. Moreover, the convergence rate of high-dimensional sample quantiles for observation sequences taking discrete values is also provided.
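As a minimal illustration of the setting rather than the paper's proofs, the Python sketch below simulates a finite-state Markov chain, a standard example of a φ-mixing sequence taking discrete values, and checks that the sample quantile settles on the quantile of the stationary distribution as the sample size grows. The transition matrix, the value set, and all names are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Transition matrix of an aperiodic, irreducible finite-state Markov
# chain. Such chains are phi-mixing with a geometric rate, and they
# take discrete values, matching the second setting of the abstract.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
values = np.array([0.0, 1.0, 2.0])  # discrete observation values

def simulate(n):
    states = np.empty(n, dtype=int)
    states[0] = 0
    for t in range(1, n):
        states[t] = rng.choice(3, p=P[states[t - 1]])
    return values[states]

# Stationary distribution pi solves pi @ P = pi, i.e. the left
# eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

q = 0.5
true_quantile = values[np.searchsorted(np.cumsum(pi), q)]

# The sample quantile settles on the true quantile as n grows.
for n in (10**3, 10**4, 10**5):
    print(n, np.quantile(simulate(n), q), true_quantile)
```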

2019, Vol. 27 (4), pp. 699-725
Author(s): Hao Wang, Michael Emmerich, Thomas Bäck

Generating more evenly distributed samples in high-dimensional search spaces is the major purpose of the recently proposed mirrored sampling technique for evolution strategies. Mirrored sampling enlarges the diversity of the mutation samples and thereby improves the convergence rate. Motivated by this technique, this article introduces a new derandomized sampling technique called mirrored orthogonal sampling. The performance of this new technique is both theoretically analyzed and empirically studied on the sphere function. In particular, the mirrored orthogonal sampling technique is applied to the well-known Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The resulting algorithm is tested experimentally on the well-known Black-Box Optimization Benchmark (BBOB). On this benchmark, mirrored orthogonal sampling is found to outperform both the standard CMA-ES and its variant using mirrored sampling.
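The sampling idea admits a compact sketch. The snippet below illustrates mirrored orthogonal sampling inside a toy (1+λ) evolution strategy on the sphere function; it is not the authors' CMA-ES integration, and the particular recipe (QR-orthogonalization of a Gaussian matrix plus rescaling to chi-distributed lengths) together with all parameter choices are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def mirrored_orthogonal_samples(dim, n_pairs):
    """Draw n_pairs mutually orthogonal directions and return each one
    together with its mirror image (2 * n_pairs mutation vectors).

    Assumes n_pairs <= dim. The recipe here, QR-orthogonalizing a
    Gaussian matrix and rescaling each column to a chi-distributed
    length so the lengths match those of N(0, I) draws, is one common
    way to derandomize Gaussian sampling.
    """
    G = rng.standard_normal((dim, n_pairs))
    Q, _ = np.linalg.qr(G)               # orthonormal columns
    lengths = np.linalg.norm(rng.standard_normal((dim, n_pairs)), axis=0)
    Z = Q * lengths                      # chi-distributed lengths
    return np.concatenate([Z.T, -Z.T])   # mirrored pairs

def sphere(x):
    return float(x @ x)

# Toy (1+lambda)-ES with a fixed step size, only to exercise the sampler.
x, sigma = np.ones(10), 0.3
for _ in range(100):
    candidates = x + sigma * mirrored_orthogonal_samples(x.size, 5)
    best = min(candidates, key=sphere)
    if sphere(best) < sphere(x):
        x = best
print("sphere value after 100 iterations:", sphere(x))
```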


2017, Vol. 32 (4), pp. 603-614
Author(s): Yi Wu, Xuejun Wang, Shuhe Hu

In this paper, we study the moderate deviation principle for sample quantiles and order statistics of stationary m-dependent random variables. The results extend the corresponding ones for independent and identically distributed sequences to stationary m-dependent sequences.
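For illustration only (the paper's moderate deviation principle is a limit theorem that a simulation can suggest but not prove), the sketch below builds a stationary m-dependent sequence as a moving average of i.i.d. Gaussians and tracks how far the sample quantile deviates from the true quantile as n grows; the construction and all parameters are our own choices.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
m = 3  # X_t and X_s are independent whenever |t - s| > m

def m_dependent_sample(n):
    # Moving average of i.i.d. Gaussians: a standard construction of a
    # stationary m-dependent sequence.
    e = rng.standard_normal(n + m)
    return np.mean([e[k:k + n] for k in range(m + 1)], axis=0)

p = 0.75
# Each X_t is N(0, 1/(m+1)), so the true p-quantile has a closed form.
true_q = NormalDist(0.0, (1.0 / (m + 1)) ** 0.5).inv_cdf(p)

# Average absolute deviation of the sample quantile, 50 replications.
for n in (10**3, 10**4, 10**5):
    devs = [abs(np.quantile(m_dependent_sample(n), p) - true_q)
            for _ in range(50)]
    print(n, sum(devs) / len(devs))
```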


2020, Vol. 34 (03), pp. 2425-2432
Author(s): Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh

Scaling Bayesian optimisation (BO) to high-dimensional search spaces is an active and open research problem, particularly when no assumptions are made about the structure of the function. The main reason is that at each iteration, BO requires the global maximisation of an acquisition function, which is itself a non-convex optimisation problem in the original search space. With growing dimensions, the computational budget for this maximisation becomes increasingly inadequate, leading to inaccurate solutions. This inaccuracy adversely affects both the convergence and the efficiency of BO. We propose a novel approach in which the acquisition function only requires maximisation over a discrete set of low-dimensional subspaces embedded in the original high-dimensional search space. Unlike many recent high-dimensional BO methods, our method is free of any low-dimensional structure assumption on the function. Optimising the acquisition function in low-dimensional subspaces allows our method to obtain accurate solutions within a limited computational budget. We show that, in spite of this convenience, our algorithm remains convergent. In particular, the cumulative regret of our algorithm grows only sub-linearly with the number of iterations. More importantly, as evident from our regret bounds, our algorithm provides a way to trade the convergence rate against the number of subspaces used in the optimisation. Finally, when the number of subspaces is "sufficiently large", the cumulative regret of our algorithm is at most O*(√(Tγ_T)), as opposed to O*(√(DTγ_T)) for the GP-UCB of Srinivas et al. (2012), removing a crucial factor of √D, where D is the dimensionality of the input space. We evaluate our method extensively in empirical experiments, showing that its sample efficiency is better than that of existing methods on many optimisation problems involving dimensions up to 5000.
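To make the computational idea concrete, here is a hedged Python sketch of maximising an acquisition function over a discrete set of random low-dimensional affine subspaces through the incumbent point. The subspace construction, the random-search inner loop, and the toy quadratic acquisition are stand-ins chosen for this sketch; the paper's actual subspace scheme and its regret guarantees are more specific.

```python
import numpy as np

rng = np.random.default_rng(0)

def maximise_on_subspaces(acq, x_best, D, n_subspaces=10, d=2,
                          n_eval=200, bounds=(-1.0, 1.0)):
    """Maximise an acquisition function over a discrete set of random
    d-dimensional affine subspaces through the incumbent x_best.

    Only a sketch of the general idea; not the paper's construction.
    """
    lo, hi = bounds
    best_x, best_val = x_best, acq(x_best)
    for _ in range(n_subspaces):
        # Orthonormal basis of a random d-dimensional subspace.
        B, _ = np.linalg.qr(rng.standard_normal((D, d)))
        # Cheap random search over the d subspace coordinates instead
        # of a non-convex search over all D dimensions.
        for c in rng.uniform(lo, hi, size=(n_eval, d)):
            x = np.clip(x_best + B @ c, lo, hi)
            v = acq(x)
            if v > best_val:
                best_x, best_val = x, v
    return best_x

# Toy stand-in for an acquisition function (a GP upper confidence
# bound would appear here in a real BO loop).
D = 50
target = rng.uniform(-0.5, 0.5, size=D)
acq = lambda x: -float(np.sum((x - target) ** 2))

x_next = maximise_on_subspaces(acq, np.zeros(D), D)
print("acquisition at proposal:", acq(x_next))
```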

