GS-OPT: A new fast stochastic algorithm for solving non-convex optimization problems
<p>Non-convex optimization plays an important role in machine learning. However, the theoretical understanding of non-convex optimization remains rather limited. Designing efficient algorithms for non-convex optimization has attracted a great deal of attention from researchers around the world, but these problems are usually NP-hard to solve. In this paper, we propose a new algorithm, named GS-OPT (General Stochastic OPTimization), which is effective for solving non-convex problems. Our idea is to combine two stochastic bounds of the objective function, constructed using a common discrete probability distribution, namely the Bernoulli distribution. We analyze GS-OPT carefully from both theoretical and experimental perspectives. We also apply GS-OPT to the posterior inference problem in latent Dirichlet allocation. Empirical results show that our approach is often more efficient than previous ones.</p>
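The core idea above — stochastically selecting between two components of the objective with a Bernoulli draw at each iteration — can be sketched on a toy one-dimensional problem. This is only an illustrative sketch, not the authors' exact GS-OPT procedure: the objective components `g1`, `g2`, the selection probability `p`, the step-size schedule, and the projection onto `[0, 1]` are all assumptions made for the example.

```python
import random

# Toy illustration (NOT the authors' exact GS-OPT): minimise
# f(x) = g1(x) + g2(x) over [0, 1], where a Bernoulli draw at each
# step decides which component enters the stochastic surrogate.
# All concrete choices below (g1, g2, p, step sizes) are assumptions.

def g1(x):              # first component of the objective (assumed)
    return (x - 0.3) ** 2

def g2(x):              # second component of the objective (assumed)
    return 0.5 * (x - 0.7) ** 2

def dg1(x):             # gradient of g1
    return 2.0 * (x - 0.3)

def dg2(x):             # gradient of g2
    return 1.0 * (x - 0.7)

def gs_opt_sketch(x0=0.9, p=0.5, iters=2000, seed=0):
    rng = random.Random(seed)
    x = x0
    for t in range(1, iters + 1):
        # Bernoulli draw picks the component used at this step;
        # dividing by its probability keeps the gradient estimate
        # unbiased: E[grad] = dg1(x) + dg2(x).
        if rng.random() < p:
            grad = dg1(x) / p
        else:
            grad = dg2(x) / (1.0 - p)
        x -= grad / (t + 1)           # diminishing step size
        x = min(max(x, 0.0), 1.0)     # project back onto [0, 1]
    return x

x_hat = gs_opt_sketch()
```

The Bernoulli selection makes each iteration touch only one component while the importance weights `1/p` and `1/(1-p)` keep the stochastic gradient unbiased, which is the usual way such randomized bound combinations are analyzed.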