Automated handling of complex chemical structures in Z-matrix coordinates - the chemcoord library

Author(s): Oskar Weser, Björn Hein Hanke, Ricardo Mata

In this work, we present a fully automated method for the construction of chemically meaningful sets of non-redundant internal coordinates (also commonly denoted as Z-matrices) from the Cartesian coordinates of a molecular system. Particular focus is placed on avoiding ill-defined angles and dihedrals arising from linear arrangements of atoms, so that a well-defined transformation to Cartesian coordinates is consistently guaranteed, even after structural changes. The representations thus obtained are particularly well suited for pathway construction in double-ended methods for transition state search and for optimisations with non-linear constraints. Analytical gradients for the transformation between the coordinate systems were derived for the first time, which allows analytical geometry optimisations purely in Z-matrix coordinates. The geometry optimisation was coupled with a symbolic algebra package to support arbitrary non-linear constraints in Z-matrix coordinates, while retaining analytical energy gradient conversion. Sample applications are provided for a number of common chemical reactions and illustrative examples where these new algorithms can be used to automatically produce chemically reasonable structure interpolations, or to perform non-linearly constrained optimisations of molecules.
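The chemcoord library described here is distributed as a Python package. A minimal round-trip sketch is shown below, assuming the chemcoord 2.x API (Cartesian.read_xyz, get_zmat, safe_loc, get_cartesian); the input file name and the edited dihedral are placeholders rather than an example taken from the paper.

```python
import chemcoord as cc

# Read a structure in Cartesian coordinates (placeholder file name).
molecule = cc.Cartesian.read_xyz('butane.xyz')

# Automatically construct a chemically meaningful, non-redundant Z-matrix.
zmat = molecule.get_zmat()

# Internal coordinates can be edited like a pandas DataFrame; safe_loc is
# assumed here to keep the back-transformation to Cartesians consistent.
zmat.safe_loc[3, 'dihedral'] = 90.0

# Transform the modified Z-matrix back to Cartesian coordinates.
new_structure = zmat.get_cartesian()
```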

2021, pp. 1-28
Author(s): Yuan Shen, Yannian Zuo, Liming Sun, Xiayang Zhang

We consider the linearly constrained separable convex optimization problem whose objective function is separable with respect to $p$ blocks of variables. A number of methods have been proposed and extensively studied in the past decade. In particular, a modified strictly contractive Peaceman–Rachford splitting method (SC-PRCM) [S. H. Jiang and M. Li, A modified strictly contractive Peaceman–Rachford splitting method for multi-block separable convex programming, J. Ind. Manag. Optim. 14(1) (2018) 397-412] has been well studied in the literature for the special case of $p = 3$. Based on the modified SC-PRCM, we present modified proximal symmetric ADMMs (MPSADMMs) to solve the multi-block problem. In MPSADMMs, all subproblems but the first are augmented with a simple proximal term, and the multipliers are updated twice per iteration. At the end of each iteration, the output is corrected via a simple correction step. Without stringent assumptions, we establish the global convergence result and an $O(1/t)$ convergence rate in the ergodic sense for the new algorithms. Preliminary numerical results show that the proposed algorithms are effective for solving linearly constrained quadratic programming and robust principal component analysis problems.
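The update pattern described above (proximal terms on the later subproblems and two multiplier updates per iteration) can be illustrated on a simple two-block toy problem. The sketch below is not the authors' MPSADMM; it is a generic symmetric (Peaceman–Rachford-type) ADMM with a proximal term on the second block, applied to min 0.5||x - a||^2 + 0.5||z - b||^2 subject to x + z = c, with all parameter values chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
a, b, c = rng.standard_normal((3, n))   # min 0.5||x-a||^2 + 0.5||z-b||^2  s.t.  x + z = c

rho, alpha, tau = 1.0, 0.9, 0.5          # penalty, multiplier step, proximal weight (illustrative)
x = z = u = np.zeros(n)                  # u is the scaled multiplier

for _ in range(200):
    # x-subproblem (closed form for this quadratic objective)
    x = (a + rho * (c - z - u)) / (1.0 + rho)
    # first multiplier update
    u = u + alpha * (x + z - c)
    # z-subproblem with a simple proximal term (tau/2)||z - z_old||^2
    z = (b + rho * (c - x - u) + tau * z) / (1.0 + rho + tau)
    # second multiplier update
    u = u + alpha * (x + z - c)

print("constraint residual:", np.linalg.norm(x + z - c))
```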


Author(s): Ion Necoara, Martin Takáč

In this paper we consider large-scale smooth optimization problems with multiple linear coupled constraints. Because the constraints are non-separable, arbitrary random sketching is not guaranteed to work; we therefore first investigate necessary and sufficient conditions on the sketch sampling for the resulting algorithms to be well defined. Based on these sampling conditions, we develop new sketch descent methods for solving general smooth linearly constrained problems, in particular a random sketch descent (RSD) method and an accelerated random sketch descent (A-RSD) method. To our knowledge, this is the first convergence analysis of RSD algorithms for optimization problems with multiple non-separable linear constraints. In the general case, when the objective function is smooth and non-convex, we prove a sublinear rate in expectation for the non-accelerated variant with respect to an appropriate optimality measure. In the smooth convex case, we derive sublinear convergence rates in the expected objective value for both the non-accelerated and accelerated variants. Additionally, if the objective function satisfies a strong-convexity-type condition, both algorithms converge linearly in expectation. In special cases where complexity bounds are known for particular sketching algorithms, such as coordinate descent methods for optimization problems with a single linear coupled constraint, our theory recovers the best known bounds. Finally, we present several numerical examples to illustrate the performance of the new algorithms.
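A minimal sketch of the coordinate-sketching idea is shown below for a simple quadratic objective with a few coupled linear constraints: at every iteration a random subset of coordinates is sampled, and a gradient step is taken inside the null space of the constraint matrix restricted to those coordinates, so feasibility is preserved. This illustrates the general sketch-descent principle rather than the exact RSD/A-RSD schemes analysed in the paper; all names and parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 3                              # variables, coupled linear constraints
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
y = rng.standard_normal(n)                # objective f(x) = 0.5 * ||x - y||^2

def null_space(M, tol=1e-10):
    """Orthonormal basis of the null space of M, computed via SVD."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

x = np.linalg.lstsq(A, b, rcond=None)[0]  # feasible starting point (A x = b)
step = 0.5
tau = 8                                   # sketch size, larger than m so the null space is non-trivial

for _ in range(2000):
    J = rng.choice(n, size=tau, replace=False)   # random coordinate sketch
    N = null_space(A[:, J])                      # directions that keep A x = b
    if N.size == 0:
        continue
    grad = x - y                                 # gradient of f
    x[J] -= step * N @ (N.T @ grad[J])           # projected gradient step within the sketch

print("constraint residual:", np.linalg.norm(A @ x - b))
```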


2020, Vol. 2020, pp. 1-13
Author(s): Sha Lu, Zengxin Wei

The proximal point algorithm is a class of methods widely used in recent years to solve optimization problems as well as practical problems such as those arising in machine learning. In this paper, a framework of accelerated proximal point algorithms is presented for convex minimization with linear constraints. The algorithm can be seen as an extension of Güler's methods for unconstrained optimization and linear programming problems. We prove that, under appropriate conditions, the sequence generated by the algorithm converges to a KKT solution of the original problem with a convergence rate of $O(1/k^2)$.
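For the unconstrained setting that this framework extends, Güler-type acceleration amounts to applying a Nesterov extrapolation step to the proximal point iteration. The sketch below illustrates this on a convex quadratic with a closed-form proximal operator; it is a simplified illustration (the constraint handling and KKT analysis of the paper are not shown), and all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
Q = rng.standard_normal((n, n)); Q = Q @ Q.T / n   # convex quadratic f(x) = 0.5 x'Qx - q'x
q = rng.standard_normal(n)
lam = 1.0                                          # proximal parameter

def prox_f(v, lam):
    # prox_{lam f}(v) = argmin_x f(x) + ||x - v||^2 / (2 lam)  =>  (lam Q + I) x = lam q + v
    return np.linalg.solve(lam * Q + np.eye(n), lam * q + v)

x = x_prev = np.zeros(n)
t = 1.0
for _ in range(200):
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x + ((t - 1.0) / t_next) * (x - x_prev)    # Nesterov-style extrapolation
    x_prev, x, t = x, prox_f(y, lam), t_next       # accelerated proximal point step

x_star = np.linalg.solve(Q, q)
print("distance to minimiser:", np.linalg.norm(x - x_star))
```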

