convergence result
Recently Published Documents


TOTAL DOCUMENTS: 415 (FIVE YEARS: 126)

H-INDEX: 27 (FIVE YEARS: 5)

Author(s): Abdelouahed Kouibia, Miguel Pasadas

Abstract Standard offset surfaces are defined as the locus of points lying at a constant distance, along the unit normal direction, from a generator surface. Offsets are widely used in practical applications such as tolerance analysis, geometric optics and robot path-planning. In some engineering applications the concept of the standard offset must be extended to the generalized offset, where the offset distance is not necessarily constant and the offset direction is not necessarily along the normal. A generalized offset is usually functionally more complex than its progenitor because of the square root that appears in the expression of the unit normal vector, so an approximation method for its construction is necessary. In many situations it is also necessary to fill or reconstruct a function defined in a domain in which information is missing inside one or several sub-domains (holes). In some practical cases there may be specific geometrical constraints, of industrial or design type, for example a prescribed volume inside each of these holes. The problem of filling holes or completing a 3D surface arises in many areas of computational graphics, such as CAGD, CAD-CAM, Earth Sciences, computer vision in robotics, and image reconstruction from satellite and radar information. In this work we present an approximation method for filling holes of the generalized offset of a surface when information is missing in a sub-domain of the function that defines it. We prove the existence and uniqueness of the solution of this problem, show how to compute it, and establish a convergence result for this approximation method. Finally, we give some graphical and numerical examples.
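For a rough sense of the object being approximated (this is not the authors' hole-filling scheme), the following sketch numerically evaluates one point of a generalized offset of a parametric surface; the generator surface S, the non-constant distance function d(u, v), the finite-difference normals and the step size h are all illustrative assumptions.

```python
import numpy as np

def generalized_offset_point(S, d, u, v, h=1e-5):
    """Evaluate one point of a generalized offset of the parametric surface S(u, v).

    S: callable returning a 3-vector (the generator surface).
    d: callable returning the offset distance, not necessarily constant.
    The offset direction used here is the unit normal, obtained from the cross
    product of finite-difference partial derivatives; the square root hidden in
    the normalization is what makes the offset functionally more complex than
    its progenitor.
    """
    Su = (S(u + h, v) - S(u - h, v)) / (2 * h)   # approximate dS/du
    Sv = (S(u, v + h) - S(u, v - h)) / (2 * h)   # approximate dS/dv
    n = np.cross(Su, Sv)
    n = n / np.linalg.norm(n)                    # unit normal (square root here)
    return S(u, v) + d(u, v) * n

# Illustrative generator surface (a paraboloid patch) and variable offset distance.
S = lambda u, v: np.array([u, v, u**2 + v**2])
d = lambda u, v: 0.1 + 0.05 * np.sin(u)
print(generalized_offset_point(S, d, 0.3, 0.4))
```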


2022 · Vol 0 (0)
Author(s): Fouzia Amir, Ali Farajzadeh, Jehad Alzabut

Abstract Multiobjective optimization is optimization with several conflicting objective functions. It is generally difficult, however, to find an optimal solution that satisfies all objectives from a mathematical point of view. The main objective of this article is to present an improved proximal method involving a quasi-distance for constrained multiobjective optimization problems under a locally Lipschitz condition on the cost function. The motivation for studying the proximal method with quasi-distances comes from the widespread applications of quasi-distances in computer theory. To study the convergence result, Fritz John's necessary optimality condition for weak Pareto solutions is used. Suitable conditions are provided to guarantee that the cluster points of the generated sequences are Pareto–Clarke critical points.
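A minimal sketch of a proximal-point iteration with a quasi-distance regularizer, in the scalar unconstrained case (the paper treats constrained multiobjective problems with locally Lipschitz costs); the quasi-distance q, the objective f and the parameter lam are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def quasi_distance(x, y, a=1.0, b=2.0):
    # An asymmetric quasi-distance on the real line: q(x, y) != q(y, x) in general,
    # q(x, y) >= 0, and q(x, y) = 0 iff x = y.
    return a * max(x - y, 0.0) + b * max(y - x, 0.0)

def proximal_point(f, x0, lam=1.0, iters=50):
    # Each step minimizes the objective plus a quasi-distance penalty to the
    # previous iterate, the basic structure behind proximal methods.
    x = x0
    for _ in range(iters):
        x = minimize_scalar(lambda z: f(z) + lam * quasi_distance(z, x) ** 2).x
    return x

f = lambda z: abs(z - 1.0) + 0.5 * z ** 2   # nonsmooth illustrative objective
print(proximal_point(f, x0=5.0))
```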


2022 · Vol 27 · pp. 1-22
Author(s): Yun-hua Weng, Tao Chen, Nan-jing Huang, Donal O'Regan

We consider a new fractional impulsive differential hemivariational inequality, which captures the required characteristics of both the hemivariational inequality and the fractional impulsive differential equation within the same framework. By utilizing a surjectivity theorem and a fixed point theorem, we establish an existence and uniqueness theorem for this problem. Moreover, we investigate the perturbation problem of the fractional impulsive differential hemivariational inequality to prove a convergence result, which describes the stability of the solution with respect to the perturbation data. Finally, our main results are applied to obtain new results for a frictional contact problem in which the surface traction is driven by the fractional impulsive differential equation.


Symmetry · 2022 · Vol 14 (1) · pp. 80
Author(s): Jun Huo, Jielan Yang, Guoxin Wang, Shengwei Yao

In this paper, a three-parameter subspace conjugate gradient method is proposed for solving large-scale unconstrained optimization problems. By minimizing the quadratic approximate model of the objective function on a new special three-dimensional subspace, the embedded parameters are determined and the corresponding algorithm is obtained. The global convergence of the proposed method for general nonlinear functions is established under mild assumptions. In numerical experiments, the proposed algorithm is compared with SMCG_NLS and SMCG_Conic; the results show that the given algorithm is robust and efficient.
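A generic sketch of the subspace idea behind such methods: at each iteration the quadratic model is minimized over a low-dimensional subspace spanned by the current gradient and the two most recent steps. The particular three-dimensional subspace and parameter formulas of the proposed method are not reproduced here, and the test problem is an illustrative convex quadratic.

```python
import numpy as np

def subspace_step(A, g, basis):
    """Minimize the quadratic model m(d) = g^T d + 0.5 d^T A d over span(basis)."""
    V = np.column_stack(basis)
    G = V.T @ A @ V                                    # reduced Hessian
    alpha = np.linalg.lstsq(G, -V.T @ g, rcond=None)[0]
    return V @ alpha

# Illustrative strictly convex quadratic: f(x) = 0.5 x^T A x - b^T x.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)

x = np.zeros(50)
steps = [rng.standard_normal(50), rng.standard_normal(50)]  # seed the subspace
for _ in range(20):
    g = A @ x - b                              # gradient of the quadratic
    d = subspace_step(A, g, [g] + steps)       # three-dimensional subspace
    x = x + d
    steps = [d, steps[0]]                      # keep the two latest steps
print(np.linalg.norm(A @ x - b))               # residual shrinks rapidly here
```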


2021 · pp. 1-20
Author(s): Shengda Zeng, Stanisław Migórski, Domingo A. Tarzia

The goal of this paper is to investigate a new class of elliptic mixed boundary value problems involving a nonlinear and nonhomogeneous partial differential operator ([Formula: see text]-Laplacian) and a multivalued term represented by Clarke's generalized gradient. First, we apply a surjectivity result for multivalued pseudomonotone operators to examine the existence of weak solutions under mild hypotheses. Then, a comparison theorem is delivered, and a convergence result, which reveals the asymptotic behavior of the solution as the parameter (heat transfer coefficient) tends to infinity, is obtained. Finally, we establish a continuous dependence result for the solution of the boundary value problem with respect to the data.


2021 · Vol 38 (1) · pp. 015001
Author(s): Yanan Zhao, Chunlin Wu, Qiaoli Dong, Yufei Zhao

Abstract We consider a wavelet-based image reconstruction model with ℓ_p (0 < p < 1) quasi-norm regularization, which is a non-convex and non-Lipschitz minimization problem. For solving this model, Figueiredo et al (2007 IEEE Trans. Image Process. 16 2980–2991) utilized the classical majorization-minimization framework and proposed the so-called Isoft algorithm. This algorithm is computationally efficient, but whether it converges has not yet been established. In this paper, we propose a new algorithm that accelerates the Isoft algorithm using Nesterov's extrapolation technique. Furthermore, a complete convergence analysis for the new algorithm is established. We prove that the whole sequence generated by this algorithm converges to a stationary point of the objective function. This convergence result contains the convergence of the Isoft algorithm as a special case. Numerical experiments demonstrate the good performance of our new algorithm.
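A schematic of the two ingredients described above: a majorization-minimization step (weighted soft-thresholding) for the ℓ_p quasi-norm combined with Nesterov-type extrapolation between iterates. The operator K, the data, and the parameters lam, p, eps are illustrative assumptions, and this is not the authors' exact Isoft or accelerated algorithm.

```python
import numpy as np

def soft_threshold(z, t):
    # Componentwise shrinkage: proximal operator of a weighted l1 term.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def accelerated_mm_lp(K, y, lam=1.0, p=0.5, iters=200, eps=0.1):
    """Schematic accelerated MM iteration for
       min_x 0.5*||K x - y||^2 + lam * sum_i |x_i|^p,  0 < p < 1.
    Each |x_i|^p is majorized by a weighted l1 term at the current iterate,
    the subproblem is handled by one proximal-gradient step, and iterates are
    extrapolated with Nesterov momentum (eps keeps the weights finite at 0)."""
    L = np.linalg.norm(K, 2) ** 2                    # Lipschitz constant of the data term
    x_prev = x = np.zeros(K.shape[1])
    t_prev = 1.0
    for _ in range(iters):
        t = (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2)) / 2.0
        z = x + ((t_prev - 1.0) / t) * (x - x_prev)  # Nesterov extrapolation
        grad = K.T @ (K @ z - y)
        w = p * (np.abs(x) + eps) ** (p - 1.0)       # MM weights majorizing |.|^p
        x_prev, x = x, soft_threshold(z - grad / L, lam * w / L)
        t_prev = t
    return x

rng = np.random.default_rng(1)
K = rng.standard_normal((100, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = 1.0
y = K @ x_true + 0.01 * rng.standard_normal(100)
x_hat = accelerated_mm_lp(K, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error
```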


Author(s): Helmut Abels

Abstract We consider the sharp interface limit of a convective Allen–Cahn equation, which can be part of a Navier–Stokes/Allen–Cahn system, for different scalings of the mobility $m_\varepsilon = m_0\varepsilon^{\theta}$ as $\varepsilon \rightarrow 0$. In the case $\theta > 2$ we show a (non-)convergence result in the sense that the concentrations converge to the solution of a transport equation, but they do not behave like a rescaled optimal profile in the normal direction to the interface, as in the case $\theta = 0$. Moreover, we show that an associated mean curvature functional does not converge to the corresponding functional for the sharp interface. Finally, we discuss the convergence in the cases $\theta = 0, 1$ by the method of formally matched asymptotics.
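For orientation, one common formulation of a convective Allen–Cahn equation with this mobility scaling is given below; this is an assumption for illustration, and the precise system and potential used in the paper may differ.

```latex
% Convective Allen--Cahn equation with scaled mobility (illustrative form):
\begin{equation}
  \partial_t c_\varepsilon + \mathbf{v}_\varepsilon \cdot \nabla c_\varepsilon
  = m_\varepsilon \Bigl( \Delta c_\varepsilon
    - \tfrac{1}{\varepsilon^{2}} f'(c_\varepsilon) \Bigr),
  \qquad m_\varepsilon = m_0 \varepsilon^{\theta}, \qquad \varepsilon \to 0,
\end{equation}
% where f is a double-well potential and v_eps is a given (or coupled) velocity field.
```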


2021
Author(s): Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, Yuejie Chi

Preconditioning and Regularization Enable Faster Reinforcement Learning
Natural policy gradient (NPG) methods, in conjunction with entropy regularization to encourage exploration, are among the most popular policy optimization algorithms in contemporary reinforcement learning. Despite this empirical success, the theoretical underpinnings of NPG methods remain severely limited. In "Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization", Cen, Cheng, Chen, Wei, and Chi develop nonasymptotic convergence guarantees for entropy-regularized NPG methods under softmax parameterization, focusing on tabular discounted Markov decision processes. Assuming access to exact policy evaluation, the authors demonstrate that the algorithm converges linearly at a rate that is independent of the dimension of the state-action space. Moreover, the algorithm is provably stable with respect to inexactness of policy evaluation. Accommodating a wide range of learning rates, this convergence result highlights the role of preconditioning and regularization in enabling fast convergence.
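A schematic tabular implementation of the kind of update analyzed: exact entropy-regularized policy evaluation followed by a multiplicative NPG-style policy update under softmax parameterization. The MDP, the effective step size eta_eff, and the exact exponent and scaling conventions are illustrative assumptions and may differ from the paper's.

```python
import numpy as np

def soft_policy_eval(P, r, pi, gamma, tau, iters=500):
    """Iteratively evaluate the entropy-regularized Q-function of policy pi.
    P: transitions of shape (S, A, S); r: rewards of shape (S, A)."""
    Q = np.zeros_like(r)
    for _ in range(iters):
        V = np.sum(pi * (Q - tau * np.log(pi)), axis=1)  # soft state values
        Q = r + gamma * (P @ V)                          # regularized Bellman equation
    return Q

def npg_entropy_step(pi, Q, eta_eff, tau):
    """One multiplicative NPG-style update: shrink the old log-policy, add the
    regularized Q-values, then renormalize (softmax)."""
    logits = (1.0 - eta_eff * tau) * np.log(pi) + eta_eff * Q
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

# Small random tabular MDP (illustrative).
rng = np.random.default_rng(0)
S, A, gamma, tau, eta_eff = 5, 3, 0.9, 0.1, 0.5
P = rng.dirichlet(np.ones(S), size=(S, A))               # P[s, a, s']
r = rng.uniform(size=(S, A))
pi = np.full((S, A), 1.0 / A)                            # uniform initial policy
for _ in range(50):
    Q = soft_policy_eval(P, r, pi, gamma, tau)           # exact policy evaluation
    pi = npg_entropy_step(pi, Q, eta_eff, tau)
```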


Author(s): Yanqing Yin

The aim of this paper is to investigate the spectral properties of sample covariance matrices under a more general population. We consider a class of matrices of the form [Formula: see text], where [Formula: see text] is a [Formula: see text] nonrandom matrix and [Formula: see text] is an [Formula: see text] matrix consisting of i.i.d. standard complex entries. [Formula: see text] as [Formula: see text], while [Formula: see text] can be arbitrary but no smaller than [Formula: see text]. We first prove that, under some mild assumptions, with probability 1 and for all large [Formula: see text], there are no eigenvalues in any closed interval contained in an open interval that lies outside the supports of the limiting distributions for all sufficiently large [Formula: see text]. Then we obtain a strong convergence result for the extreme eigenvalues as an extension of the Bai–Yin law.
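A quick numerical illustration of the special case where the population matrix is the identity (the paper treats a general nonrandom population matrix): the extreme eigenvalues of the sample covariance matrix settle near the edges of the Marchenko–Pastur support, which is the content of the classical Bai–Yin law.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4000, 1000
# n x p matrix of i.i.d. standard complex entries (E|x|^2 = 1).
X = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
S = (X.conj().T @ X) / n                      # p x p sample covariance matrix
eigs = np.linalg.eigvalsh(S)                  # real eigenvalues (Hermitian matrix)
c = p / n
print(eigs.min(), (1 - np.sqrt(c)) ** 2)      # smallest eigenvalue vs. lower edge
print(eigs.max(), (1 + np.sqrt(c)) ** 2)      # largest eigenvalue vs. upper edge
```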

