Yosida Regularization
Recently Published Documents

TOTAL DOCUMENTS: 27 (five years: 7)
H-INDEX: 8 (five years: 1)

2021, Vol 0 (0), pp. 0
Author(s): Qiyuan Wei, Liwei Zhang

<p style='text-indent:20px;'>An accelerated differential equation system with Yosida regularization, together with its numerical discretization scheme, for solving a generalized equation is investigated. Given a maximal monotone operator <inline-formula><tex-math id="M1">\begin{document}$ T $\end{document}</tex-math></inline-formula> on a Hilbert space, this paper studies the asymptotic behavior of the solution trajectories of the differential equation</p><p style='text-indent:20px;'><disp-formula> <label/> <tex-math id="FE1"> \begin{document}$ \begin{equation} \dot{x}(t)+T_{\lambda(t)}(x(t)-\alpha(t)T_{\lambda(t)}(x(t))) = 0,\quad t\geq t_0\geq 0, \end{equation} $\end{document} </tex-math></disp-formula></p><p style='text-indent:20px;'>toward the solution set <inline-formula><tex-math id="M2">\begin{document}$ T^{-1}(0) $\end{document}</tex-math></inline-formula> of the generalized equation <inline-formula><tex-math id="M3">\begin{document}$ 0 \in T(x) $\end{document}</tex-math></inline-formula>. With suitable choices of the parameters <inline-formula><tex-math id="M4">\begin{document}$ \lambda(t) $\end{document}</tex-math></inline-formula> and <inline-formula><tex-math id="M5">\begin{document}$ \alpha(t) $\end{document}</tex-math></inline-formula>, we prove weak convergence of the trajectory to some point of <inline-formula><tex-math id="M6">\begin{document}$ T^{-1}(0) $\end{document}</tex-math></inline-formula> with <inline-formula><tex-math id="M7">\begin{document}$ \|\dot{x}(t)\|\leq {\rm O}(1/t) $\end{document}</tex-math></inline-formula> as <inline-formula><tex-math id="M8">\begin{document}$ t\rightarrow +\infty $\end{document}</tex-math></inline-formula>. Interestingly, under an upper Lipschitzian condition, strong convergence and faster convergence rates can be obtained. For the numerical discretization of the system, the uniform convergence of the Euler approximate trajectory <inline-formula><tex-math id="M9">\begin{document}$ x^{h}(t) \rightarrow x(t) $\end{document}</tex-math></inline-formula> on the interval <inline-formula><tex-math id="M10">\begin{document}$ [0,+\infty) $\end{document}</tex-math></inline-formula> is demonstrated as the step size <inline-formula><tex-math id="M11">\begin{document}$ h \rightarrow 0 $\end{document}</tex-math></inline-formula>.</p>
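To make the dynamics concrete, here is a minimal numerical sketch, not the paper's scheme or parameter choices: an explicit Euler discretization of the system for the toy maximal monotone operator T(x) = Ax with A symmetric positive definite, where the Yosida approximation T_λ is computed through the resolvent (I + λA)⁻¹. The constant λ, α, and step size h are illustrative assumptions; the paper uses time-dependent λ(t), α(t) to obtain its convergence rates.

```python
import numpy as np

# Toy maximal monotone operator: T(x) = A x with A symmetric positive definite,
# so the generalized equation 0 ∈ T(x) has the unique solution x = 0.
A = np.array([[2.0, 0.5], [0.5, 1.0]])

def yosida(x, lam):
    """Yosida approximation T_lam = (I - J_lam)/lam, resolvent J_lam = (I + lam*A)^{-1}."""
    J = np.linalg.solve(np.eye(2) + lam * A, x)
    return (x - J) / lam

# Explicit Euler discretization of  x'(t) + T_lam(x - alpha*T_lam(x)) = 0.
x = np.array([1.0, -2.0])
h, lam, alpha = 0.05, 0.5, 0.1
for _ in range(2000):
    v = yosida(x, lam)
    x = x - h * yosida(x - alpha * v, lam)

print(np.linalg.norm(x))  # trajectory approaches the solution set T^{-1}(0) = {0}
```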


Author(s): Lei Wang, Hui Huang

Image reconstruction in fluorescence molecular tomography involves seeking stable and meaningful solutions via the inversion of a highly under-determined and severely ill-posed linear mapping. An attractive scheme consists of minimizing a convex objective function that comprises a quadratic error term added to a convex and nonsmooth sparsity-promoting regularizer. Choosing the [Formula: see text]-norm as a particular case of a vast class of nonsmooth convex regularizers, our paper proposes a gradient-based first-order optimization algorithm with low per-iteration complexity for the [Formula: see text]-regularized least-squares inverse problem of image reconstruction. Our algorithm relies on a combination of two ideas applied to the nonsmooth convex objective function: Moreau–Yosida regularization and inertial dynamics-based acceleration. We also incorporate into our algorithm a gradient-based adaptive restart strategy to further enhance the practical performance. Extensive numerical experiments illustrate that in several representative test cases (covering different depths of small fluorescent inclusions, different noise levels and different separation distances between small fluorescent inclusions), our algorithm can significantly outperform three state-of-the-art algorithms in terms of reconstruction CPU time, while all four algorithms produce almost identical reconstructed images.
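As a rough illustration of the two ingredients named above, the following sketch combines inertial (Nesterov-style) acceleration with a gradient-based adaptive restart for an l1-regularized least-squares problem on synthetic data. The problem sizes, regularization weight, and restart test are illustrative assumptions, not the authors' settings or their tomography forward model.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))         # toy under-determined forward model
x_true = np.zeros(50); x_true[[3, 17, 41]] = [1.0, -2.0, 1.5]
b = A @ x_true

def soft(v, t):
    """Soft thresholding: proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

mu = 0.01                                  # l1 weight (illustrative)
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth gradient
x = y = np.zeros(50); t = 1.0
for _ in range(500):
    x_new = soft(y - (A.T @ (A @ y - b)) / L, mu / L)
    # Gradient-based adaptive restart: drop the momentum when it points uphill.
    if np.dot(y - x_new, x_new - x) > 0:
        t, y = 1.0, x_new
    else:
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_next) * (x_new - x)
        t = t_next
    x = x_new
```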


2020, Vol 8 (2), pp. 403-413
Author(s): Yaping Hu, Liying Liu, Yujie Wang

This paper presents a Wei-Yao-Liu conjugate gradient algorithm for nonsmooth convex optimization problems. The proposed algorithm uses approximate function and gradient values of the Moreau-Yosida regularization function instead of the corresponding exact values. Under suitable conditions, the global convergence property is established for the proposed conjugate gradient method. Finally, some numerical results are reported to show the efficiency of our algorithm.
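For intuition, the Moreau-Yosida regularization of a nonsmooth convex f can be evaluated through its proximal mapping. The sketch below does this exactly for the one-dimensional example f(x) = |x|, whose envelope is the Huber function; the paper instead works with approximate function and gradient values of the regularization.

```python
def moreau_envelope_abs(x, lam):
    """Moreau-Yosida regularization of f(x) = |x|:
    F_lam(x) = min_y |y| + (y - x)^2 / (2*lam),
    evaluated via the prox of lam*f (soft thresholding)."""
    sign = 1.0 if x >= 0 else -1.0
    p = sign * max(abs(x) - lam, 0.0)            # prox_{lam f}(x)
    value = abs(p) + (p - x) ** 2 / (2 * lam)    # envelope value (Huber function)
    grad = (x - p) / lam                          # gradient of the (smooth) envelope
    return value, grad

# |x| is nonsmooth at 0, but its envelope is differentiable everywhere:
print(moreau_envelope_abs(2.0, 0.5))   # outside the kink: |x| - lam/2, slope sign(x)
print(moreau_envelope_abs(0.2, 0.5))   # near the kink: quadratic x^2/(2*lam), slope x/lam
```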


2020, Vol 40 (1), pp. 117-137
Author(s): R. Israel Ortega-Gutiérrez, H. Cruz-Suárez

This paper addresses a class of sequential optimization problems known as Markov decision processes. These processes are considered on Euclidean state and action spaces with the total expected discounted cost as the objective function. The main goal of the paper is to provide conditions that guarantee an adequate Moreau-Yosida regularization for Markov decision processes (named the original process). In this way, a new Markov decision process is established that conforms to the Markov control model of the original process except for the cost function, which is induced via the Moreau-Yosida regularization. Compared to the original process, this new discounted Markov decision process has richer properties: its optimal value function is differentiable and strictly convex, its optimal policy is unique, and both the optimal value function and the optimal policy coincide with those of the original process. To complement the theory presented, an example is provided.


2020, Vol 26, pp. 34
Author(s): Irwin Yousept

We analyze a class of hyperbolic Maxwell variational inequalities of the second kind. By means of a local boundedness assumption on the subdifferential of the underlying nonlinearity, we prove a well-posedness result, where the main tools for the proof are the semigroup theory for Maxwell's equations, the Yosida regularization and the subdifferential calculus. The second part of the paper focuses on a more general case omitting the local boundedness assumption. In this case, taking into account more regular initial data and test functions, we are able to prove a weaker existence result through the use of the minimal section operator associated with the Nemytskii operator of the governing subdifferential. Finally, we transfer the developed well-posedness results to the case involving Faraday's law, which in particular allows us to improve the regularity property of the electric field in the weak existence result.
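A one-dimensional sketch of the two objects named above, for the set-valued subdifferential T = ∂|·|: the Yosida regularization T_λ is single-valued and (1/λ)-Lipschitz, and as λ → 0 it tends pointwise to the minimal section of T (the element of T(x) of smallest norm). This is only a scalar illustration of the abstract tools, not the Maxwell setting of the paper.

```python
import numpy as np

def yosida_sign(x, lam):
    """Yosida regularization of T = ∂|·| on the real line.
    The resolvent (I + lam*T)^{-1} is soft thresholding, so
    T_lam(x) = (x - prox)/lam = clip(x/lam, -1, 1):
    single-valued and (1/lam)-Lipschitz, unlike the set-valued sign."""
    p = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    return (x - p) / lam

# As lam -> 0, T_lam(x) approaches the minimal section of ∂|·|:
# sign(x) for x != 0, and 0 (the smallest element of [-1, 1]) at x = 0.
print(yosida_sign(2.0, 0.5), yosida_sign(0.0, 0.1), yosida_sign(-3.0, 0.01))
```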


2019, Vol 29 (06), pp. 1950002
Author(s): Qiang Wu, Yu Zhang, Ju Liu, Jiande Sun, Andrzej Cichocki, ...

Event-related potentials (ERPs), especially the P300, are popular and effective features for brain–computer interface (BCI) systems based on electroencephalography (EEG). Traditional ERP-based BCI systems may perform poorly with small training samples, i.e. under the undersampling problem. In this study, the ERP classification problem was investigated, in particular in the high-dimensional setting where the number of features exceeds the number of samples. A flexible group sparse discriminative analysis algorithm based on Moreau–Yosida regularization was proposed to alleviate the undersampling problem. An optimization problem with a group sparse criterion was presented, and its optimal solution was obtained by using the regularized optimal scoring method. During the alternating iteration procedure, feature selection and classification were performed simultaneously. Two P300-based BCI datasets were used to evaluate the proposed method and compare it with existing standard methods. The experimental results indicated that the features extracted via the proposed method are efficient and provide an overall better P300 classification accuracy than several state-of-the-art methods.
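The group-sparse criterion can be illustrated by its proximal operator, block-wise soft thresholding, which zeroes or shrinks whole feature groups at once; this is why it performs feature (group) selection during the iterations. The sketch below is a generic illustration of that operator, not the authors' regularized optimal scoring iteration, and the groups and threshold are made-up values.

```python
import numpy as np

def group_soft_threshold(w, groups, t):
    """Proximal operator of t * sum_g ||w_g||_2 (the group-sparse penalty):
    each group is either zeroed out entirely or shrunk toward zero as a block."""
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * w[g]   # shrink the whole block
    return out

w = np.array([3.0, 4.0, 0.1, -0.2])
groups = [np.array([0, 1]), np.array([2, 3])]
print(group_soft_threshold(w, groups, 1.0))    # first group shrunk, second zeroed
```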


2018, Vol 39 (3), pp. 1276-1295
Author(s): L Adam, M Hintermüller, T M Surowiec

An efficient, function-space-based second-order method for the $H^1$-projection onto the Gibbs simplex is presented. The method makes use of the theory of semismooth Newton methods in function spaces as well as Moreau–Yosida regularization and techniques from parametric optimization. A path-following technique is considered for the regularization parameter updates. A rigorous first- and second-order sensitivity analysis of the value function for the regularized problem is provided to justify the update scheme. The viability of the algorithm is then demonstrated for two applications found in the literature: binary image inpainting and labeled data classification. In both cases, the algorithm exhibits mesh-independent behavior.
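For orientation, the finite-dimensional Euclidean analogue of a simplex projection can be computed in closed form by the classic sort-and-threshold rule, sketched below. The paper's contribution is the much harder $H^1$ (function-space) case, solved with semismooth Newton and Moreau-Yosida path-following, which this sketch does not attempt.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1} via sorting: find the threshold theta
    such that max(v - theta, 0) sums to one."""
    u = np.sort(v)[::-1]                    # coordinates in decreasing order
    css = np.cumsum(u)
    # largest k with u_k * k > css_k - 1 (always holds for k = 1)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

p = project_simplex(np.array([0.5, 1.2, -0.3]))
print(p, p.sum())   # nonnegative entries summing to one
```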

