Optimization Learning: Perspective, Method, and Applications

Author(s):  
Risheng Liu

Numerous tasks at the core of statistics, learning, and vision are specific cases of ill-posed inverse problems. Recently, learning-based (e.g., deep) iterative methods have been empirically shown to be useful for these problems. Nevertheless, integrating learnable structures into iterations is still a laborious process that can only be guided by intuition or empirical insight. Moreover, there is a lack of rigorous analysis of the convergence behavior of these learned iterations, so the significance of such methods remains unclear. We move beyond these limits and propose a theoretically guaranteed optimization learning paradigm, a generic and provable paradigm for nonconvex inverse problems, and develop a series of convergent deep models. Our theoretical analysis reveals that the proposed paradigm allows us to generate globally convergent trajectories for learning-based iterative methods. Owing to these guarantees, we achieve state-of-the-art performance on different real applications.
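As a minimal illustration of the structure such methods unroll (not the paper's actual models), consider a fixed-length iteration in which each step carries its own tunable step size; in optimization learning, per-step parameters like these would be trained from data rather than hand-set:

```python
# Illustrative sketch (not the paper's method): an "unrolled" iterative
# solver where each step has its own tunable parameter a_k. Training such
# parameters from data is the core idea behind learned iterative methods.
def unrolled_solver(grad, x0, step_sizes):
    """Run the fixed-length unrolled iteration x_{k+1} = x_k - a_k * grad(x_k)."""
    x = x0
    trajectory = [x]
    for a_k in step_sizes:          # each a_k could be a learned parameter
        x = x - a_k * grad(x)
        trajectory.append(x)
    return x, trajectory

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
grad = lambda x: 2.0 * (x - 3.0)
x_final, traj = unrolled_solver(grad, x0=0.0, step_sizes=[0.4] * 20)
```

The convergence guarantees discussed in the abstract concern exactly such trajectories: whether the sequence produced by the learned iteration provably approaches a solution.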


1983 ◽  
Vol 45 (5) ◽  
pp. 1237-1245 ◽  
Author(s):  
O. M. Alifanov

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Changyong Li ◽  
Yongxian Fan ◽  
Xiaodong Cai

Abstract
Background: With the development of deep learning (DL), more and more DL-based methods have been proposed and achieve state-of-the-art performance in biomedical image segmentation. However, these methods are usually complex and require the support of powerful computing resources, which makes them impractical to deploy in clinical situations. Thus, it is important to develop accurate DL-based biomedical image segmentation methods that run under resource-constrained computing.
Results: A lightweight and multiscale network called PyConvU-Net is proposed to work with low-resource computing. In strictly controlled experiments, PyConvU-Net performs well on three biomedical image segmentation tasks with the fewest parameters.
Conclusions: Our experimental results preliminarily demonstrate the potential of the proposed PyConvU-Net for biomedical image segmentation under resource-constrained computing.
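The parameter savings behind a pyramidal convolution (the building block PyConvU-Net's name suggests; the exact level/group configuration below is an assumption, not taken from the paper) can be illustrated by comparing parameter counts: output channels are split across several kernel sizes, and larger kernels use grouped convolutions so their cost stays comparable to the smallest level.

```python
# Hedged sketch: parameter count of a standard 3x3 convolution versus a
# pyramidal convolution with four kernel sizes and grouped convolutions.
# The specific (kernel, groups) pairs are illustrative assumptions.
def conv_params(c_in, c_out, k, groups=1):
    """Weights in a 2-D convolution layer (bias omitted)."""
    return (c_in // groups) * c_out * k * k

c_in, c_out = 64, 64
standard = conv_params(c_in, c_out, k=3)

# Pyramid: four levels, each producing c_out / 4 channels; larger kernels
# use more groups to keep their parameter cost in check.
pyramid = sum(conv_params(c_in, c_out // 4, k, groups=g)
              for k, g in [(3, 1), (5, 4), (7, 8), (9, 16)])

print(standard, pyramid)  # -> 36864 27072
```

Despite covering kernel sizes up to 9x9, the pyramidal layer here uses fewer parameters than a single standard 3x3 layer, which is how a multiscale network can stay lightweight.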



Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 39
Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are usually left unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the three following problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved via enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
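A similarity graph over a batch of intermediate representations can be sketched as follows (cosine similarity and a fixed threshold are assumed details here, standing in for whatever similarity measure the LGG construction actually uses):

```python
import math

# Sketch (assumed details): build an adjacency matrix over a batch of
# intermediate representations by thresholding cosine similarity, as a
# stand-in for the Latent Geometry Graph construction described above.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def latent_geometry_graph(batch, threshold=0.5):
    """Edge between two samples whose representations point the same way."""
    n = len(batch)
    return [[1 if i != j and cosine(batch[i], batch[j]) >= threshold else 0
             for j in range(n)] for i in range(n)]

# Two aligned representations and one orthogonal outlier.
batch = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
adj = latent_geometry_graph(batch)
print(adj)  # -> [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
```

Constraining how such graphs evolve between consecutive layers is then the handle used for distillation, embedding design, and robustness.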



2019 ◽  
Vol 9 (13) ◽  
pp. 2684 ◽  
Author(s):  
Hongyang Li ◽  
Lizhuang Liu ◽  
Zhenqi Han ◽  
Dan Zhao

Peeling fibre is an indispensable process in the production of preserved Szechuan pickle, and its accuracy can significantly influence product quality; we therefore study contour-based fibre detection, the core algorithm of the automatic peeling device. The fibre contour is a non-salient contour, characterized by large intra-class differences and small inter-class differences, meaning that the contour features are not discriminative. A method called Dilated Holistically-Nested Edge Detection (Dilated-HED), built on the HED network and dilated convolution, is proposed to detect the fibre contour. The experimental results on our dataset show a Pixel Accuracy (PA) of 99.52% and a Mean Intersection over Union (MIoU) of 49.99%, achieving state-of-the-art performance.
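The effect of dilation, the ingredient Dilated-HED adds to HED, can be shown in one dimension: a kernel with dilation d samples inputs d positions apart, enlarging the receptive field without adding parameters.

```python
# Sketch of 1-D dilated convolution (valid padding, no learned weights):
# with dilation d, a kernel of length m spans (m - 1) * d + 1 inputs.
def dilated_conv1d(signal, kernel, dilation):
    span = (len(kernel) - 1) * dilation      # receptive field minus one
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[j] * signal[i + j * dilation]
                       for j in range(len(kernel))))
    return out

x = [1, 2, 3, 4, 5, 6]
print(dilated_conv1d(x, [1, 1, 1], dilation=1))  # -> [6, 9, 12, 15]
print(dilated_conv1d(x, [1, 1, 1], dilation=2))  # -> [9, 12]
```

With dilation 2, the same 3-tap kernel covers a 5-sample window, which is why stacking dilated layers grows context quickly while keeping the model small.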





2015 ◽  
Vol 15 (3) ◽  
pp. 373-389
Author(s):  
Oleg Matysik ◽  
Petr Zabreiko

Abstract. The paper deals with iterative methods for solving linear operator equations ${x = Bx + f}$ and ${Ax = f}$ with self-adjoint operators in Hilbert space X in the critical case when ${\rho (B) = 1}$ and ${0 \in \operatorname{Sp} A}$. The results obtained are based on a theorem by M. A. Krasnosel'skii on the convergence of the successive approximations, their modifications and refinements.
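The successive-approximation scheme itself is simple to sketch in the easy scalar case |B| < 1 (the paper treats the much harder critical case rho(B) = 1, where plain iteration need not converge and the modifications discussed above are required):

```python
# Minimal sketch of successive approximations x_{k+1} = B x_k + f for a
# scalar contraction |B| < 1. This is the textbook convergent case, not
# the critical case rho(B) = 1 analyzed in the paper.
def successive_approximations(B, f, x0=0.0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = B * x + f
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Fixed point of x = 0.5 x + 1 is x = 2.
x_star = successive_approximations(0.5, 1.0)
```

When the contraction constant approaches 1, the error shrinks ever more slowly, which hints at why the boundary case rho(B) = 1 needs a separate convergence theory.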



Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 674
Author(s):  
Kushani De Silva ◽  
Carlo Cafaro ◽  
Adom Giffin

Attaining reliable gradient profiles is of utmost relevance for many physical systems. In many situations, the estimation of the gradient is inaccurate due to noise. It is common practice to first estimate the underlying system and then compute the gradient profile by taking the subsequent analytic derivative of the estimated system. The underlying system is often estimated by fitting or smoothing the data using other techniques. Taking the subsequent analytic derivative of an estimated function can be ill-posed. This becomes worse as the noise in the system increases. As a result, the uncertainty generated in the gradient estimate increases. In this paper, a theoretical framework for a method to estimate the gradient profile of discrete noisy data is presented. The method was developed within a Bayesian framework. Comprehensive numerical experiments were conducted on synthetic data at different levels of noise. The accuracy of the proposed method was quantified. Our findings suggest that the proposed gradient profile estimation method outperforms the state-of-the-art methods.
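The ill-posedness described above (not the paper's Bayesian remedy) is easy to demonstrate: differentiating noisy samples by finite differences amplifies noise of size eps by roughly eps / h, so the derivative error can dwarf the error in the data itself.

```python
import math
import random

# Illustration of why gradient estimation from noisy data is ill-posed:
# central differences on clean samples of sin(x) are accurate to O(h^2),
# but adding tiny uniform noise blows the derivative error up by ~eps/h.
random.seed(0)
h, eps = 0.01, 0.001
xs = [i * h for i in range(200)]
clean = [math.sin(x) for x in xs]
noisy = [y + random.uniform(-eps, eps) for y in clean]

def fd(ys, h):
    """Central-difference derivative estimates at interior points."""
    return [(ys[i + 1] - ys[i - 1]) / (2 * h) for i in range(1, len(ys) - 1)]

err_clean = max(abs(d - math.cos(x)) for d, x in zip(fd(clean, h), xs[1:-1]))
err_noisy = max(abs(d - math.cos(x)) for d, x in zip(fd(noisy, h), xs[1:-1]))
```

Here the noise is a thousand times smaller than the signal, yet the worst-case derivative error grows by orders of magnitude, which is the failure mode a principled (e.g., Bayesian) gradient estimator must control.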



Author(s):  
C. W. Groetsch ◽  
Martin Hanke

Abstract A simple numerical method for some one-dimensional inverse problems of model identification type arising in nonlinear heat transfer is discussed. The essence of the method is to express the nonlinearity in terms of an integro-differential operator, the values of which are approximated by a linear spline technique. The inverse problems are mildly ill-posed and therefore call for regularization when data errors are present. A general technique for stabilization of unbounded operators may be applied to regularize the process and a specific regularization technique is illustrated on a model problem.
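The need for regularization in mildly ill-posed problems can be seen in the smallest possible example. The sketch below uses generic Tikhonov regularization (not the specific stabilization technique for unbounded operators described above): for a scalar equation a*x = b with tiny a, the naive solution b/a amplifies any data error by 1/a, while the regularized solution stays bounded at the cost of some bias.

```python
# Generic Tikhonov sketch: scalar ill-posed problem a * x = b with a ~ 0.
# Naive inversion amplifies the data error by 1/a; the regularized
# solution a*b / (a^2 + lam) remains bounded (but biased).
a, x_true = 1e-6, 2.0
b_exact = a * x_true
b_noisy = b_exact + 1e-4          # small perturbation of the data

naive = b_noisy / a               # data error amplified by 1/a = 1e6
lam = 1e-4
regularized = a * b_noisy / (a**2 + lam)
```

Choosing lam is the usual trade-off: too small and noise amplification returns, too large and the bias dominates; the model problem in the text illustrates one concrete way to make that choice.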



Author(s):  
Mingyong Zhou

Background: Complex inverse problems such as radar imaging and CT/EIT imaging are well investigated in mathematical algorithms with various regularization methods. However, it is difficult to obtain stable inverse solutions with fast convergence and high accuracy at the same time, due to the ill-posed and non-linear properties of these problems.
Objective: In this paper, we propose a hierarchical and multi-resolution scalable method, from both an algorithm perspective and a hardware perspective, to achieve fast and accurate solutions for inverse problems, taking radar and EIT imaging as examples.
Method: We extend the discussion of neuromorphic computing as a brain-inspired computing method, and of the associated learning/training algorithms, to design a series of problem-specific AI "brains" (with different memristive values) that solve general complex ill-posed inverse problems traditionally solved by mathematical regularization operators. We design a hierarchical and multi-resolution scalable method and an algorithmic framework to train an AI deep learning neural network and map it onto the memristive circuit so that the memristive values are optimally obtained. We also propose FPGA as an emulation implementation of the neuromorphic circuit.
Result: We compared our approach with the traditional regularization method. In particular, we use Electrical Impedance Tomography (EIT) and radar imaging as typical examples to compare how to design AI deep learning neural network architectures for solving inverse problems.
Conclusion: With EIT imaging as a typical example, we show that for any moderately complex inverse problem that can be described as a combinatorial problem, an AI deep learning neural network is a practical alternative approach for solving the inverse problem to any given expected resolution accuracy, provided that the network is wide enough and the computational power is strong enough to train on all combination samples.
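One (assumed) step of mapping a trained network onto a memristive circuit is quantizing each learned weight onto the finite set of conductance levels the hardware can realize; the function name and level set below are illustrative, not taken from the paper.

```python
# Hedged sketch: snap trained weights to the nearest available memristive
# conductance level. The level set and helper name are assumptions for
# illustration only.
def to_conductance(weights, levels):
    """Map each weight to the nearest realizable conductance level."""
    return [min(levels, key=lambda g: abs(g - w)) for w in weights]

levels = [0.0, 0.25, 0.5, 0.75, 1.0]
print(to_conductance([0.12, 0.61, 0.98], levels))  # -> [0.0, 0.5, 1.0]
```

The quantization error this introduces is one reason the training loop and the hardware mapping have to be co-designed, as the Method paragraph describes.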



2021 ◽  
pp. 1-21
Author(s):  
Andrei C. Apostol ◽  
Maarten C. Stol ◽  
Patrick Forré

We propose a novel pruning method which uses the oscillations around 0 (i.e., sign flips) that a weight undergoes during training to determine its saliency. Our method can perform pruning before the network has converged, requires little tuning effort thanks to good default hyperparameter values, and can directly target the level of sparsity desired by the user. Our experiments, performed on a variety of object classification architectures, show that it is competitive with existing methods and achieves state-of-the-art performance at sparsity levels of 99.6% and above for two of the three architectures tested. Moreover, we demonstrate that our method is compatible with quantization, another model compression technique. For reproducibility, we release our code at https://github.com/AndreiXYZ/flipout.
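The sign-flip criterion can be sketched as follows (details assumed; see the linked repository for the actual saliency definition): count how often each weight changes sign across training checkpoints, then prune the weights that oscillate around zero the most.

```python
# Sketch (assumed details) of sign-flip-based pruning: weights whose
# trajectories flip sign most often are treated as least salient.
def sign_flips(history):
    """Number of sign changes along one weight's training trajectory."""
    signs = [1 if w > 0 else -1 for w in history]
    return sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))

def prune_mask(trajectories, sparsity):
    """Return a 0/1 mask zeroing the most-oscillating fraction of weights."""
    flips = [sign_flips(t) for t in trajectories]
    order = sorted(range(len(flips)), key=lambda i: flips[i], reverse=True)
    mask = [1] * len(flips)
    for i in order[:int(len(flips) * sparsity)]:
        mask[i] = 0
    return mask

trajs = [[0.5, 0.6, 0.7],        # stable weight: 0 flips
         [0.1, -0.1, 0.1],       # oscillates around 0: 2 flips
         [-0.3, -0.2, -0.4]]     # stable weight: 0 flips
print(prune_mask(trajs, sparsity=1/3))  # -> [1, 0, 1]
```

Because the criterion only needs checkpoint signs, it can be evaluated before convergence, which matches the early-pruning claim above.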


