smoothness assumption
Recently Published Documents

TOTAL DOCUMENTS: 34 (FIVE YEARS: 11)
H-INDEX: 6 (FIVE YEARS: 1)

Author(s): Roberto Andreani, Walter Gómez, Gabriel Haeser, Leonardo M. Mito, Alberto Ramos

Sequential optimality conditions play a major role in proving stronger global convergence results of numerical algorithms for nonlinear programming. Several extensions are described in conic contexts, in which many open questions have arisen. In this paper, we present new sequential optimality conditions in the context of a general nonlinear conic framework, which explains and improves several known results for specific cases, such as semidefinite programming, second-order cone programming, and nonlinear programming. In particular, we show that feasible limit points of sequences generated by the augmented Lagrangian method satisfy the so-called approximate gradient projection optimality condition and, under an additional smoothness assumption, the so-called complementary approximate Karush–Kuhn–Tucker condition. The first result was unknown even for nonlinear programming, and the second one was unknown, for instance, for semidefinite programming.
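
To fix ideas, here is the flavor of such a sequential condition in the classical nonlinear programming setting: a sketch of the standard CAKKT condition (not the paper's conic formulation). For min f(x) subject to g_i(x) ≤ 0, a feasible point x* satisfies CAKKT if there exist sequences x^k → x* and multipliers λ^k ≥ 0 such that

```latex
\begin{align*}
  \nabla f(x^k) + \sum_i \lambda_i^k \nabla g_i(x^k) &\to 0,
  &
  \lambda_i^k\, g_i(x^k) &\to 0 \quad \text{for every } i .
\end{align*}
```

The conditions presented in the paper generalize statements of this kind to general conic constraints.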


PLoS ONE, 2021, Vol 16 (9), pp. e0257455
Author(s): Simon N. Wood, Ernst C. Wit

Detail is a double-edged sword in epidemiological modelling. The inclusion of mechanistic detail in models of highly complex systems has the potential to increase realism, but it also increases the number of modelling assumptions, which become harder to check as their possible interactions multiply. In a major study of the Covid-19 epidemic in England, Knock et al. (2020) fit an age-structured SEIR model with added health service compartments to data on deaths, hospitalisation and test results from Covid-19 in seven English regions for the period March to December 2020. The simplest version of the model has 684 states per region. One main conclusion is that only full lockdowns brought the pathogen reproduction number, R, below one, with R ≫ 1 in all regions on the eve of the March 2020 lockdown. We critically evaluate the Knock et al. epidemiological model, and the semi-causal conclusions drawn using it, based on an independent reimplementation of the model designed to allow relaxation of some of its strong assumptions. In particular, Knock et al. model the effect on transmission of both non-pharmaceutical interventions and other effects, such as weather, using a piecewise linear function, b(t), with 12 breakpoints at selected government announcement or intervention dates. We replace this representation by a smoothing spline with time-varying smoothness, thereby allowing the form of b(t) to be substantially more data-driven, and we check that the corresponding smoothness assumption is not driving our results. We also reset the mean incubation time and the time from first symptoms to hospitalisation, used in the model, to the values implied by the papers cited by Knock et al. as the source of these quantities. We conclude that there is no sound basis for using the Knock et al. model and their analysis to make counterfactual statements about the number of deaths that would have occurred with different lockdown timings. However, if fits of this epidemiological model structure are viewed as a reasonable basis for inference about the time course of incidence and R, then, without very strong modelling assumptions, the pathogen reproduction number was probably below one, and incidence in substantial decline, some days before either of the first two English national lockdowns. This result coincides with that obtained by more direct attempts to reconstruct incidence. Of course it does not imply that lockdowns had no effect, but it does suggest that other non-pharmaceutical interventions (NPIs) may have been much more effective than Knock et al. imply, and that full lockdowns were probably not the cause of R dropping below one.
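
As a purely illustrative sketch (not the authors' implementation; synthetic dates and values, and a fixed-smoothness spline standing in for the time-varying-smoothness spline used in the paper), the two representations of the transmission modifier b(t) can be contrasted as follows:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Illustrative only: made-up breakpoint dates and values, not the Knock et al. data.
days = np.arange(300, dtype=float)
breakpoints = np.array([0., 23, 50, 80, 110, 140, 170, 200, 230, 260, 299])
b_at_breaks = np.array([1.0, 0.3, 0.35, 0.4, 0.5, 0.6, 0.55, 0.7, 0.4, 0.45, 0.5])

# 1) piecewise-linear b(t) with breakpoints fixed at announced intervention dates
b_piecewise = np.interp(days, breakpoints, b_at_breaks)

# 2) a smoothing spline fitted to (noisy) evidence about b(t); the smoothing
#    parameter s lets the shape of b(t) be driven by the data rather than by
#    pre-specified breakpoints
rng = np.random.default_rng(0)
evidence = b_piecewise + 0.05 * rng.standard_normal(days.size)
b_spline = UnivariateSpline(days, evidence, s=days.size * 0.05**2)(days)
```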


Risks, 2021, Vol 9 (1), pp. 23
Author(s): Annalena Mickel, Andreas Neuenkirch

Inspired by the article "Weak Convergence Rate of a Time-Discrete Scheme for the Heston Stochastic Volatility Model" (Chao Zheng, SIAM Journal on Numerical Analysis, 2017, 55:3, 1243–1263), we studied the weak error of discretization schemes for the Heston model that are based on exact simulation of the underlying volatility process. Both for an Euler- and a trapezoidal-type scheme for the log-asset price, we established weak order one for smooth payoffs, without any assumptions on the Feller index of the volatility process. In our analysis, we also observed the usual trade-off between the smoothness assumption on the payoff and the restriction on the Feller index. Moreover, we provided error expansions, which could be used to construct second-order schemes via extrapolation. In this paper, we illustrate our theoretical findings by several numerical examples.
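
As a hedged sketch of the kind of scheme analysed (exact sampling of the CIR variance via its noncentral chi-square transition combined with an Euler-type step for the log-asset price; function name and parameter choices are illustrative, not the authors' code):

```python
import numpy as np
from scipy.stats import ncx2

def heston_exact_vol_euler(S0, V0, r, kappa, theta, sigma, rho, T, n_steps, n_paths, seed=0):
    """Illustrative sketch: exact CIR variance sampling + Euler-type log-price step."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(n_paths, np.log(S0))
    V = np.full(n_paths, V0)
    d = 4.0 * kappa * theta / sigma**2                         # CIR degrees of freedom
    c = sigma**2 * (1.0 - np.exp(-kappa * dt)) / (4.0 * kappa)  # CIR scale factor
    for _ in range(n_steps):
        lam = V * np.exp(-kappa * dt) / c                       # noncentrality parameter
        V_next = c * ncx2.rvs(d, lam, size=n_paths, random_state=rng)  # exact variance step
        # left-point (Euler-type) rule for the integrated variance; a
        # trapezoidal-type scheme would use 0.5 * (V + V_next) * dt instead
        int_V = V * dt
        # Broadie-Kaya identity for the variance-driven Brownian integral
        int_sqrtV_dW = (V_next - V - kappa * theta * dt + kappa * int_V) / sigma
        Z = rng.standard_normal(n_paths)
        X += (r * dt - 0.5 * int_V + rho * int_sqrtV_dW
              + np.sqrt(1.0 - rho**2) * np.sqrt(int_V) * Z)
        V = V_next
    return np.exp(X)
```

The weak error for a smooth payoff could then be estimated by Monte Carlo averaging of the payoff over the returned paths.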


Author(s): M. Höllmann, M. Mehltretter, C. Heipke

Abstract. In the present work, an uncertainty-driven, geometry-based regularisation for the task of dense stereo matching is presented. The objective of the regularisation is to reduce ambiguities in the depth reconstruction process, which exist due to the ill-posed nature of this task. Based on cost and uncertainty information computed beforehand, pixels are selected whose depth information can be determined correctly with high probability. This depth information, assumed to be of high confidence, is initially used to construct a triangle mesh, which is interpreted as a surface approximation of the imaged scene and allows the confident depth information of the triangle vertices to be propagated within local neighbourhoods. The proposed method further computes confidence scores for the propagated depth estimates, which are used to fuse this depth information with the previously computed cost information, introducing a regularisation into the data term of global optimisation methods. Furthermore, based on the propagated depth information, the local smoothness assumption of global optimisation methods is adjusted: instead of fronto-parallel planes, the method presumes planes that are parallel to the propagated depth information. The performance of the proposed regularisation approach is evaluated in combination with a global optimisation method. For a quantitative and qualitative evaluation, two commonly employed and well-established stereo datasets are used. The proposed method shows significant improvements in accuracy on both datasets and for two different cost computation methods. Especially in unstructured areas, artefacts in the disparity maps are reduced.
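
A minimal sketch of the mesh-based propagation step, under assumed inputs (a dense depth map plus a per-pixel confidence map; the threshold and function name are hypothetical, not the authors' implementation):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def propagate_confident_depth(depth, confidence, tau=0.9):
    """Triangulate high-confidence pixels and interpolate their depth over the image,
    giving a piecewise-planar depth prior that can regularise the matching cost."""
    ys, xs = np.nonzero(confidence >= tau)           # seed pixels with confident depth
    pts = np.column_stack([xs, ys]).astype(float)
    tri = Delaunay(pts)                              # triangle mesh over the seed pixels
    interp = LinearNDInterpolator(tri, depth[ys, xs])
    h, w = depth.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    prior = interp(gx, gy)                           # NaN outside the mesh hull
    return prior
```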


2020, Vol 34 (05), pp. 7618-7625
Author(s): Yong Dai, Jian Liu, Xiancong Ren, Zenglin Xu

Multi-source unsupervised domain adaptation (MS-UDA) for sentiment analysis (SA) aims to leverage useful information from multiple source domains to help perform SA in an unlabeled target domain that has no supervised information. Existing MS-UDA algorithms either exploit only the shared features, i.e., the domain-invariant information, or rely on weak assumptions in NLP, e.g., the smoothness assumption. To avoid these problems, we propose two transfer learning frameworks based on the multi-source domain adaptation methodology for SA, which combine the source hypotheses to derive a good target hypothesis. The first is a novel Weighting-Scheme-based Unsupervised Domain Adaptation framework (WS-UDA), which combines the source classifiers to acquire pseudo labels for target instances directly. The second is a Two-Stage-Training-based Unsupervised Domain Adaptation framework (2ST-UDA), which further exploits these pseudo labels to train a target-private feature extractor. Importantly, the weights assigned to each source classifier are based on the relations between target instances and source domains, which are measured by a discriminator through adversarial training. Furthermore, through the same discriminator, we also achieve the separation of shared and private features. Experimental results on two SA datasets demonstrate the promising performance of our frameworks, which outperform unsupervised state-of-the-art competitors.
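
A rough sketch of the weighting idea in WS-UDA as described above (array shapes and the softmax over discriminator scores are assumptions made for illustration, not the authors' exact formulation):

```python
import numpy as np

def ws_uda_pseudo_labels(source_probs, domain_scores):
    """Combine source classifiers' class probabilities with per-instance weights
    derived from how close each target instance is to each source domain."""
    # source_probs:  (n_sources, n_targets, n_classes) class probabilities
    # domain_scores: (n_targets, n_sources), e.g. from an adversarially trained discriminator
    w = np.exp(domain_scores - domain_scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # softmax weights per target instance
    mixed = np.einsum('ts,stc->tc', w, source_probs)  # weighted combination of classifiers
    return mixed.argmax(axis=1)                       # pseudo labels for target instances
```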


2020, Vol 34 (04), pp. 6534-6541
Author(s): Yuanyuan Xu, Jun Wang, Jinmao Wei

In multi-label learning, instances have a large number of noisy and irrelevant features, and each instance is associated with a set of class labels whose information is generally incomplete. These missing labels are like the two sides of a coin: one cannot tell in advance whether the information they provide for feature selection will be favorable (relevant) or not (irrelevant). Existing approaches either superficially treat the missing labels as negative or indiscriminately impute them with predicted values, which may either overestimate unobserved labels or introduce new noise into the selection of discriminative features. To avoid the pitfalls of missing labels, a novel unified framework for selecting discriminative features and modeling the incomplete label matrix is proposed in this paper from a generative point of view. Concretely, we relax the smoothness assumption to infer label observability, which can reveal the positions of unobserved labels, and employ a spike-and-slab prior to perform feature selection by excluding unobserved labels. A data-augmentation strategy leads to full local conjugacy in our model, facilitating a simple and efficient Expectation-Maximization (EM) algorithm for inference. Quantitative and qualitative experimental results demonstrate the superiority of the proposed approach under various evaluation metrics.
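
For reference, a generic spike-and-slab prior on a feature weight w_j has the following form (an illustration of the standard prior only; the paper couples it with the inferred label observability):

```latex
\begin{align*}
  \gamma_j &\sim \mathrm{Bernoulli}(\pi), \\
  w_j \mid \gamma_j &\sim \gamma_j\,\mathcal{N}(0,\sigma^2) + (1-\gamma_j)\,\delta_0 ,
\end{align*}
```

so that features whose indicator γ_j is inferred to be zero are effectively excluded from the model.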


2020, Vol 224, pp. 01027
Author(s): P. V. Belyakov, M. B. Nikiforov, E. R. Muratov, O. V. Melnik

Optical flow computation is one of the most important tasks in computer vision. The article deals with a modification of the variational method of optical flow computation for its application in stereo vision. Such approaches are traditionally based on a brightness constancy assumption and a gradient constancy assumption during pixel motion. A smoothness assumption also restricts motion discontinuities, i.e., the vector field of pixel velocities is assumed to be smooth. It is proposed to extend the optical flow functional in a similar way by adding the a priori known extrinsic parameters of the stereo cameras and to minimize the resulting joint model of optical flow computation. The article presents a partial differential equation framework for image processing and a numerical scheme for its implementation. The experimental evaluation demonstrates that the proposed method gives smaller errors than traditional methods of optical flow computation.
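
For context, a classical variational flow energy of the kind extended here (Brox et al.-style; shown without the stereo term added by the authors) combines the three assumptions mentioned above, with flow w = (u, v):

```latex
E(u,v) = \int_\Omega \Psi\!\Big( |I(\mathbf{x}+\mathbf{w})-I(\mathbf{x})|^2
         + \gamma\,|\nabla I(\mathbf{x}+\mathbf{w})-\nabla I(\mathbf{x})|^2 \Big)\,d\mathbf{x}
       + \alpha \int_\Omega \Psi\!\big( |\nabla u|^2 + |\nabla v|^2 \big)\,d\mathbf{x},
\qquad \Psi(s^2)=\sqrt{s^2+\varepsilon^2},
```

where the first two terms encode brightness and gradient constancy and the last term is the smoothness penalty.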


Author(s): B. Ruf, T. Pollok, M. Weinmann

Abstract. Online augmentation of an oblique aerial image sequence with structural information is an essential aspect of the process of 3D scene interpretation and analysis. One key aspect in this is efficient dense image matching and depth estimation. Here, the Semi-Global Matching (SGM) approach has proven to be one of the most widely used algorithms for efficient depth estimation, providing a good trade-off between accuracy and computational complexity. However, SGM only models a first-order smoothness assumption, thus favoring fronto-parallel surfaces. In this work, we present a hierarchical algorithm that allows for efficient depth and normal map estimation together with confidence measures for each estimate. Our algorithm relies on a plane-sweep multi-image matching followed by an extended SGM optimization that incorporates local surface orientations, thus achieving more consistent and accurate estimates in areas made up of slanted surfaces, inherent to oblique aerial imagery. We evaluate numerous configurations of our algorithm on two different datasets using an absolute and a relative accuracy measure. In our evaluation, we show that the results of our approach are comparable to those achieved by refined Structure-from-Motion (SfM) pipelines, such as COLMAP, which are designed for offline processing. In contrast, however, our approach only considers a confined image bundle of an input sequence, thus allowing online and incremental computation at 1 Hz–2 Hz.
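
For context, the standard SGM path recursion (Hirschmüller's formulation, which encodes the first-order, fronto-parallel smoothness preference that the extension described above relaxes) is:

```latex
L_r(\mathbf{p},d) = C(\mathbf{p},d)
  + \min\!\Big( L_r(\mathbf{p}-\mathbf{r},d),\;
                L_r(\mathbf{p}-\mathbf{r},d-1)+P_1,\;
                L_r(\mathbf{p}-\mathbf{r},d+1)+P_1,\;
                \min_k L_r(\mathbf{p}-\mathbf{r},k)+P_2 \Big)
  - \min_k L_r(\mathbf{p}-\mathbf{r},k),
```

where C(p, d) is the matching cost and P_1, P_2 penalise small and large disparity changes along path direction r.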


2019, Vol 17 (05), pp. 773-818
Author(s): Yi Xu, Qihang Lin, Tianbao Yang

In this paper, a new theory is developed for first-order stochastic convex optimization, showing that the global convergence rate is sufficiently quantified by a local growth rate of the objective function in a neighborhood of the optimal solutions. In particular, if the objective function F(w) in the ε-sublevel set grows as fast as ‖w − w*‖^{1/θ}, where w* represents the closest optimal solution to w and θ ∈ (0, 1] quantifies the local growth rate, the iteration complexity of first-order stochastic optimization for achieving an ε-optimal solution can be Õ(1/ε^{2(1−θ)}), which is optimal at most up to a logarithmic factor. To achieve this faster global convergence, we develop two different accelerated stochastic subgradient methods by iteratively solving the original problem approximately in a local region around a historical solution, with the size of the local region gradually decreasing as the solution approaches the optimal set. Besides the theoretical improvements, this work also includes new contributions toward making the proposed algorithms practical: (i) we present practical variants of the accelerated stochastic subgradient methods that can run without knowledge of the multiplicative growth constant and even of the growth rate θ; (ii) we consider a broad family of problems in machine learning to demonstrate that the proposed algorithms enjoy faster convergence than the traditional stochastic subgradient method. We also characterize the complexity of the proposed algorithms for ensuring the gradient is small without the smoothness assumption.
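
A hedged sketch of the restart idea behind such accelerated stochastic subgradient methods (function and parameter names are hypothetical, and the halving schedule is a placeholder rather than the paper's exact schedule):

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto the ball of given radius around center."""
    d = x - center
    n = np.linalg.norm(d)
    return center + d * (radius / n) if n > radius else x

def restarted_ssg(subgrad, x0, n_stages, n_inner, eta0, radius0, rng):
    """Run projected stochastic subgradient descent in stages; after each stage,
    restart from the averaged iterate and shrink both the step size and the
    local region, mirroring the 'gradually decreasing local region' idea."""
    x_start = np.asarray(x0, dtype=float)
    eta, radius = eta0, radius0
    for _ in range(n_stages):
        center = x_start.copy()
        x = center.copy()
        avg = np.zeros_like(x)
        for t in range(n_inner):
            g = subgrad(x, rng)                       # stochastic subgradient oracle
            x = project_ball(x - eta * g, center, radius)
            avg += (x - avg) / (t + 1)                # running average of iterates
        x_start = avg
        eta *= 0.5                                    # halve the step size ...
        radius *= 0.5                                 # ... and the local region
    return x_start
```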

