piecewise linear function
Recently Published Documents


TOTAL DOCUMENTS: 125 (five years: 28)

H-INDEX: 13 (five years: 1)

2021 ◽  
Author(s):  
Danyu Lin ◽  
Donglin Zeng ◽  
Yu Gu ◽  
Thomas Fleming ◽  
Phillip Krause

Decision-making about booster dosing for COVID-19 vaccine recipients hinges on reliable methods for evaluating the longevity of vaccine protection. We show that modeling the log hazard ratio of the vaccine effect as a piecewise linear function of time since vaccination yields more reliable estimates of vaccine effectiveness at the end of an observation period and more reliably detects plateaus in protective effectiveness than the traditional method of estimating a constant vaccine effect over each time period. This approach will be useful for analyzing data on COVID-19 vaccines and other vaccines for which a rapid and reliable understanding of vaccine effectiveness over time is desired.
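As a rough illustration of the modeling idea (not the authors' estimation procedure), the sketch below evaluates a piecewise linear log hazard ratio at hypothetical knots and converts it to vaccine effectiveness via VE(t) = 1 − exp(log HR(t)); the knot times and values are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

# Hypothetical knots (months since vaccination) and log hazard ratios at the
# knots -- illustrative values only, not estimates from the paper.
knots_months = np.array([0.0, 2.0, 4.0, 6.0])
log_hr_at_knots = np.array([-3.0, -2.6, -2.1, -2.0])

def log_hr(t):
    """Piecewise linear log hazard ratio (vaccine vs. placebo) at time t."""
    return np.interp(t, knots_months, log_hr_at_knots)

def vaccine_effectiveness(t):
    """VE(t) = 1 - hazard ratio(t), with the log HR modeled piecewise linearly."""
    return 1.0 - np.exp(log_hr(t))

t = np.linspace(0.0, 6.0, 7)
print(np.round(vaccine_effectiveness(t), 3))

# A plateau in protection corresponds to a final segment whose slope is ~0.
last_slope = (log_hr_at_knots[-1] - log_hr_at_knots[-2]) / (knots_months[-1] - knots_months[-2])
print("slope of final segment (per month):", last_slope)
```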


Author(s):  
John Alasdair Warwicker ◽  
Steffen Rebennack

The problem of fitting continuous piecewise linear (PWL) functions to discrete data has applications in pattern recognition and engineering, amongst many other fields. To find an optimal PWL function, the positioning of the breakpoints connecting adjacent linear segments must not be constrained; they should be allowed to be placed freely. Although the univariate PWL fitting problem has often been approached from a global optimisation perspective, two mixed-integer linear programming approaches that solve for optimal PWL functions have recently been presented. In this paper, we compare the two approaches: the first was presented by Rebennack and Krasko [Rebennack S, Krasko V (2020) Piecewise linear function fitting via mixed-integer linear programming. INFORMS J. Comput. 32(2):507–530] and the second by Kong and Maravelias [Kong L, Maravelias CT (2020) On the derivation of continuous piecewise linear approximating functions. INFORMS J. Comput. 32(3):531–546]. Both formulations are similar in that they use binary variables and logical implications modelled by big-M constructs to ensure the continuity of the PWL function, yet the former model uses fewer binary variables. We present experimental results comparing the time taken to find optimal PWL functions with differing numbers of breakpoints across 10 data sets for three different objective functions. Although neither formulation is superior on all data sets, the computational results suggest that the formulation presented by Rebennack and Krasko is faster, which might be explained by the fact that it contains fewer complicating binary variables and sparser constraints. Summary of Contribution: This paper presents a comparison of the mixed-integer linear programming models presented in two recent studies published in the INFORMS Journal on Computing. Because of the similarity of the two formulations, it is not clear which one is preferable. We present a detailed comparison of the two formulations, including a series of comparative experimental results across 10 data sets that appeared across both papers. We hope that our results will allow readers to take an objective view as to which implementation they should use.
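As a generic sketch of the big-M device both formulations rely on (not a reproduction of either paper's model), suppose a binary variable z_ij indicates that data point i is assigned to segment j with slope a_j and intercept b_j; the implication "if z_ij = 1 then the fitted value equals a_j x_i + b_j" can then be linearized with a sufficiently large constant M:

```latex
% Generic big-M linking of point-to-segment assignment and fitted values;
% the breakpoint-placement and continuity constraints of the two papers are
% built from the same device but are not reproduced here.
\begin{aligned}
  -M\,(1 - z_{ij}) \;\le\; \hat{y}_i - (a_j x_i + b_j) \;\le\; M\,(1 - z_{ij})
  &\qquad \forall i, j,\\
  \sum_{j} z_{ij} = 1, \qquad z_{ij} \in \{0, 1\}
  &\qquad \forall i .
\end{aligned}
```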


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3205
Author(s):  
Robin Dee ◽  
Armin Fügenschuh ◽  
George Kaimakamis

We describe the problem of re-balancing a number of units distributed over a geographic area. Each unit consists of a number of components. A value between 0 and 1 describes the current rating of each component. By a piecewise linear function, this value is converted into a nominal status assessment. The lowest of the statuses determines the efficiency of a unit, and the highest status its cost. An unbalanced unit has a gap between these two. To re-balance the units, components can be transferred. The goal is to maximize the efficiency of all units; on a secondary level, the cost of the re-balancing should be minimal. We present a mixed-integer nonlinear programming formulation for this problem, which describes the potential movement of components as a multi-commodity flow. The piecewise linear functions needed to obtain the status values are reformulated using inequalities and binary variables. This results in a mixed-integer linear program, and standard numerical solvers are able to compute proven optimal solutions for instances with up to 100 units. We present numerical solutions for a set of open test instances and a bi-criteria objective function, and discuss the trade-off between cost and efficiency.
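A minimal sketch of the rating-to-status conversion and the resulting efficiency/cost gap, assuming hypothetical breakpoints (the paper's actual piecewise linear functions and status scale will differ):

```python
import numpy as np

# Hypothetical breakpoints of the rating-to-status conversion.
rating_breakpoints = np.array([0.0, 0.3, 0.7, 1.0])
status_levels      = np.array([0.0, 1.0, 2.0, 4.0])

def status(rating):
    """Convert a component rating in [0, 1] to a nominal status (piecewise linear)."""
    return np.interp(rating, rating_breakpoints, status_levels)

# A unit is a collection of component ratings; its efficiency is driven by the
# lowest status and its cost by the highest, so the gap measures imbalance.
unit = np.array([0.2, 0.8, 0.5])
statuses = status(unit)
efficiency, cost = statuses.min(), statuses.max()
print(f"efficiency status {efficiency:.1f}, cost status {cost:.1f}, gap {cost - efficiency:.1f}")
```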


2021 ◽  
Vol 2131 (3) ◽  
pp. 032037
Author(s):  
I N Cherednichenko

Abstract We propose a new type of neuron based on the properties of the Fourier transform. This new type of neuron, called the Fourier neuron (F-neuron), simplifies a range of problems in building self-organizing networks with unsupervised learning. Applying the F-neuron improves the quality and efficiency of automatic clustering of objects. We describe the basic principles and approaches that allow the feature vector to be treated as a parametric piecewise linear function, which makes it possible to operate on Fourier images of both the input objects and the learning weights. The reasons for moving information processing into Fourier space are justified, and the automatic orthogonalization and ranking of the Fourier image of the feature vector are explained. The advantages of a statistical approach to neuron training and of constructing a refined neuron state function from the parameters of a normal distribution are analyzed. We describe the procedure for training and pre-training the F-neuron, which uses a statistical model based on the parameters of a normal distribution to calculate a confidence interval, and an algorithm for recalculating the normal-distribution parameters when a new sample is added to the cluster. We review some results of the F-neuron technology and compare it with a traditional perceptron. A list of references and citations to the author's previous works is given below.
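A minimal sketch of two of the ingredients mentioned above: taking the Fourier image of a feature vector (here simply via a real FFT, used as a stand-in for the paper's construction) and incrementally updating normal-distribution parameters as samples join a cluster (the standard Welford recurrence; the paper's exact update rules may differ):

```python
import numpy as np

def fourier_image(features):
    """Treat the feature vector as a sampled signal and take its Fourier image."""
    return np.fft.rfft(np.asarray(features, dtype=float))

class RunningNormal:
    """Incremental (Welford-style) update of mean and variance of a cluster."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningNormal()
for sample in (0.9, 1.1, 1.0, 1.3):
    stats.add(sample)
# A ~95% confidence interval built from the running parameters.
half_width = 1.96 * np.sqrt(stats.variance)
print(stats.mean - half_width, stats.mean + half_width)
```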


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257455
Author(s):  
Simon N. Wood ◽  
Ernst C. Wit

Detail is a double-edged sword in epidemiological modelling. The inclusion of mechanistic detail in models of highly complex systems has the potential to increase realism, but it also increases the number of modelling assumptions, which become harder to check as their possible interactions multiply. In a major study of the Covid-19 epidemic in England, Knock et al. (2020) fit an age-structured SEIR model with added health service compartments to data on deaths, hospitalisation and test results from Covid-19 in seven English regions for the period March to December 2020. The simplest version of the model has 684 states per region. One main conclusion is that only full lockdowns brought the pathogen reproduction number, R, below one, with R ≫ 1 in all regions on the eve of the March 2020 lockdown. We critically evaluate the Knock et al. epidemiological model, and the semi-causal conclusions made using it, based on an independent reimplementation of the model designed to allow relaxation of some of its strong assumptions. In particular, Knock et al. model the effect on transmission of both non-pharmaceutical interventions and other effects, such as weather, using a piecewise linear function, b(t), with 12 breakpoints at selected government announcement or intervention dates. We replace this representation by a smoothing spline with time-varying smoothness, thereby allowing the form of b(t) to be substantially more data driven, and we check that the corresponding smoothness assumption is not driving our results. We also reset the mean incubation time and the time from first symptoms to hospitalisation used in the model to the values implied by the papers cited by Knock et al. as the source of these quantities. We conclude that there is no sound basis for using the Knock et al. model and their analysis to make counterfactual statements about the number of deaths that would have occurred with different lockdown timings. However, if fits of this epidemiological model structure are viewed as a reasonable basis for inference about the time course of incidence and R, then without very strong modelling assumptions, the pathogen reproduction number was probably below one, and incidence in substantial decline, some days before either of the first two English national lockdowns. This result coincides with that obtained by more direct attempts to reconstruct incidence. Of course it does not imply that lockdowns had no effect, but it does suggest that other non-pharmaceutical interventions (NPIs) may have been much more effective than Knock et al. imply, and that full lockdowns were probably not the cause of R dropping below one.
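The contrast between the two representations of b(t) can be sketched as follows; this is purely illustrative (synthetic data, placeholder breakpoint dates, and a crude stand-in for the fitted breakpoint values), not a reimplementation of either paper:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical noisy "transmission modifier" observations over 300 days.
rng = np.random.default_rng(0)
days = np.arange(300.0)
truth = 1.0 - 0.6 / (1.0 + np.exp(-(days - 80.0) / 5.0))   # a smooth drop
obs = truth + rng.normal(0.0, 0.05, days.size)

# Piecewise linear b(t) with 12 fixed breakpoints (placeholder dates standing
# in for announcement/intervention dates).
breakpoints = np.array([0, 30, 60, 75, 90, 120, 150, 180, 210, 240, 270, 299])
b_at_breaks = np.interp(breakpoints, days, obs)   # crude stand-in for fitted values
b_pwl = np.interp(days, breakpoints, b_at_breaks)

# Alternative: a smoothing spline whose flexibility is driven by the data.
b_spline = UnivariateSpline(days, obs, s=days.size * 0.05 ** 2)(days)

print("max |PWL - spline| =", np.abs(b_pwl - b_spline).max())
```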


2021 ◽  
pp. 1-15
Author(s):  
Yujie Tao ◽  
Chunfeng Suo ◽  
Guijun Wang

The piecewise linear function (PLF) is not only a generalization of the univariate piecewise linear function to the multivariate case, but also an important bridge for studying the approximation of continuous functions by Mamdani and Takagi-Sugeno fuzzy systems. In this paper, the definitions of the PLF and of a subdivision are introduced in the hyperplane, the analytic expression of the PLF is given using matrix determinants, and the concept of an approximation factor is first proposed using the m-mesh subdivision. Secondly, the vertex coordinates of the n-dimensional small polyhedra, and the rules by which they change, are found by subdividing a three-dimensional cube, and the algebraic cofactors and matrix norms of the corresponding determinants of piecewise linear functions are given. Finally, using the method for computing algebraic cofactors and matrix norms, it is proved that the approximation factor is independent of the number of subdivisions, whereas the approximation accuracy does depend on the number of subdivisions. Furthermore, the process by which a specific bivariate piecewise linear function approximates a continuous function in the infinity norm in two-dimensional space is illustrated with a practical example, and the validity of PLFs for approximating a continuous function is verified by a statistical t-test.
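A minimal numerical sketch of the approximation behaviour described above, assuming a simple test function and a triangulated m-mesh on the unit square (the paper's matrix-determinant construction and approximation factor are not reproduced here): refining the subdivision reduces the infinity-norm error of the piecewise linear interpolant.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def sup_norm_error(m, f, n_eval=200):
    """Infinity-norm error of the piecewise linear interpolant of f on a
    triangulated (m x m)-mesh of the unit square, estimated on a fine grid."""
    g = np.linspace(0.0, 1.0, m + 1)
    xx, yy = np.meshgrid(g, g)
    nodes = np.column_stack([xx.ravel(), yy.ravel()])
    plf = LinearNDInterpolator(nodes, f(nodes[:, 0], nodes[:, 1]))
    e = np.linspace(0.0, 1.0, n_eval)
    ex, ey = np.meshgrid(e, e)
    pts = np.column_stack([ex.ravel(), ey.ravel()])
    return np.max(np.abs(plf(pts) - f(pts[:, 0], pts[:, 1])))

f = lambda x, y: np.sin(np.pi * x) * np.cos(np.pi * y)  # hypothetical test function
for m in (2, 4, 8, 16):
    print(m, sup_norm_error(m, f))
```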


2021 ◽  
Vol 29 (2) ◽  
pp. 103-115
Author(s):  
Takashi Mitsuishi

Summary. IF-THEN rules in fuzzy inference are composed of multiple fuzzy sets (membership functions), so an IF-THEN rule can be considered a pair of membership functions [7]. The evaluation function of fuzzy control is a composite function involving fuzzy approximate reasoning and is a functional on the set of membership functions. We previously obtained continuity of the evaluation function and compactness of the set of membership functions [12], and therefore proved, using the extreme value theorem, the existence in the set of membership functions of a pair of membership functions, regarded as an IF-THEN rule, that maximizes (minimizes) the evaluation function. The set of membership functions (fuzzy sets) is defined in this article in order to verify our earlier proofs with Mizar [9], [10], [4]. Membership functions used in practice, composed of triangular functions, piecewise linear functions and Gaussian functions, are formalized using existing functions. Not only the curved membership functions mentioned above but also membership functions composed of straight lines (piecewise linear functions), such as triangular and trapezoidal functions, are formalized. Moreover, differently from the definition in [3], formalizations of triangular and trapezoidal functions composed of two straight lines and the minimum and maximum functions are proposed. We prove, using the Mizar formalism [2], [1], some properties of membership functions such as continuity and periodicity [13], [8].
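For readers unfamiliar with the shapes being formalized, here is a short sketch (in Python rather than Mizar) of the standard triangular, trapezoidal and Gaussian membership functions, with the piecewise linear ones built from straight lines via min/max in the style described above:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership: two straight lines combined with min, clipped at 0."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapezoidal(x, a, b, c, d):
    """Trapezoidal membership: rise on [a,b], plateau on [b,c], fall on [c,d]."""
    return np.maximum(np.minimum(np.minimum((x - a) / (b - a), 1.0),
                                 (d - x) / (d - c)), 0.0)

def gaussian(x, mean, sigma):
    """Gaussian membership function."""
    return np.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

x = np.linspace(0.0, 10.0, 11)
print(triangular(x, 2.0, 5.0, 8.0))
print(trapezoidal(x, 1.0, 3.0, 7.0, 9.0))
```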


Author(s):  
P. Stetsyuk ◽  
M. Stetsyuk ◽  
D. Bragin ◽  
N. Molodyk

The paper describes a new approach to constructing algorithms for solving linear programming problems (LP problems) in which the number of constraints is much greater than the number of variables. It is based on a modification of the r-algorithm applied to the minimization of a nonsmooth function that is equivalent to the LP problem. The advantages of the approach are demonstrated on a linear robust optimization problem and on a robust parameter estimation problem solved by the least moduli method. The developed Octave programs are designed to solve LP problems with a very large number of constraints, for which the use of standard linear programming software is either impossible or impractical because it requires significant computing resources. The material is presented in three sections. The first section describes, for the problem of minimizing a convex function, a modification of the r-algorithm with a constant coefficient of space dilation in the direction of the difference of two successive subgradients and an adaptive step-size adjustment along the antisubgradient in the transformed space of variables. The software implementation of this modification is presented as the Octave function ralgb5a, which finds an approximation of the minimum point of a convex function or of the maximum point of a concave function. The code of the ralgb5a function is given together with a brief description of its input and output parameters. The second section presents a method for solving the LP problem using a nonsmooth penalty function in the form of a maximum function and the construction of an auxiliary problem of unconstrained minimization of a convex piecewise linear function. The choice of a finite penalty coefficient ensures equivalence between the LP problem and the auxiliary problem, and the latter is solved with the ralgb5a program. Results of computational experiments in GNU Octave are presented for test LP problems with between two hundred thousand and fifty million constraints and between ten and fifty variables. The third section presents the least moduli method, which is robust to abnormal observations ("outliers"). The method reduces to the unconstrained minimization of a convex piecewise linear function, again solved with the ralgb5a program. Results of computational experiments in GNU Octave are presented for test problems with a large number of observations (from two hundred thousand to five million) and a small number of unknown parameters (from ten to one hundred). They demonstrate the superiority of the developed programs over well-known linear programming software such as the GLPK package. Keywords: robust optimization, linear programming problem, nonsmooth penalty function, r-algorithm, least moduli method, GNU Octave.
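A minimal sketch (in Python rather than the authors' Octave; the ralgb5a routine itself is not reproduced) of the auxiliary objective described above: an exact maximum-type penalty turning min c'x s.t. Ax ≤ b into unconstrained minimization of a convex piecewise linear function, together with the kind of subgradient oracle such a nonsmooth method needs, and the least moduli objective of the third section.

```python
import numpy as np

def penalty_objective(x, c, A, b, P):
    """F(x) = c'x + P * max(0, max_i(a_i'x - b_i)).
    Convex and piecewise linear; for a sufficiently large finite P its
    minimizers coincide with the LP optimum."""
    violation = np.max(A @ x - b)
    return c @ x + P * max(0.0, violation)

def penalty_subgradient(x, c, A, b, P):
    """One subgradient of F at x (the oracle a subgradient-type method uses)."""
    residuals = A @ x - b
    i = int(np.argmax(residuals))
    g = c.copy()
    if residuals[i] > 0.0:
        g = g + P * A[i]
    return g

def least_moduli_objective(beta, X, y):
    """Least moduli (L1) regression objective: also convex piecewise linear in beta."""
    return np.sum(np.abs(y - X @ beta))
```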


Author(s):  
Noam Goldberg ◽  
Steffen Rebennack ◽  
Youngdae Kim ◽  
Vitaliy Krasko ◽  
Sven Leyffer

Abstract We consider a nonconvex mixed-integer nonlinear programming (MINLP) model proposed by Goldberg et al. (Comput Optim Appl 58:523–541, 2014. 10.1007/s10589-014-9647-y) for piecewise linear function fitting. We show that this MINLP model is incomplete and can result in a piecewise linear curve that is not the graph of a function, because it misses a set of necessary constraints. We provide two counterexamples to illustrate this effect, and propose three alternative models that correct this behavior. We investigate the theoretical relationship between these models and evaluate their computational performance.
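To illustrate the phenomenon in general terms (this is not necessarily one of the paper's counterexamples): a connected piecewise linear curve whose breakpoint x-coordinates are not monotone doubles back on itself and is therefore not the graph of a function of x, which a simple monotonicity check detects.

```python
import numpy as np

# Breakpoints of a connected piecewise linear curve, listed in segment order.
# The x-coordinates are not monotone, so the curve folds back over x in [1, 2].
bx = np.array([0.0, 2.0, 1.0, 3.0])
by = np.array([0.0, 1.0, 2.0, 3.0])

def is_function_graph(bx):
    """True iff the breakpoint abscissae are strictly monotone along the curve
    (no doubling back and no vertical segments)."""
    d = np.diff(bx)
    return bool(np.all(d > 0.0) or np.all(d < 0.0))

print(is_function_graph(bx))  # False
```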


2021 ◽  
Author(s):  
Alexandra Urgilez Vinueza ◽  
Alexander Handwerger ◽  
Mark Bakker ◽  
Thom Bogaard

Regional-scale landslide deformation can be measured using satellite-based synthetic aperture radar interferometry (InSAR). Our study focuses on the quantification of displacements of slow-moving landslides that impact a hydropower dam and reservoir in the tropical Ecuadorian Andes. We constructed ground surface deformation time series using data from the Copernicus Sentinel-1 A/B satellites between 2016 and 2020. We developed a new approach to automatically detect the onset of accelerations and/or decelerations within each active landslide. Our approach approximates the movement of a pixel as a piecewise linear function. Multiple linear segments are fitted to the cumulative deformation time series of each pixel. Each linear segment represents a constant movement. The point where one linear segment is connected to another linear segment represents the time when the pixel’s rate of movement has changed from one value to another value and is referred to as a breakpoint. As such, the breakpoints represent moments of acceleration or deceleration. Three criteria are used to determine the number of breakpoints: the timing and uncertainty of the breakpoints, the confidence intervals of the fitted segments’ slopes, and the Akaike Information Criterion (AIC). The suitable number of breakpoints for each pixel (i.e., the number of accelerations or decelerations) is determined by finding the largest number of breakpoints that complies with the three listed criteria. The application of this approach to landslides results in a wealth of information on the surface displacement of a slope and an objective way to identify changes in displacement rates. The displacement rates, their spatial variation, and the timing of acceleration and deceleration can further be used to study the physical behavior of a slow-moving slope or for (regional) hazard assessment linking the onset of change in displacement rate to causal and triggering factors.
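A heavily simplified, single-breakpoint version of this idea is sketched below on synthetic data: each candidate breakpoint defines a continuous two-segment model (hinge basis), fitted by least squares and compared against a single line via AIC. The full approach also uses breakpoint-timing uncertainty and slope confidence intervals, and allows multiple breakpoints; none of that is reproduced here.

```python
import numpy as np

def fit_aic(t, y, X):
    """Least-squares fit and AIC = n*ln(RSS/n) + 2k for design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = len(y), X.shape[1]
    return n * np.log(rss / n) + 2 * k, beta

def best_single_breakpoint(t, y):
    """Scan candidate breakpoints; the hinge term max(t - tb, 0) makes the two
    segments share their value at tb, i.e. the fit stays continuous."""
    base = np.column_stack([np.ones_like(t), t])
    aic0, _ = fit_aic(t, y, base)
    best = (aic0, None)                      # single-line model as baseline
    for tb in t[2:-2]:                       # keep a few points in each segment
        X = np.column_stack([base, np.maximum(t - tb, 0.0)])
        aic, _ = fit_aic(t, y, X)
        if aic < best[0]:
            best = (aic, tb)
    return best                              # (AIC, breakpoint time or None)

# Synthetic cumulative displacement: constant rate, then acceleration at t = 60.
rng = np.random.default_rng(1)
t = np.arange(0.0, 120.0, 6.0)               # ~12-day sampling interval
y = 0.02 * t + 0.08 * np.maximum(t - 60.0, 0.0) + rng.normal(0.0, 0.05, t.size)
print(best_single_breakpoint(t, y))
```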

