Optimal Solution of Nonlinear Equations
Published by Oxford University Press. ISBN: 9780195106909, 9780197561010

Author(s):  
Krzysztof A. Sikorski

In this chapter we consider the approximation of fixed points of noncontractive functions with respect to the absolute error criterion. In this case the functions may have multiple fixed points and/or whole manifolds of fixed points. We analyze methods based on sequential function evaluations as information. Simple iteration usually does not converge in this case, and the problem becomes much more difficult to solve. We prove that even in the two-dimensional case the problem has infinite worst case complexity. This means that no method of finite cost solves the problem with an arbitrarily small error tolerance for every function in the class: some “bad” functions defeat any such method. In the univariate case the problem is solvable, and a bisection envelope method is optimal. These results are in contrast with the solution under the residual error criterion. The problem then becomes solvable, although with exponential complexity, as outlined in the annotations. Therefore, simplicial and/or homotopy continuation methods, and indeed all methods based on function evaluations, exhibit exponential worst case cost for solving the problem in the residual sense. These results indicate the need for average case analysis, since for many test functions the existing algorithms computed ε-approximations at cost polynomial in 1/ε.
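As a concrete illustration of why the univariate problem is solvable, the sketch below applies plain bisection to g(x) = f(x) − x for a continuous f : [0,1] → [0,1]. It is a minimal assumed example, not the optimal bisection envelope method analyzed in the chapter; the function and tolerance names are illustrative.

    # Minimal sketch (Python), assuming f maps [0,1] into [0,1] continuously.
    # Then g(x) = f(x) - x satisfies g(0) >= 0 and g(1) <= 0, so bisection on g
    # brackets some fixed point; this is not the chapter's optimal envelope method.
    def fixed_point_bisection(f, eps):
        """Return x with |x - alpha| <= eps for some fixed point alpha of f on [0,1]."""
        a, b = 0.0, 1.0               # invariant: f(a) - a >= 0 and f(b) - b <= 0
        while (b - a) / 2.0 > eps:
            m = (a + b) / 2.0
            if f(m) - m >= 0.0:       # some fixed point still lies in [m, b]
                a = m
            else:                     # some fixed point still lies in [a, m]
                b = m
        return (a + b) / 2.0

    # Example: f(x) = max(x, 0.5) is continuous, noncontractive, and has the whole
    # interval [0.5, 1] as fixed points; bisection still returns one of them.
    print(fixed_point_bisection(lambda x: max(x, 0.5), 1e-6))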


Author(s):  
Krzysztof A. Sikorski

In this chapter we address the problem of computing the topological degree of Lipschitz functions. From the knowledge of the topological degree one may ascertain whether there exists a zero of a function inside the domain, knowledge that is both practically and theoretically worthwhile. Namely, Kronecker’s theorem states that if the topological degree is not zero then there exists a zero of the function inside the domain. Under more restrictive assumptions one may also derive equivalence statements, i.e., nonzero degree is equivalent to the existence of a zero. By computing a sequence of domains with nonzero degrees and decreasing diameters one can obtain a region with arbitrarily small diameter that contains at least one zero of the function. Such methods, called generalized bisections, have been implemented and tested by several authors, as described in the annotations to this chapter. These methods have been touted as appropriate when the function is not smooth or cannot be evaluated accurately. For such functions they yield close approximations to roots in many cases for which all other available methods tested have failed (see annotations). The generalized bisection methods based on degree computation are related to simplicial continuation methods. Their worst case complexity in general classes of functions is unbounded, as the results of section 2.1.2 indicate; however, for the tested functions they did converge. This suggests the need for average case analysis of such methods. There are numerous applications of the degree computation in nonlinear analysis. In addition to establishing the existence of roots, the degree computation is used in methods for finding directions proceeding from bifurcation points in the solution of nonlinear functional differential equations, as well as in other applications indicated in the annotations. Algorithms proposed for the degree computation were tested on a relatively small number of examples. The authors concluded that the degree of an arbitrary continuous function could be computed. It was observed, however, that the algorithms could require an unbounded number of function evaluations. This is why in our work we restrict attention to the still relatively large class of functions satisfying the Lipschitz condition with a given constant K.
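To make the quantity concrete, the following sketch approximates the degree of a map f : R^2 → R^2 relative to a box as the winding number of f along the box boundary. It is an assumed illustration only, not the Lipschitz-based algorithm of the chapter; with too coarse a boundary sampling the answer can be wrong, which mirrors the unbounded worst case cost for merely continuous functions.

    # Sketch (Python): approximate deg(f, box) as the winding number of f along the
    # counterclockwise boundary of the box, assuming f has no zero on the boundary.
    import math

    def degree_2d(f, lo, hi, n_per_side=400):
        lo0, lo1 = lo
        hi0, hi1 = hi
        pts = []
        for i in range(n_per_side):                       # bottom edge
            t = i / n_per_side
            pts.append((lo0 + t * (hi0 - lo0), lo1))
        for i in range(n_per_side):                       # right edge
            t = i / n_per_side
            pts.append((hi0, lo1 + t * (hi1 - lo1)))
        for i in range(n_per_side):                       # top edge
            t = i / n_per_side
            pts.append((hi0 - t * (hi0 - lo0), hi1))
        for i in range(n_per_side):                       # left edge
            t = i / n_per_side
            pts.append((lo0, hi1 - t * (hi1 - lo1)))

        winding = 0.0
        fx, fy = f(pts[-1])
        prev = math.atan2(fy, fx)
        for p in pts:                                     # accumulate angle changes of f(p)
            fx, fy = f(p)
            ang = math.atan2(fy, fx)
            d = ang - prev
            if d > math.pi:                               # wrap increment into (-pi, pi]
                d -= 2.0 * math.pi
            elif d <= -math.pi:
                d += 2.0 * math.pi
            winding += d
            prev = ang
        return round(winding / (2.0 * math.pi))

    # f(x, y) = (x^2 - y^2, 2xy) is the complex map z^2: degree 2 around the origin.
    print(degree_2d(lambda p: (p[0] ** 2 - p[1] ** 2, 2.0 * p[0] * p[1]), (-1.0, -1.0), (1.0, 1.0)))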


Author(s):  
Krzysztof A. Sikorski

Fixed point computation has been an intensive research area since 1967, when Scarf introduced the simplicial algorithm to approximate fixed points. Several algorithms have been invented since then, including restart and homotopy methods. Most of these were designed to approximate fixed points of general maps and used the residual error criterion. In this chapter we consider the absolute and/or relative error criteria for contractive univariate and multivariate functions. The point of departure for our analysis is the classical Banach fixed point theorem. Namely, we consider a function f : D → D, where D is a closed subset of a Banach space B. We assume that f is contractive with a factor q < 1, i.e., ||f(x) − f(y)|| ≤ q ||x − y|| for all x, y ∈ D. Then there exists a unique α = α(f) ∈ D such that α is a fixed point of f, α = f(α).
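The contraction assumption immediately yields a convergent method with a computable error bound. The sketch below, a minimal assumed example for the univariate case, runs simple iteration and stops using the standard a posteriori bound |x_{k+1} − α| ≤ q/(1 − q) |x_{k+1} − x_k|, so the returned point satisfies the absolute error criterion; the chapter studies methods that can be much more efficient than simple iteration when q is close to 1.

    # Sketch (Python): simple iteration for a univariate contraction with known q < 1.
    import math

    def simple_iteration(f, x0, q, eps):
        """Return x with |x - alpha| <= eps, alpha the unique fixed point of f."""
        x = x0
        while True:
            x_next = f(x)
            # A posteriori bound: |x_next - alpha| <= q/(1-q) * |x_next - x|.
            if q / (1.0 - q) * abs(x_next - x) <= eps:
                return x_next
            x = x_next

    # Example: f(x) = cos(x)/2 is contractive on R with factor q = 1/2.
    print(simple_iteration(lambda x: math.cos(x) / 2.0, 0.0, 0.5, 1e-10))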


Author(s):  
Krzysztof A. Sikorski

In this chapter we address the problem of approximating zeros α of a nonlinear function f, f(α) = 0, where f ∈ F ⊂ {f : D ⊂ R^d → R^l}. In order to define our solution operators, we first review several error criteria that are commonly used to measure the quality of approximations to zeros of nonlinear equations. This is done for a univariate function f : [a, b] → R. Straightforward generalizations to the multivariate case are based on replacing the magnitude function by a specific norm. These are considered in section 2.2, when we review multivariate problems. A number of error criteria are used in practice for the approximation of a zero α of f.
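The sketch below states the most common criteria as acceptance tests for a candidate approximation x in the univariate case; the function names and the exact relative error normalization are assumptions made for illustration, while the chapter defines the criteria precisely.

    # Sketch (Python): common error criteria for a zero alpha of a univariate f.
    def absolute_ok(x, alpha, eps):
        """Absolute error criterion: |x - alpha| <= eps."""
        return abs(x - alpha) <= eps

    def relative_ok(x, alpha, eps):
        """Relative error criterion: |x - alpha| <= eps * |alpha| (for alpha != 0)."""
        return abs(x - alpha) <= eps * abs(alpha)

    def residual_ok(f, x, eps):
        """Residual error criterion: |f(x)| <= eps; requires no knowledge of alpha."""
        return abs(f(x)) <= eps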


Author(s):  
Krzysztof A. Sikorski

This monograph is devoted to studying worst case complexity results and optimal or nearly optimal methods for the approximation of solutions of nonlinear equations, approximation of fixed points, and computation of the topological degree. The methods are “global” in nature: they guarantee that the computed solution is within a specified error from the exact solution for every function in a given class. A common approach in numerical analysis is to study the rate of convergence and/or locally convergent methods that require special assumptions on the location of initial points of iterations to be “sufficiently” close to the actual solutions. This approach is briefly reviewed in the annotations to chapter 2, as well as in section 2.1.6, dealing with the asymptotic analysis of the bisection method. Extensive literature exists describing the iterative approach, with several monographs published over the last 30 years. We do not attempt a complete review of this work; the reader interested in this classical approach should consult the monographs listed in the annotations to chapter 2. We motivate our analysis and introduce basic notions with a simple example of zero finding for a continuous function with different signs at the endpoints of an interval. Example 3.1: We want to approximate a zero of a function f from the class F = {f : [0,1] → R : f continuous, f(0) < 0, f(1) > 0}. By an approximate solution of this problem we understand any point x = x(f) such that the distance between x and some zero α = α(f) of the function f, f(α) = 0, is at most equal to a given small positive number ε, i.e., |x − α| ≤ ε. To compute x we first gather some information on the function f by sampling f at n sequentially chosen points t_i in the interval [0,1]. Then, based on this information, we select x. To minimize the time complexity we must select the minimal number of sampling points that guarantees computing x(f) for any function f in the class F. This minimal number of samples (in the worst case) is called the information complexity of the problem.
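A minimal sketch of the bisection method from Example 3.1 follows; each function evaluation halves the interval known to contain a zero, so on the order of log2(1/ε) sequential samples suffice for an ε-approximation, which is the kind of information complexity count the monograph makes precise. The test function is an illustrative assumption.

    # Sketch (Python): bisection for f in F = {f : [0,1] -> R continuous, f(0) < 0, f(1) > 0}.
    def bisection_zero(f, eps):
        """Return x with |x - alpha| <= eps for some zero alpha of f in [0, 1]."""
        a, b = 0.0, 1.0                   # invariant: f(a) < 0 < f(b)
        while (b - a) / 2.0 > eps:
            m = (a + b) / 2.0
            fm = f(m)
            if fm == 0.0:
                return m
            if fm < 0.0:
                a = m
            else:
                b = m
        return (a + b) / 2.0              # within eps of a zero lying in [a, b]

    # Example: the zero of x^3 - 0.2 is approximately 0.5848.
    print(bisection_zero(lambda x: x ** 3 - 0.2, 1e-8))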

