The Best Approximation
Recently Published Documents

TOTAL DOCUMENTS: 364 (five years: 90)
H-INDEX: 22 (five years: 3)
2022 · Vol 17 (1)
Author(s): Luiz Augusto G. Silva, Luis Antonio B. Kowada, Noraí Romeu Rocco, Maria Emília M. T. Walter

Abstract: Background: Sorting by transpositions (SBT) is a classical problem in genome rearrangements. In 2012, SBT was proven to be $\mathcal{NP}$-hard, and the best approximation algorithm, with a ratio of 1.375, was proposed in 2006 by Elias and Hartman (the EH algorithm). Their algorithm employs simplification, a technique that transforms an input permutation $\pi$ into a simple permutation $\hat{\pi}$, presumably easier to handle. The permutation $\hat{\pi}$ is obtained by inserting new symbols into $\pi$ in such a way that the lower bound on the transposition distance of $\pi$ is preserved in $\hat{\pi}$. Simplification is guaranteed to preserve the lower bound, not the transposition distance itself. A sequence of operations sorting $\hat{\pi}$ can then be mimicked to sort $\pi$.

Results and conclusions: First, using an algebraic approach, we propose a new upper bound for the transposition distance, which holds for all of $S_n$. Next, motivated by a problem identified in the EH algorithm, which, in scenarios depending on how the input permutation is simplified, causes it to require one transposition beyond the 1.375-approximation ratio, we propose a new approximation algorithm for SBT that guarantees the 1.375-approximation ratio for all of $S_n$. We implemented both our algorithm and the EH algorithm. In the implementation of the EH algorithm, two further issues were identified and had to be fixed. We tested both algorithms against all permutations of size $n$, $2 \le n \le 12$. The results show that the EH algorithm exceeds the 1.375 approximation ratio for permutations of size greater than 7. The percentages of computed distances that equal the true transposition distance are also compared with those of other algorithms available in the literature. Finally, we investigated the performance of both implementations on longer permutations, of length up to 500. From the experiments, we conclude that the maximum and average distances computed by our algorithm are slightly better than those computed by the EH algorithm, and that the running times of the two algorithms are similar, despite the higher time complexity of our algorithm.
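For orientation, here is a minimal sketch of the problem itself (not the EH algorithm or the authors' 1.375-approximation): a transposition exchanges two adjacent blocks of a permutation, and the transposition distance is the minimum number of such moves needed to sort it. The brute-force breadth-first search below is feasible only for very small n, but it mirrors the kind of exhaustive testing over all permutations that the abstract describes.

```python
# Exact transposition distance by brute force; exponential, small n only.
from collections import deque

def apply_transposition(perm, i, j, k):
    """Exchange adjacent blocks perm[i:j] and perm[j:k], 0 <= i < j < k <= len(perm)."""
    return perm[:i] + perm[j:k] + perm[i:j] + perm[k:]

def transposition_distance(perm):
    """Fewest block exchanges needed to sort perm, via breadth-first search."""
    target = tuple(sorted(perm))
    start = tuple(perm)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        current, dist = queue.popleft()
        if current == target:
            return dist
        n = len(current)
        for i in range(n):
            for j in range(i + 1, n):
                for k in range(j + 1, n + 1):
                    nxt = apply_transposition(current, i, j, k)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, dist + 1))

print(transposition_distance((3, 1, 2)))       # 1: a single block exchange sorts it
print(transposition_distance((5, 4, 3, 2, 1))) # 3 for the reverse permutation of length 5
```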


2022
Author(s): Rocco Pierri, Giovanni Leone, Fortuna Munno, Raffaele Solimene

In this paper we introduce a sampling scheme based on applying an inverse source problem approach to the far field radiated by a conformal current source. The regularized solution of the problem requires computing the Singular Value Decomposition (SVD) of the relevant linear operator, which leads to the Point Spread Function in the observation domain; this can be related to the capability of the source to radiate a focused beam. The application of Kramer's generalized sampling theorem then yields a non-uniform discretization of the angular observation domain, tailored to each source geometry. The nearly optimal property of the scheme is assessed by comparison with the best approximation achievable under a regularized inversion based on the pertinent SVD. Numerical results for different two-dimensional curve sources show the effectiveness of the approach with respect to standard uniformly spaced sampling, since it reduces the number of far-field sampling points.
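The following sketch illustrates the regularized-inversion step in the simplest discrete setting. The Fourier-type kernel and the truncation tolerance rel_tol are illustrative assumptions, not the paper's conformal-source operator: the point is only that a truncated SVD stabilizes the inversion of an ill-conditioned source-to-far-field map.

```python
# Truncated-SVD regularized inversion of a discretized far-field-like operator.
import numpy as np

n_src, n_obs = 64, 80
s = np.linspace(-1.0, 1.0, n_src)                   # source coordinate
theta = np.linspace(-np.pi / 2, np.pi / 2, n_obs)   # observation angles
wavenumber = 10.0

# Generic far-field-like kernel: A[m, p] = exp(-1j * k * sin(theta_m) * s_p)
A = np.exp(-1j * wavenumber * np.outer(np.sin(theta), s))

def tsvd_solve(A, y, rel_tol=1e-3):
    """Truncated-SVD solution: keep singular values above rel_tol * sigma_max."""
    U, sigma, Vh = np.linalg.svd(A, full_matrices=False)
    keep = sigma > rel_tol * sigma[0]
    return Vh[keep].conj().T @ ((U[:, keep].conj().T @ y) / sigma[keep])

true_current = np.exp(-((s - 0.2) ** 2) / 0.05)     # synthetic source profile
far_field = A @ true_current
recovered = tsvd_solve(A, far_field)
print("relative error:",
      np.linalg.norm(recovered - true_current) / np.linalg.norm(true_current))
```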


Mathematics · 2022 · Vol 10 (1) · pp. 145
Author(s): Haojie Lv, Guixiang Wang

Using simple fuzzy numbers to approximate general fuzzy numbers is an important research topic in fuzzy number theory and its applications. Existing results in this field are largely based on unweighted metrics for establishing best approximation methods for general fuzzy numbers. In order to obtain a more objective and reasonable best approximation, in this paper we use a weighted distance as the evaluation criterion and establish a method for solving the best approximation of general fuzzy numbers. First, the concepts of the I-nearest r-s piecewise linear approximation (PLA) and the II-nearest r-s PLA are introduced for a general fuzzy number. Then, taking the weighted metric as the criterion, we derive a group of formulas for computing the I-nearest r-s PLA and the II-nearest r-s PLA. Finally, we present specific examples to show the effectiveness and usability of the proposed methods.
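A minimal numerical sketch of the underlying idea, under assumed simplifications: the left endpoint L(α) of a fuzzy number's α-cuts is approximated by a piecewise linear function minimizing a weighted L2 distance over α (here with the common weight w(α) = 2α). The knot placement and hat basis are illustrative choices, not the paper's I-nearest or II-nearest r-s PLA formulas.

```python
# Weighted least-squares piecewise linear fit of an alpha-cut endpoint.
import numpy as np

alphas = np.linspace(0.0, 1.0, 201)
weights = 2.0 * alphas                 # assumed weight: emphasize higher alpha-cuts
L = np.sqrt(alphas)                    # left endpoint of a sample fuzzy number

knots = np.linspace(0.0, 1.0, 4)       # piecewise linear with three segments

def hat_basis(x, knots):
    """Piecewise-linear 'hat' basis functions centred on the knots."""
    B = np.zeros((len(x), len(knots)))
    for i, t in enumerate(knots):
        if i == 0:
            B[:, i] = np.interp(x, [t, knots[1]], [1.0, 0.0])
        elif i == len(knots) - 1:
            B[:, i] = np.interp(x, [knots[i - 1], t], [0.0, 1.0])
        else:
            B[:, i] = np.interp(x, [knots[i - 1], t, knots[i + 1]], [0.0, 1.0, 0.0])
    return B

# Minimize sum_j w(alpha_j) * (L(alpha_j) - (B c)_j)^2 via scaled least squares.
B = hat_basis(alphas, knots)
sw = np.sqrt(weights)
coeffs, *_ = np.linalg.lstsq(sw[:, None] * B, sw * L, rcond=None)
approx = B @ coeffs
print("weighted RMS error:", np.sqrt(np.mean(weights * (L - approx) ** 2)))
```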


2021 · Vol 13 (3) · pp. 750-763
Author(s): Z. Cakir, C. Aykol, V.S. Guliyev, A. Serbetci

In this paper we investigate the best approximation by trigonometric polynomials in the variable exponent weighted Morrey spaces ${\mathcal{M}}_{p(\cdot),\lambda(\cdot)}(I_{0},w)$, where $w$ is a weight function in the Muckenhoupt $A_{p(\cdot)}(I_{0})$ class. We obtain a characterization of $K$-functionals in terms of the modulus of smoothness in the spaces ${\mathcal{M}}_{p(\cdot),\lambda(\cdot)}(I_{0},w)$. Finally, we prove the direct and inverse theorems of approximation by trigonometric polynomials in the spaces $\widetilde{\mathcal{M}}_{p(\cdot),\lambda(\cdot)}(I_{0},w)$, the closure of the set of all trigonometric polynomials in ${\mathcal{M}}_{p(\cdot),\lambda(\cdot)}(I_{0},w)$.
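For orientation only, here is the textbook special case in plain (unweighted) $L^2$: the best trigonometric approximation of degree at most $n$ is the Fourier partial sum, and a Jackson-type direct estimate controls the error $E_n(f)$ by the modulus of smoothness $\omega(f, 1/n)$. The paper's actual setting, variable exponent weighted Morrey spaces, is far more general; the script below only illustrates the classical analogue numerically.

```python
# Best L2 trigonometric approximation error vs. modulus of smoothness.
import numpy as np

N = 2048
x = 2 * np.pi * np.arange(N) / N
f = np.abs(np.sin(x)) ** 1.5              # a function of limited smoothness

coeffs = np.fft.rfft(f) / N               # one-sided Fourier coefficients

def best_L2_error(n):
    """E_n(f)_2: L2 norm of the Fourier tail beyond degree n (best error in L2)."""
    tail = coeffs[n + 1:]
    return np.sqrt(2.0 * np.sum(np.abs(tail) ** 2))

def modulus_of_smoothness(delta):
    """First-order L2 modulus: max over grid shifts h <= delta of ||f(.+h) - f||_2."""
    best = 0.0
    for shift in range(1, max(1, int(delta * N / (2 * np.pi))) + 1):
        diff = np.roll(f, shift) - f
        best = max(best, np.sqrt(np.mean(diff ** 2)))
    return best

for n in [4, 8, 16, 32, 64]:
    print(n, best_L2_error(n), modulus_of_smoothness(1.0 / n))
```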


Machines · 2021 · Vol 9 (12) · pp. 372
Author(s): Iván González Castillo, Igor Loboda, Juan Luis Pérez Ruiz

The lack of gas turbine field data, especially faulty-engine data, and the complexity of embedding faults into gas turbines on test benches make it difficult to represent healthy and faulty engines in diagnostic algorithms. Instead, different gas turbine models are often used. The available models fall into two main categories: physics-based and data-driven. Given the importance and necessity of such models, a variety of simulation tools have been developed with different levels of complexity, fidelity, accuracy, and computational requirements. Physics-based models constitute a diagnostic approach known as Gas Path Analysis (GPA). To compute fault parameters within GPA, this paper proposes to employ a nonlinear data-driven model together with the theory of inverse problems, which drastically simplifies gas turbine diagnosis. To choose the best approximation technique for such a novel model, the paper compares polynomials and neural networks. The necessary data were generated in the GasTurb software for turboshaft and turbofan engines. These input data for creating a nonlinear data-driven model of fault parameters cover the full range of operating conditions and of possible performance losses of engine components. Multiple configurations of a multilayer perceptron network and of polynomials are evaluated to find the best data-driven model configurations. The best perceptron-based and polynomial models are then compared. The accuracy achieved by the most adequate model variation confirms the viability of simple and accurate models for estimating gas turbine health conditions.
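A minimal sketch of the model-selection comparison described above: fit both a polynomial model and a multilayer perceptron mapping gas path measurement deviations to component fault parameters, then compare held-out accuracy. The forward map and all data below are synthetic random stand-ins, not GasTurb output, and the network sizes are illustrative assumptions.

```python
# Compare a degree-2 polynomial model and an MLP on a synthetic inverse problem.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
n_samples, n_measurements, n_faults = 2000, 8, 4

# Hypothetical forward map: fault parameters -> measurement deviations,
# mildly nonlinear; the data-driven model learns its inverse.
faults = rng.uniform(-0.05, 0.0, size=(n_samples, n_faults))  # e.g. efficiency losses
M = rng.normal(size=(n_measurements, n_faults))
measurements = (faults @ M.T + 0.5 * (faults ** 2) @ M.T
                + rng.normal(scale=1e-3, size=(n_samples, n_measurements)))

X_tr, X_te, y_tr, y_te = train_test_split(measurements, faults, random_state=0)

poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32),
                                 max_iter=2000, random_state=0))

for name, model in [("polynomial", poly), ("perceptron", mlp)]:
    model.fit(X_tr, y_tr)
    print(name, "R^2 on held-out data:", round(model.score(X_te, y_te), 4))
```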


Author(s): В. Б. Бетелин, В. А. Галкин

We propose a general topological approach to the analysis of artificial neural networks based on simplicial complexes and the properties of approximating continuous mappings by their simplicial approximations. We identify phenomena of computational instability that are essential for this class of problems, connected with the general theory of ill-posed problems in Hilbert space and with the regularization methods typical of Big Data processing. We formulate criteria for the accuracy and applicability of artificial neural network models and consider implementation examples based on the theory of function interpolation. The development of P.L. Chebyshev's ideas on best approximation serves as a starting point for a broad class of mathematical research on optimizing training sets for the construction of artificial neural networks.
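An elementary illustration of the simplicial approximation idea (a toy example of mine, not the authors' construction): in one dimension a simplicial complex is a mesh of vertices, a simplicial map is linear interpolation between them, and refining the mesh drives the approximation error of a continuous function down.

```python
# Piecewise linear (simplicial) approximation of a continuous function on [0, 1].
import numpy as np

def simplicial_approx_error(f, n_vertices, n_test=10_000):
    """Max error of linear interpolation of f on a uniform mesh of vertices."""
    vertices = np.linspace(0.0, 1.0, n_vertices)
    x = np.linspace(0.0, 1.0, n_test)
    return np.max(np.abs(f(x) - np.interp(x, vertices, f(vertices))))

f = lambda x: np.sin(4 * np.pi * x) * np.exp(-x)
for n in [5, 9, 17, 33, 65]:
    print(n, simplicial_approx_error(f, n))   # error shrinks roughly like O(h^2)
```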


2021
Author(s): Alexandr N Tetearing

In this work, based on real data on the size of the eyeball (in the fetus, in children, and in young people under 20), we constructed a model function for the growth of retinal cell tissue. We used this function to construct theoretical age distributions of retinoblastoma for four different models: a complex mutational model, a third mutational model, a model with a sequence of key events, and a model of a single oncogenic event with two different latencies (hereditary and non-hereditary retinoblastoma). We compared the theoretical age distributions with the real age distribution based on SEER data (Surveillance, Epidemiology, and End Results; a program of the US National Cancer Institute), examining a total of 843 cases in women and 908 cases in men. For all models (separately for women and men), we obtained estimates of the following cancer parameters: the specific frequency of key events (the events that trigger cancer), the duration of the latency period, and the number of key events required for cancer to occur. For the composite age distributions, we calculated the theoretical mean age at diagnosis for hereditary and non-hereditary retinoblastoma. The best approximation accuracy (for both male and female retinoblastoma) is achieved by the model with a sequence of key events.
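A generic sketch of this kind of model fitting (a stand-in, not the author's four models): if onset requires k sequential key events occurring at a constant rate, the age-at-diagnosis density is an Erlang density shifted by a latency period, and fitting it to an observed age histogram recovers the rate, latency, and implied mean age at diagnosis. All numbers below are synthetic; do not read them as real retinoblastoma estimates.

```python
# Fit a shifted Erlang age-at-diagnosis density to a synthetic age histogram.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

def age_density(age, rate, latency, k=3):
    """Erlang(k, rate) density for age at diagnosis, shifted by a latency (years)."""
    return gamma.pdf(age - latency, a=k, scale=1.0 / rate)

# Synthetic "observed" ages standing in for SEER case data
# (true values: k = 3, rate = 1.5/yr, latency = 0.5 yr).
rng = np.random.default_rng(1)
ages = rng.gamma(shape=3, scale=1.0 / 1.5, size=900) + 0.5
hist, edges = np.histogram(ages, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

(rate, latency), _ = curve_fit(age_density, centers, hist, p0=[1.0, 0.1],
                               bounds=([0.01, 0.0], [10.0, 5.0]))
print(f"rate ~ {rate:.2f}/yr, latency ~ {latency:.2f} yr, "
      f"mean age at diagnosis ~ {3 / rate + latency:.2f} yr")
```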


Author(s): Cristina Bazgan, Stefan Ruzika, Clemens Thielen, Daniel Vanderpooten

Abstract: We determine the power of the weighted sum scalarization with respect to the computation of approximations for general multiobjective minimization and maximization problems. Additionally, we introduce a new multi-factor notion of approximation that is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. For minimization problems, we provide an efficient algorithm that computes an approximation of a multiobjective problem by using an exact or approximate algorithm for its weighted sum scalarization. If an exact algorithm for the weighted sum scalarization is used, this algorithm comes arbitrarily close to the best approximation quality that is obtainable by supported solutions, both with respect to the common notion of approximation and with respect to the new multi-factor notion. Moreover, the algorithm yields the currently best approximation results for several well-known multiobjective minimization problems. For maximization problems, however, we show that a polynomial approximation guarantee can, in general, not be obtained in more than one of the objective functions simultaneously by supported solutions.
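A toy sketch of the scalarization at work (my example feasible set, not from the paper): sweeping the weight λ over (0, 1) and solving min λf₁ + (1-λ)f₂ exactly recovers precisely the supported solutions, and it also exposes their limitation, since Pareto-optimal points off the convex hull of the objective vectors are never found.

```python
# Enumerate supported solutions of a toy biobjective minimization problem.
import numpy as np

# Objective vectors (f1, f2) of an explicit finite feasible set.
feasible = np.array([[1, 9], [2, 6], [4, 4], [6, 3], [9, 1], [7, 6]])

supported = set()
for lam in np.linspace(0.01, 0.99, 199):
    values = lam * feasible[:, 0] + (1 - lam) * feasible[:, 1]
    supported.add(tuple(int(v) for v in feasible[np.argmin(values)]))

print(sorted(supported))
# (6, 3) is Pareto-optimal but never returned by any weight: it is an
# unsupported solution, the gap that the multi-factor approximation
# notion in the abstract is designed to address.
```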

