computational errors
Recently Published Documents


TOTAL DOCUMENTS: 141 (FIVE YEARS: 32)

H-INDEX: 13 (FIVE YEARS: 2)

Author(s):  
Vladislav Chori ◽  
Tetyana Shamanina ◽  
Vitaliy Pavlenko

Identification systems that use biometric characteristics to control access to information systems are becoming more common. The article proposes a new method for the biometric identification of computer system users, based on determining an integral Volterra model of the human oculo-motor system (OMS) from "input-output" experiments using eye-tracking technology. With a Tobii Pro TX300 eye tracker, OMS responses were recorded to test visual stimuli displayed as bright dots on the computer screen at different horizontal distances from the start position. From these data, the first-, second- and third-order transition functions of the OMS were determined for two people. To construct a person classifier, the informativeness of the proposed heuristic features, computed from the transition functions, is investigated in terms of the probability of correct recognition (PCR). Pairs of features are identified that are resistant to computational errors and have a high PCR value, in the range 0.92 - 0.97. Fig.: 8. Table: 5. Bibliography: 30 items. Key words: biometric identification, personality recognition, Volterra model, oculo-motor system, eye tracking technology, informativeness of features, classification.
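
As a rough illustration of how the informativeness of a feature pair might be scored by the probability of correct recognition, the sketch below uses a leave-one-out nearest-centroid classifier on synthetic two-person data; the feature values, labels and classifier are assumptions for illustration, not the authors' construction.

```python
import numpy as np

def pcr_for_feature_pair(X, y):
    """Estimate the probability of correct recognition (PCR) for one pair of
    features with a leave-one-out nearest-centroid classifier.
    X: (n_samples, 2) array of a feature pair extracted from the transition
       functions (hypothetical input); y: (n_samples,) array of person labels."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                     # leave sample i out
        centroids = {c: X[mask & (y == c)].mean(axis=0) for c in np.unique(y[mask])}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += (pred == y[i])
    return correct / len(y)

# Synthetic feature values for two persons (illustration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1.0, 0.5], 0.1, (10, 2)),
               rng.normal([1.4, 0.9], 0.1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
print(pcr_for_feature_pair(X, y))
```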


Author(s):  
S. Rumana Firdose

Abstract: During the development of software code there is a pressing need to remove faults and improve software reliability. To obtain accurate results, assessments must take place in every phase of the software development cycle, so that bugs are detected early and accuracy is maintained at each level. Academic institutions and industry are improving software-engineering development techniques and perform regular testing to find faults in programs during development. New programs are composed by altering the original code, biased towards statements that occur on pessimistic execution paths. The proposed method uses a fault-localization technique to indicate the position of a fault. Both experimental comparisons and regression-based equations show that the soft computing techniques give better results than the other techniques, and an evaluation of the soft computing techniques shows that the accuracy of the ANN model is superior to that of the other models. Databases for the training and testing stages were collected, and the soft computing techniques produced lower computational errors than the empirical equations; overall, the soft computing models compare favourably with the regression models. Finding and correcting a serious software fault is therefore preferable to recalling thousands of products, especially in the automotive sector. The success of an SRGM depends mainly on gathering accurate failure information, since the functions of the software reliability growth model are predicted from that information alone. The SRGM techniques in the literature give a reasonable fit to actual software failure data, so the model can in future be applied to a wide range of software and its applications. Keywords: SRGM, FDP, FCP
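
For context, a minimal sketch of fitting a classic software reliability growth model (the Goel-Okumoto mean value function) to hypothetical cumulative failure counts is shown below; the data, initial guesses and choice of model are assumptions for illustration, not the study's ANN or regression setup.

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto mean value function: expected cumulative faults by time t.
def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative failure counts collected during ten testing weeks.
t = np.arange(1, 11, dtype=float)
faults = np.array([5, 9, 14, 17, 20, 22, 24, 25, 26, 27], dtype=float)

# Fit a (total expected faults) and b (detection rate) to the data.
(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, faults, p0=(30.0, 0.1))
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")

# Reliability proxy: expected faults still undetected after the last test week.
print(f"expected remaining faults: {a_hat - goel_okumoto(t[-1], a_hat, b_hat):.1f}")
```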


2021 ◽  
Author(s):  
Emmanuel Ayodele ◽  
Ndubuisi Chukuigwe ◽  
Oshogwe Akpogomeh ◽  
Ibrahim Bilal

Abstract Petroleum products need to be stored and transported from various sources before they reach the final consumers, thereby requiring storage tanks. Calibration of storage tanks by dry and wet strapping has evolved over the years with state-of-the-art facilities. Calibration charts are used to determine the volume of fluid in a tank from the height of the petroleum product stored in it. Calculating tank volume by the integration method has many sources of error, which affect the computed volume and cause revenue losses due to inaccurately calibrated tanks. Given the revenue lost to wrong computations or computational errors, a fast, dynamic and cost-effective solution to these computation problems becomes imperative. Since the tank charts are delivered for daily use and fiscalization after the tank strapping process, calculation errors need to be minimized so that petroleum product stocks, which are a function of temperature, density and the volume correction factor, are reported accurately. This paper aims to solve the problem by semi-automating the calculation of the total volume of product in stock. The approach was developed and test-run for a storage facility X. The paper shows how calculations from calibrated tanks can be performed virtually in an Excel spreadsheet and converted into software for effective use, making the percentage error almost zero. The results obtained from this method of computation were error-free and devoid of human error.
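
A minimal sketch of the kind of computation being semi-automated, assuming a hypothetical calibration chart and a volume correction factor supplied from standard temperature/density tables, might look as follows (Python rather than the Excel implementation described in the paper):

```python
import numpy as np

# Hypothetical calibration chart: dip height (cm) -> gross observed volume (litres).
chart_height = np.array([0, 50, 100, 150, 200, 250], dtype=float)
chart_volume = np.array([0, 4100, 8350, 12700, 17150, 21700], dtype=float)

def standard_volume(dip_cm, vcf):
    """Gross standard volume = interpolated gross observed volume x VCF.
    The VCF would normally be looked up from standard tables for the measured
    temperature and density; here it is simply passed in."""
    gov = np.interp(dip_cm, chart_height, chart_volume)  # linear interpolation on the chart
    return gov * vcf

print(standard_volume(dip_cm=137.5, vcf=0.9874))
```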


Axioms ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 156
Author(s):  
Alexander J. Zaslavski

We study the behavior of inexact products of uniformly continuous self-mappings, bounded on bounded sets, of a complete metric space. It is shown that previously established convergence theorems for products of non-expansive mappings continue to hold even in the presence of computational errors.
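
A toy numerical illustration of the flavour of such results, assuming a simple contractive (hence nonexpansive) mapping on the real line and a summable error sequence, is sketched below; it is not the paper's general setting.

```python
import numpy as np

# Iterate T(x) = 0.5*x + 1 (fixed point 2) with additive computational errors e_k.
# When the errors are summable, the inexact orbit still approaches the fixed point,
# mirroring the flavour of convergence results for inexact products.
T = lambda x: 0.5 * x + 1.0
x = 10.0
for k in range(1, 60):
    error = 1.0 / k**2          # summable error sequence (assumption)
    x = T(x) + error
print(x)                         # close to the exact fixed point 2
```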


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Yury S. Osipov ◽  
Vyacheslav I. Maksimov

Abstract A second-order nonlinear differential equation is considered. An algorithm is designed for reconstructing an input from inaccurate measurements of the solution at discrete times. The algorithm, based on constructions from feedback control theory and the theory of ill-posed problems, is stable with respect to informational noise and computational errors.
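
As a crude, generic illustration of input reconstruction from noisy solution samples (not the feedback-control algorithm of the paper), one can smooth the measurements and invert an assumed second-order equation by finite differences; the equation, noise level and smoothing window below are all assumptions.

```python
import numpy as np

# Assumed system: x'' = -x - 0.1*x'**3 + u(t), with u(t) = sin(t) to be recovered.
rng = np.random.default_rng(1)
dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)
u_true = np.sin(t)

# Simulate the "real" system with explicit Euler to obtain measurements of x(t).
x, v, xs = 0.0, 0.0, []
for ti, ui in zip(t, u_true):
    xs.append(x)
    a = -x - 0.1 * v**3 + ui
    x, v = x + dt * v, v + dt * a
xs = np.array(xs) + rng.normal(0.0, 1e-4, size=t.size)   # inaccurate measurements

# Smooth with a moving average (a stand-in for regularization), then recover
# u = x'' + x + 0.1*x'**3 by finite differences.
w = 25
xs_s = np.convolve(xs, np.ones(w) / w, mode="same")
v_est = np.gradient(xs_s, dt)
a_est = np.gradient(v_est, dt)
u_est = a_est + xs_s + 0.1 * v_est**3
print(np.max(np.abs(u_est[200:-200] - u_true[200:-200])))  # rough error away from the edges
```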


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Mostafa Ghadampour ◽  
Donal O’Regan ◽  
Ebrahim Soori ◽  
Ravi P. Agarwal

In this paper, we study the strong convergence of an algorithm for solving the variational inequality problem, extending a recent paper (Thong et al., Numerical Algorithms 78, 1045-1060 (2018)). We weaken and refine some of the conditions of their algorithm and prove convergence of the algorithm in the presence of computational errors. The results are then illustrated with numerical examples in MATLAB, and our algorithm is compared with some other well-known algorithms.
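
For orientation, a sketch of the classic Korpelevich extragradient method for a small monotone variational inequality is given below; the operator, feasible set and step size are assumptions, and this baseline is not the specific algorithm analysed in the paper.

```python
import numpy as np

# Extragradient method for VI(F, C): find x in C with <F(x), y - x> >= 0 for all y in C.
# Here C = [0, 1]^2 and F is an affine monotone operator (both assumed for illustration).
F = lambda x: np.array([[2.0, 1.0], [-1.0, 2.0]]) @ x + np.array([-1.0, 0.5])
proj_C = lambda x: np.clip(x, 0.0, 1.0)

x = np.array([0.9, 0.9])
lam = 0.1                              # step size below 1/L for the Lipschitz constant L
for _ in range(200):
    y = proj_C(x - lam * F(x))         # prediction step
    x = proj_C(x - lam * F(y))         # correction step
print(x)                               # approximate solution of the VI
```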


2021 ◽  
Vol 24 (3) ◽  
pp. 895-922
Author(s):  
Platon G. Surkov

Abstract A specific formulation of a "classical" problem of mathematical analysis, the problem of calculating the derivative of a function, is considered. The purpose of this work is to construct an algorithm for the approximate calculation of a Caputo-type fractional derivative based on methods of control theory. The input data of the algorithm are inaccurately measured function values at sufficiently frequent discrete times. The proposed algorithm rests on two ingredients: a local modification of the Tikhonov regularization method from the theory of ill-posed problems and the Krasovskii extremal shift method from guaranteed control theory, which together ensure stability with respect to informational noise and computational errors. Numerical experiments illustrate the operation of the algorithm.
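
For comparison, a minimal sketch of the standard (non-regularized) L1 discretization of the Caputo derivative from equally spaced samples is shown below; it is a textbook scheme, not the control-theoretic algorithm proposed in the paper.

```python
import numpy as np
from math import gamma

def caputo_l1(f_samples, dt, alpha):
    """L1 scheme for the Caputo derivative of order 0 < alpha < 1.
    f_samples are equally spaced values of f on a grid with step dt."""
    n = len(f_samples)
    c = dt ** (-alpha) / gamma(2.0 - alpha)
    result = np.zeros(n)
    for k in range(1, n):
        j = np.arange(k)                                        # j = 0 .. k-1
        w = (k - j) ** (1.0 - alpha) - (k - j - 1) ** (1.0 - alpha)
        result[k] = c * np.sum(w * np.diff(f_samples[: k + 1]))
    return result

# For f(t) = t, the Caputo derivative of order 0.5 is t^0.5 / Gamma(1.5).
t = np.linspace(0.0, 1.0, 201)
approx = caputo_l1(t, t[1] - t[0], alpha=0.5)
exact = t ** 0.5 / gamma(1.5)
print(np.max(np.abs(approx - exact)))                           # small discretization error
```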


Author(s):  
Jekaterina Aleksejeva ◽  
Sharif Guseynov

In the present paper, on the basis of the theory of inverse and ill-posed problems, an algorithm is proposed that makes it possible to determine unambiguously the stoichiometric coefficients in chemical reaction equations of any type, including redox and acid-base reactions, regardless of whether the resulting system of linear algebraic equations for the desired stoichiometric coefficients is underdetermined (i.e. there are fewer equations than unknowns) or overdetermined (i.e. there are more equations than unknowns). The proposed algorithm is regularized (in the sense of Tikhonov), which ensures that, in a computer implementation, possible computational errors will not render the assembled system of linear algebraic equations unsolvable.
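
A minimal sketch of the underlying linear-algebra set-up, assuming the simple reaction CH4 + O2 -> CO2 + H2O and using a plain SVD nullspace computation in place of the paper's Tikhonov-regularized algorithm:

```python
import numpy as np

# Element-balance matrix. Columns: CH4, O2, CO2, H2O; rows: C, H, O.
# Products enter with minus signs so that A @ coefficients = 0.
A = np.array([
    [1, 0, -1,  0],   # carbon balance
    [4, 0,  0, -2],   # hydrogen balance
    [0, 2, -2, -1],   # oxygen balance
], dtype=float)

_, _, vt = np.linalg.svd(A)
coeffs = vt[-1]                                       # right singular vector of the smallest singular value
coeffs = coeffs / coeffs[np.argmin(np.abs(coeffs))]   # normalise so the smallest coefficient is 1
print(np.round(coeffs, 3))                            # expected proportions 1 : 2 : 1 : 2
```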


2021 ◽  
Vol 3 (1) ◽  
pp. 1-15
Author(s):  
Yuxian Huang ◽  
Ying Zhou ◽  
Yong Li

This research analyzes the types of and reasons for students' mistakes in solving probability and statistics problems, using a qualitative research method. The subjects were 20 students from a senior high school in Guangxi, China. The data were collected through a student diagnostic test, and the students' answers were analyzed using O'Connel's analysis. The results show that misunderstood problems account for the largest proportion of errors, 48.18%; computational errors are next, accounting for 36.36%; and procedural errors are the least frequent, accounting for 15.45%. There are many reasons for these mistakes, so teachers can look for ways to help students overcome them.

