numerical estimate
Recently Published Documents

TOTAL DOCUMENTS: 112 (FIVE YEARS: 23)
H-INDEX: 15 (FIVE YEARS: 2)

Author(s):  
A.P. Liabakh ◽  
O.A. Turchyn ◽  
V.M. Piatkovskyi ◽  
I.V. Kucher

Summary. The assessment of foot and ankle function remains a pressing issue in modern orthopedics. Objective: a comparative qualitative analysis of the most common systems for assessing foot and ankle function. Materials and Methods. The PubMed database was searched from 1946 to 2021, yielding 8898 publications in which assessment systems for foot and ankle function were used. Twelve assessment systems, presented in 5705 publications, were selected for analysis (inclusion criterion: no fewer than 40 publications): the AOFAS scale, VAS, SF-36 EQL, FFI, FAOS, FAAM, FADI, BFS, MOFAQ, FFI-R, the Roles & Maudsley scale, and VAS FA. The analysis considered each system's underlying philosophy (numerical estimate, VAS, Likert scale, patient- or investigator-oriented) and the evidence for its reliability. Results. Most of the analyzed assessment systems meet the reliability criteria (r > 0.8; Cronbach's α ≥ 0.9). For the Roles & Maudsley scale and the VAS FA, reliability has not been established. Validity fluctuates widely. Conclusions. The choice of an assessment system must match the research tasks. Weighing the strengths and weaknesses of the assessment systems allows them to be combined appropriately so as to avoid bias.
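The reliability criterion quoted above (Cronbach's α ≥ 0.9) can be reproduced directly from raw questionnaire responses. A minimal sketch, with hypothetical item scores standing in for real survey data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical scores: 5 respondents x 4 items on a Likert-type scale
scores = [[4, 5, 4, 5],
          [2, 2, 3, 2],
          [5, 5, 5, 4],
          [3, 3, 2, 3],
          [4, 4, 4, 4]]
print(round(cronbach_alpha(scores), 3))  # → 0.946, above the 0.9 threshold
```

Test-retest reliability (the r > 0.8 criterion) would instead be checked with a correlation between two administrations of the same instrument.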


Author(s):  
Y. I. Golub ◽  
F. V. Starovoitov

The goal of the studies described in the paper is to find a quantitative assessment that correlates as strongly as possible with subjective assessments of contrast image quality in the absence of a reference image. A literature analysis yielded 16 functions used for no-reference image quality assessment: BEGH, BISH, BREN, CMO, CURV, FUS, HELM, EBCM, KURT, LAPD, LAPL, LAPM, LOCC, LOEN, SHAR, WAVS. All of them pool local contrast quality values with the arithmetic mean. As an alternative to averaging the local estimates (the mean being one of the two parameters of the normal distribution), it is proposed to use one of the two parameters of the Weibull distribution fitted to the same data: scale or shape. For the experiments, digital images with nonlinear contrast distortion from the publicly available CCID2014 database were used. It contains 15 original images of 768x512 pixels and 655 versions with modified contrast, together with a mean visual quality assessment (Mean Opinion Score, MOS) for each image. Spearman's rank correlation coefficient was used to measure the correspondence between the visual MOS scores and the quantitative measures under study. As a result of the research, a new no-reference quality assessment measure for contrast images is presented: local quality values are calculated with the BREN measure, the resulting set is described by a Weibull distribution, and the scale parameter of that distribution serves as the numerical estimate of image quality. This conclusion is confirmed experimentally: the proposed measure correlates better with the subjective assessments of experts than the other variants.
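The proposed pooling step can be sketched as follows, assuming SciPy's `weibull_min` for the fit and synthetic values standing in for per-patch BREN scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for local BREN contrast values over image patches (hypothetical
# data); the real measure would be computed patch by patch on the test image.
local_quality = rng.weibull(1.8, size=1000) * 0.5

# Mean-based pooling: what the 16 surveyed measures use
mean_score = local_quality.mean()

# Weibull-based pooling: fit shape and scale with location fixed at 0;
# the scale parameter becomes the image-level quality score
shape, loc, scale = stats.weibull_min.fit(local_quality, floc=0)
print(mean_score, shape, scale)
```

In the full pipeline, `stats.spearmanr(per_image_scale_scores, mos)` would then measure agreement of the pooled scores with the MOS values.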


2021 ◽  
Vol 81 (8) ◽  
Author(s):  
G. Colangelo ◽  
F. Hagelstein ◽  
M. Hoferichter ◽  
L. Laub ◽  
P. Stoffer

Abstract. We reassess the impact of short-distance constraints for the longitudinal component of the hadronic light-by-light amplitude on the anomalous magnetic moment of the muon, $a_\mu = (g-2)_\mu/2$, by comparing different solutions that have recently appeared in the literature. In particular, we analyze the relevance of the exact axial anomaly and its impact on $a_\mu$ and conclude that it remains rather limited. We show that all recently proposed solutions agree well within uncertainties on the numerical estimate of the impact of short-distance constraints on $a_\mu$, despite differences in the concrete implementation. We also take into account the recently calculated perturbative corrections to the massless quark loop to update our estimate and outline the path towards future improvements.


Author(s):  
Mykola Kuznietsov ◽  
Olha Lysenko ◽  
Oleksandr Melnyk

The paper is devoted to solving the balancing problem in local power systems with renewable energy sources. For a power-system optimization problem whose operation depends on random weather factors, a convex parameter-optimization or optimal-control problem was solved using controlled generation for each individual realization of the random process, treated as a deterministic function. The results were then processed statistically over the set of random realizations: distribution density functions of the target quantity were constructed, followed by estimates of expected values and their confidence intervals. The process describing current deviations of generated power from the mean value is modelled as a discrete stochastic model with the properties of an Ornstein-Uhlenbeck process, which allowed the duration of the unit interval to be varied, in particular to match databases of operating facilities with the temporal discreteness inherent in their monitoring systems. The random components are investigated and modelled, while the average values are considered deterministic and are provided within a predictable schedule, also using traditional energy sources (the centralised power grid). A mathematical model of the combined operation of renewable energy sources in a system with variable load, an electric storage device and an auxiliary regulating generator is implemented as a scheme of sequential generation and consumption models and random processes describing the current state of the power system. The operation of the electricity storage depends on the processes mentioned, but in the full balance it appears together with generation or load losses, which are cumulative sums of unbalanced power and may follow a distribution other than the normal one. However, these processes are internal, relating to the redistribution of energy within a generation system whose capacity, by the relevant criteria, is generally described satisfactorily by a normal law. Under this condition, it is possible to estimate the probability of different circumstances, over- or under-generation, that is, to give a numerical estimate of the reliability of energy supply.
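The balancing argument can be illustrated with a minimal simulation: an exact discrete-time Ornstein-Uhlenbeck recursion for the power deviations, and the empirical probability that they exceed a regulating reserve. All parameters below are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
theta, sigma, dt = 0.5, 0.2, 1.0   # hypothetical reversion rate, noise, step
n_steps, n_paths = 1000, 200

# Exact discrete-time update for an Ornstein-Uhlenbeck process:
# x_{k+1} = a * x_k + s * eps, with a and s matching the continuous process
a = np.exp(-theta * dt)
s = sigma * np.sqrt((1 - a**2) / (2 * theta))
x = np.zeros((n_paths, n_steps))
for k in range(1, n_steps):
    x[:, k] = a * x[:, k - 1] + s * rng.standard_normal(n_paths)

reserve = 0.3  # hypothetical regulating capacity, in per-unit power
# Share of time the deviation exceeds the reserve, i.e. over-/under-generation
p_shortfall = (np.abs(x) > reserve).mean()
print(p_shortfall)
```

Under the normal law mentioned above, the same probability follows in closed form from the stationary standard deviation sigma / sqrt(2 * theta), which the simulation should reproduce.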


Risks ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 40
Author(s):  
Jiro Hodoshima ◽  
Toshiyuki Yamawake

We examine how sensitive the new performance indexes incorporating higher moments and disaster risk are to disaster risk. These indexes are the Aumann-Serrano performance index and the Foster-Hart performance index proposed by Kadan and Liu; both provide evaluations sensitive to the underlying risk. We show, by numerical and empirical examples, how sensitive these indexes are to disaster risk. Although the literature regards them as either quite sensitive or excessively sensitive to disaster risk or maximum loss, a regression analysis of the indexes on summary statistics shows that they are in fact not excessively sensitive to maximum loss in representative stock data that contain disastrous observations. The numerical estimate of the Foster-Hart performance index proves effective in conveying performance. Our analysis suggests these indexes can handle a variety of empirical data containing quite disastrous observations.
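As a rough illustration of the sensitivity in question, the Aumann-Serrano riskiness (whose reciprocal underlies the Kadan-Liu performance index, on our reading) can be computed by root-finding. The return samples below are hypothetical, not the paper's data:

```python
import numpy as np
from scipy.optimize import brentq

def as_riskiness(g):
    """Aumann-Serrano riskiness: the unique R > 0 with E[exp(-g/R)] = 1.
    Defined for gambles with positive mean and some chance of loss."""
    g = np.asarray(g, dtype=float)
    f = lambda R: np.mean(np.exp(-g / R)) - 1.0
    return brentq(f, 1e-3, 1e3)   # bracket chosen to keep exp() finite

# Hypothetical monthly excess returns; the second sample adds one disaster
base = np.array([0.05, 0.02, -0.01, 0.04, 0.03, -0.02, 0.06, 0.01])
disaster = np.append(base, -0.15)

# Higher riskiness means worse performance (performance index = 1 / R)
print(1 / as_riskiness(base), 1 / as_riskiness(disaster))
```

The single disastrous observation inflates the riskiness sharply, which is the sensitivity the abstract quantifies by regression.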


Author(s):  
Л. М. Березін

Purpose: to develop a methodology for the operational assessment of how innovative changes to the technical and operational characteristics of individual mechanisms and systems affect the reliability of the automatic sock machine, based on a posteriori information about failures in production. Methodology: the study uses search, description, analogy and information analysis to audit the set of possible solutions; the fundamentals of reliability theory; methods for estimating reliability indicators from experimental data; matrix theory; elements of numerical methods; and the methodology of a posteriori reliability analysis of structurally complex technical systems. Findings: a calculation algorithm and mathematical support are presented for the operational assessment of how technical and operational changes to one mechanism affect the reliability of the automatic sock machine as a whole under uncertainty about failures and their sources. The advantages of the proposed approach over the traditional one are shown: it reduces design time while ensuring the required quality and minimizes costs by limiting tests and calculations. A numerical estimate is obtained of the degree to which design changes in the knitting mechanism affect the reliability of the automatic sock machine: increasing the average time to failure of the knitting mechanism to 24.82 hours raises this indicator for the machine as a whole to 1.24 hours. Originality: the work further develops the theory and methodology of reliability analysis of automatic sock machines at the design or modernization stages, for a controlled variety of innovative mechanism options with limited failure information and with functional and structural relationships preserved.
Practical value: a method is proposed for modeling the reliability of the automatic sock machine from the innovative solutions of its mechanisms, minimizing the costs of additional tests and calculations. The results of the above reliability analysis confirmed that the calculations are sufficiently accurate at the pre-design stage, which allows the method to be used for other knitting machines.
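How one mechanism's reliability propagates to the machine as a whole can be sketched under a simple assumption the abstract does not state: a series system of independent components with exponential failure times, where failure rates add. All MTBF values below are hypothetical:

```python
# Series system of independent, exponentially failing components:
# the system failure rate is the sum of component rates, MTBF = 1 / rate.
def system_mtbf(component_mtbfs):
    return 1.0 / sum(1.0 / m for m in component_mtbfs)

# Hypothetical mean-time-to-failure values (hours) for machine units
before = [5.0, 8.0, 12.0, 20.0]       # knitting mechanism first
after  = [24.82, 8.0, 12.0, 20.0]     # knitting mechanism redesigned

print(system_mtbf(before), system_mtbf(after))
```

The weakest component dominates the sum of rates, which is why a large improvement in one mechanism yields a much smaller gain at the machine level, as in the abstract's 24.82 h versus 1.24 h figures.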


2021 ◽  
Vol 80 (3) ◽  
pp. 132-138
Author(s):  
I.A. Novakov ◽  
◽  
E.S. Bochkarev ◽  
Minh Thuy Dang ◽  
O.O. Tuzhikov ◽  
...  

The authors compared ozone resistance (OR) and weather resistance (WR) results obtained under laboratory conditions and under natural tropical-climate conditions. The samples were manufactured from unsaturated rubbers (nitrile butadiene rubbers BNKS-18AN, BNKS-28AN, BNKS-40AN and SKN-26PVC-30, styrene butadiene rubber SKMS-30 ARKM, isoprene rubber SKI-3 and butadiene rubber SKD-ND). None of the samples were filled with antioxidants or antiozonants. The curing process at 160 °C and the total crosslink density (νt) were studied using a MonTech MDR 3000 Professional rotorless rheometer. The physical and mechanical properties of the vulcanizates were determined on a Zwick/Roell Z010 tensile testing machine at a test speed of 500 mm/min; hardness and tear resistance were also determined. The numerical estimate of νt was obtained from data on the dynamic shear modulus. The density of chemical bonds νch was determined from equilibrium swelling in toluene according to the Flory-Rehner equation. Full-scale climatic tests in the tropical climate of the south of the Republic of Vietnam were carried out for 9 months in accordance with GOST 9.066-76 at initial deformations of 10, 20, 30 and 40 %. The sample stand was oriented south at a 45-degree angle. Cracks forming on the rubber surface were recorded by photography, and the time before the onset of fracture Tof and the time until the appearance of the first cracks Tfc were determined. Samples of rubbers with high OR in laboratory tests showed the best WR, and a correlation of νch with Tof and Tfc was revealed. The results show that the express method for assessing the OR of rubbers can be used in the development of formulations for tropical climates.
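The swelling-based estimate of νch can be sketched with the affine-network form of the Flory-Rehner equation; the volume fraction and interaction parameter below are hypothetical placeholders, not the paper's data:

```python
import math

def flory_rehner_crosslink_density(phi, chi, V1):
    """Crosslink density (mol/cm^3) from equilibrium swelling, affine-network
    Flory-Rehner form. phi: polymer volume fraction in the swollen gel,
    chi: polymer-solvent interaction parameter, V1: solvent molar volume."""
    num = -(math.log(1 - phi) + phi + chi * phi**2)
    den = V1 * (phi ** (1 / 3) - phi / 2)
    return num / den

# Hypothetical inputs: toluene V1 ~ 106.3 cm^3/mol, chi ~ 0.39, phi ~ 0.25
nu = flory_rehner_crosslink_density(0.25, 0.39, 106.3)
print(nu)
```

A higher equilibrium polymer fraction (less swelling) yields a higher νch, the quantity the abstract correlates with Tof and Tfc.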


Author(s):  
Timofey Samsonov ◽  
Olga Yakimova

The paper reveals dependencies between the character of a line's shape and the combination of constraining metrics that allows a comparable reduction in detail by different geometric simplification algorithms. The study was conducted in the form of an expert survey. Geometrically simplified versions of three coastline fragments were prepared using three simplification algorithms: Douglas-Peucker, Visvalingam-Whyatt and Li-Openshaw. Simplification was constrained by a similar value of the modified Hausdorff distance (linear offset) and a similar reduction in the number of line bends (compression of the number of detail elements). Respondents were asked to give a numerical estimate of the detail of each image, based on personal perception, on a scale from one to ten. The survey showed that lines perceived by respondents as having similar detail can be obtained by different algorithms; however, the choice of the constraining metric depends on the nature of the line. Simplification of lines with a shallow hierarchy of small bends is most effectively constrained by linear offset. As line complexity increases, the compression metric for the number of detail elements (bends) gains influence in the perception of detail. For one of the three lines, the best result was consistently obtained with a weighted combination of the analyzed metrics as a constraint. None of the survey results showed that reducing the number of bends alone is an effective characteristic of a similar reduction in detail. The linear offset metric was therefore found to be more indicative when describing changes in line detail.
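For readers unfamiliar with the first of the three algorithms, a compact sketch of Douglas-Peucker simplification; the tolerance plays a role analogous to the linear-offset constraint discussed above:

```python
def douglas_peucker(points, epsilon):
    """Recursively keep points farther than epsilon from the chord."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # perpendicular distance from each interior point to the chord
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__)
    if dists[i_max] <= epsilon:
        return [points[0], points[-1]]          # chord is close enough
    split = i_max + 1                            # recurse around the far point
    left = douglas_peucker(points[: split + 1], epsilon)
    right = douglas_peucker(points[split:], epsilon)
    return left[:-1] + right

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 3), (4, 3.1), (5, 3)]
print(douglas_peucker(line, 0.5))  # → [(0, 0), (2, -0.1), (3, 3), (5, 3)]
```

Visvalingam-Whyatt instead ranks points by the area of the triangle they form with their neighbours, which is why the two algorithms respond differently to the same detail constraint.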


2021 ◽  
Vol 347 ◽  
pp. 00012
Author(s):  
Aravind Arunakirinathar ◽  
Jean-Francois de la Beaujardiere ◽  
Michael Brooks

In order to assess the capabilities of South Africa as a launch site for commercial satellites, an optimal control solver was developed. The solver uses direct Hermite-Simpson collocation and can be applied to a general optimal control problem. Analytical first-derivative information was obtained for the direct Hermite-Simpson collocation method; typically, a numerical estimate of this derivative information is used instead. This paper presents the solver algorithm and the formulation and derivation of the analytical first-derivative information for this approach. A sample problem is provided to validate the solver.
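For reference, the defect constraints that a direct Hermite-Simpson solver typically enforces on each interval $[t_k, t_{k+1}]$ of length $h$ (standard textbook form, not taken from the paper):

```latex
x_{k+1} - x_k = \frac{h}{6}\left(f_k + 4 f_{k+1/2} + f_{k+1}\right),
\qquad
x_{k+1/2} = \frac{x_k + x_{k+1}}{2} + \frac{h}{8}\left(f_k - f_{k+1}\right),
```

where $f_j = f(x_j, u_j, t_j)$ is the dynamics evaluated at a collocation point. The analytical first-derivative information mentioned above consists of the partial derivatives of these defects with respect to the states and controls, which otherwise would be approximated by finite differences.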


2020 ◽  
Vol 2 (12) ◽  
Author(s):  
C. Mascia ◽  
P. Moschetta

Abstract. This paper deals with the numerical approximation of a stick–slip system, known in the literature as the Burridge–Knopoff model, proposed as a simplified description of the mechanisms generating earthquakes. The modelling of friction is crucial, and we consider here the so-called velocity-weakening form. The aim of the article is twofold. Firstly, we establish the effectiveness of the classical Predictor–Corrector strategy; to our knowledge, such an approach has never been applied to the model under investigation. In the first part, we determine the reliability of the proposed strategy by comparing its results, on a collection of significant computational tests ranging from the simplest configuration to more complicated (and more realistic) ones, with the numerical outputs obtained by different algorithms. Particular emphasis is laid on the Gutenberg–Richter statistical law, a classical empirical benchmark for seismic events. The second part is inspired by the result of Muratov (Phys Rev 59:3847–3857, 1999) providing evidence for the existence of traveling solutions of a corresponding continuum version of the Burridge–Knopoff model. In this direction, we aim to find an appropriate estimate for the crucial quantity describing the wave, namely its propagation speed. To this aim, motivated by LeVeque and Yee (J Comput Phys 86:187–210, 1990), a paper dealing with the different topic of conservation laws, we apply a space-averaged quantity (which depends on time) to determine asymptotically an explicit numerical estimate of the velocity, which we name the LeVeque–Yee formula after the authors of the original paper. As expected, due to the inherent discontinuity of the process, it is not possible to attach a specific propagation speed to a single seismic event in the Burridge–Knopoff model. More regularity is expected by performing temporal averaging in the spirit of the Cesàro mean. In this direction, we observe numerical evidence of the almost convergence of the wave speeds for the Burridge–Knopoff model of earthquakes.
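The space-averaged speed estimate can be sketched on a synthetic traveling front. This is our reconstruction of the idea, not the paper's code; the profile and its speed are made up:

```python
import numpy as np

def front_speed(u_old, u_new, dx, dt, u_left, u_right):
    """Speed estimate in the LeVeque-Yee spirit: time derivative of the
    spatial integral of a traveling front, divided by the jump between
    its far-field states."""
    integral_change = (u_new.sum() - u_old.sum()) * dx
    return integral_change / (dt * (u_left - u_right))

# Synthetic tanh front moving right at speed 2 (hypothetical data)
x = np.linspace(-20, 20, 2001)
dx = x[1] - x[0]
profile = lambda t: 0.5 * (1 - np.tanh(x - 2.0 * t))  # u_left = 1, u_right = 0
speed = front_speed(profile(0.0), profile(0.1), dx, 0.1, 1.0, 0.0)
print(speed)  # ≈ 2.0
```

For the Burridge–Knopoff model the instantaneous estimate jumps between events, which is why the temporal (Cesàro-type) averaging mentioned above is needed.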

