true values
Recently Published Documents

Total documents: 382 (last five years: 120)
H-index: 29 (last five years: 4)

Author(s): K. N. Danilovskii, G. N. Loginov

This article discusses a new approach to processing lateral scanning logging-while-drilling data based on a combination of three-dimensional numerical modeling and convolutional neural networks. We prepared a dataset for training the neural networks; it contains realistic synthetic resistivity images and geoelectric layer boundary layouts obtained from the true values of their spatial orientation parameters. Using convolutional neural networks, two algorithms were developed and implemented in software: suppression of random noise and detection of layer boundaries in the resistivity images. The developed algorithms allow fast and accurate processing of large amounts of data, and, because the networks' architectures contain no fully connected layers, resistivity images of arbitrary length can be processed.
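The arbitrary-length property follows directly from the absence of fully connected layers. Below is a minimal sketch of such a fully convolutional network, assuming PyTorch; the channel counts, kernel sizes and single-channel input are illustrative and not the authors' published architecture.

```python
# Minimal sketch (not the authors' architecture): a fully convolutional network
# for denoising or boundary detection on resistivity images. Because there are
# no fully connected layers, inputs of any length pass through unchanged in size.
import torch
import torch.nn as nn

class FullyConvImageNet(nn.Module):
    def __init__(self, in_channels=1, out_channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # x: (batch, channels, depth, azimuth); any depth (image length) is accepted
        return self.body(x)

# Resistivity images of different lengths go through the same network.
net = FullyConvImageNet()
short_img = net(torch.randn(1, 1, 128, 64))
long_img = net(torch.randn(1, 1, 1024, 64))
print(short_img.shape, long_img.shape)  # (1, 1, 128, 64) and (1, 1, 1024, 64)
```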


2022, Vol 18 (1), pp. e1009610
Author(s): Arno Strouwen, Bart M. Nicolaï, Peter Goos

Dynamic models based on non-linear differential equations are increasingly being used in many biological applications. Highly informative dynamic experiments are valuable for the identification of these dynamic models. The storage of fresh fruit and vegetables is one such application where dynamic experimentation is gaining momentum. In this paper, we construct optimal O2 and CO2 gas input profiles to estimate the respiration and fermentation kinetics of pear fruit. The optimal input profiles, however, depend on the true values of the respiration and fermentation parameters. Locally optimal design of input profiles, which uses a single initial guess for the parameters, is the traditional method to deal with this issue. This method, however, is very sensitive to the initial values selected for the model parameters. Therefore, we present a robust experimental design approach that can handle uncertainty on the model parameters.
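The contrast between locally optimal and robust design can be made concrete with a toy example. The sketch below replaces the full dynamic pear-fruit model with a simple Michaelis-Menten respiration rate and averages a D-optimality criterion over prior draws of the parameters; the model, prior ranges, bounds and the use of differential_evolution are illustrative assumptions, not the paper's actual setup.

```python
# Robust (prior-averaged) D-optimal choice of O2 set-points for a toy
# Michaelis-Menten respiration model r(O2) = Vmax*O2/(Km+O2).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

def fisher_information(o2_levels, vmax, km, sigma=1.0):
    """FIM of (Vmax, Km) for measurements of r at the chosen O2 levels."""
    x = np.asarray(o2_levels, float)
    J = np.column_stack([x / (km + x),                  # dr/dVmax
                         -vmax * x / (km + x) ** 2])    # dr/dKm
    return J.T @ J / sigma ** 2

def neg_robust_criterion(o2_levels, prior_draws):
    # Average D-optimality (log det FIM) over draws from the parameter prior,
    # instead of evaluating it at a single initial guess (local design).
    vals = [np.linalg.slogdet(fisher_information(o2_levels, v, k))[1]
            for v, k in prior_draws]
    return -np.mean(vals)

# Prior uncertainty on (Vmax, Km) replaces a single initial guess.
prior_draws = np.column_stack([rng.uniform(5, 15, 50),   # Vmax
                               rng.uniform(1, 10, 50)])  # Km

# Choose four O2 set-points in (0, 21] kPa that are informative on average.
result = differential_evolution(neg_robust_criterion, bounds=[(0.1, 21)] * 4,
                                args=(prior_draws,), seed=0)
print("robust O2 set-points:", np.round(result.x, 2))
```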


Author(s): Jose María Abril

Lead-210 from natural atmospheric fallout is widely used in multidisciplinary studies to date recent sediments. Some of the 210Pb-based dating models can produce historical records of sediment accumulation rates (SAR) and initial activity concentrations. The former have been used extensively to track past changes in sedimentary conditions. The two physical magnitudes are affected differently by model errors (those arising from the partial or null fulfilment of some model assumptions). This work aims at assessing the effects of model errors on SAR and on initial activity concentrations in the CRS, CS, PLUM and TERESA dating models, due to random variability in 210Pb fluxes, which is a usual sedimentary condition. Synthetic cores are used as virtual laboratories for this goal. Independently of the model choice, SARs are strongly affected by model errors, resulting in some large and spurious deviations from the true values. This calls into question their general use for tracking past environmental changes. Initial activity concentrations are less sensitive to model errors, and their trends of change with time may reflect real changes in sedimentary conditions, as shown with some real cores from varved sediments.
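For reference, the quantities compared here are tied together by the standard CRS (constant rate of supply) relations t(z) = ln(A(0)/A(z))/lambda and SAR = lambda*A(z)/C(z), where A(z) is the unsupported 210Pb inventory below depth z and C(z) the activity concentration at z. The sketch below is a minimal, generic CRS calculation on a toy synthetic core; it is not the paper's simulation machinery, and the toy profile and units are illustrative.

```python
# Minimal CRS age/SAR calculation on a toy core (illustrative only).
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3  # 210Pb decay constant, 1/yr

def crs_ages_and_sar(mass_depth, activity):
    """mass_depth: cumulative dry mass at each slice bottom (increasing).
    activity: unsupported 210Pb concentration per unit mass in each slice."""
    mass_depth = np.asarray(mass_depth, float)
    activity = np.asarray(activity, float)
    thickness = np.diff(np.concatenate([[0.0], mass_depth]))
    inventory_per_slice = activity * thickness
    total = inventory_per_slice.sum()
    below = total - np.cumsum(inventory_per_slice)   # inventory below each slice
    # (in practice the inventory below the deepest slice must be estimated;
    #  here the toy core is treated as complete)
    ages = np.log(total / np.maximum(below, 1e-12)) / LAMBDA_PB210
    sar = LAMBDA_PB210 * below / activity            # mass accumulation rate
    return ages, sar

# Toy core: exponentially decaying activity over a uniform mass grid.
mass_depth = np.linspace(0.5, 10, 20)
act = 200 * np.exp(-0.4 * mass_depth)
ages, sar = crs_ages_and_sar(mass_depth, act)
print(np.round(ages[:5], 1), np.round(sar[:5], 3))
```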


2022, pp. 096228022110651
Author(s): Mohammed Baragilly, Brian Harvey Willis

Tailored meta-analysis uses setting-specific knowledge of the test positive rate and disease prevalence to constrain the possible values for a test's sensitivity and specificity. The constrained region is used to select the studies relevant to the setting for meta-analysis using an unconstrained bivariate random effects model (BRM). However, sometimes there may be no studies to aggregate, or the summary estimate may lie outside the plausible or "applicable" region. These shortcomings may potentially be overcome by incorporating the constraints into the BRM to produce a constrained model. Using a penalised likelihood approach, we developed an optimisation algorithm based on co-ordinate ascent and Newton-Raphson iteration to fit a constrained bivariate random effects model (CBRM) for meta-analysis. Using numerical examples based on simulation studies and real datasets, we compared its performance with the BRM in terms of bias, mean squared error and coverage probability. We also determined the 'closeness' of the estimates to their true values using the Euclidean and Mahalanobis distances. The CBRM produced estimates which in the majority of cases had lower absolute mean bias and greater coverage probability than the BRM. The estimated sensitivities and specificities for the CBRM were, in general, closer to the true values than those of the BRM. For the two real datasets, the CBRM produced estimates which were in the applicable region, in contrast to the BRM. When combining setting-specific data with test accuracy meta-analysis, a constrained model is more likely to yield a plausible estimate of the sensitivity and specificity in the practice setting than an unconstrained model.
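The setting-specific constraint at the heart of this approach is the identity p = prev*Se + (1 - prev)*(1 - Sp) linking the test positive rate p, the prevalence and the test accuracy. The sketch below only checks whether a summary (Se, Sp) estimate lies in the "applicable" region for given setting intervals; the interval values are illustrative, and this is not the CBRM fitting algorithm itself.

```python
# Check whether a (sensitivity, specificity) pair is compatible with a setting,
# i.e. whether the positive rate it implies can fall inside the setting's
# interval for some plausible prevalence (illustrative intervals).
def in_applicable_region(se, sp, prev_interval, pos_rate_interval):
    prev_lo, prev_hi = prev_interval
    # implied p is linear in prevalence, so its extremes occur at the endpoints
    implied = [prev * se + (1 - prev) * (1 - sp) for prev in (prev_lo, prev_hi)]
    lo, hi = min(implied), max(implied)
    return not (hi < pos_rate_interval[0] or lo > pos_rate_interval[1])

# Screen candidate summary estimates against a hypothetical practice setting.
for se, sp in [(0.90, 0.85), (0.60, 0.99), (0.95, 0.50)]:
    ok = in_applicable_region(se, sp, prev_interval=(0.05, 0.15),
                              pos_rate_interval=(0.10, 0.20))
    print(f"Se={se:.2f}, Sp={sp:.2f} -> applicable: {ok}")
```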


Entropy, 2021, Vol 24 (1), pp. 73
Author(s): Dragana Bajić, Nina Japundžić-Žigon

Approximate and sample entropies are acclaimed tools for quantifying the regularity and unpredictability of time series. This paper analyses the causes of their inconsistencies. It is shown that the major problem is the coarse quantization of matching probabilities, which causes a large error between their estimated and true values. The error distribution is symmetric, so in sample entropy, where matching probabilities are summed directly, the errors cancel each other. In approximate entropy, the errors accumulate, because the sums involve logarithms of the matching probabilities. Increasing the time-series length increases the number of quantization levels, and the entropy errors disappear in both approximate and sample entropy. The distribution of the time series also affects the errors: if it is asymmetric, the matching probabilities are asymmetric as well, so the matching-probability errors are no longer mutually cancelled and cause a persistent entropy error. Contrary to the accepted opinion, the influence of self-matching is marginal, as it merely shifts the error distribution along the error axis by one quantization step of the matching probability. Artificially lengthening the time series by interpolation, on the other hand, induces a large error, because the interpolated samples are statistically dependent and destroy the level of unpredictability inherent to the original signal.
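For concreteness, here is a compact reference implementation of the two estimators, using the usual Chebyshev distance and a tolerance r expressed as a fraction of the standard deviation; the choices of m, r and the test signal are illustrative.

```python
# Reference implementations of approximate and sample entropy.
import numpy as np

def _count_matches(x, m, r, include_self):
    templates = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
    matches = d <= r
    if not include_self:
        np.fill_diagonal(matches, False)
    return matches.sum(axis=1)

def approximate_entropy(x, m=2, r=0.2):
    x = np.asarray(x, float)
    r *= x.std()
    def phi(mm):
        # self-matches included, logarithm taken per template, then averaged
        counts = _count_matches(x, mm, r, include_self=True)
        return np.mean(np.log(counts / (len(x) - mm + 1)))
    return phi(m) - phi(m + 1)

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, float)
    r *= x.std()
    # matching pairs summed directly, self-matches excluded
    B = _count_matches(x[:-1], m, r, include_self=False).sum()
    A = _count_matches(x, m + 1, r, include_self=False).sum()
    return -np.log(A / B)

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
print(approximate_entropy(x), sample_entropy(x))
```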


2021
Author(s): Rahul Bhui, Yang Xiang

The attraction effect occurs when the presence of an inferior option (the decoy) increases the attractiveness of the option that dominates it (the target). Despite its prominence in behavioral science, recent evidence points to the puzzling existence of the opposite phenomenon—a repulsion effect. In this paper, we formally develop and experimentally test a normative account of the repulsion effect. Our theory is based on the idea that the true values of options are uncertain and must be inferred from available information, which includes the properties of other options. A low-value decoy can signal that the target also has low value when both are believed to be generated by a similar process. We formalize this logic using a hierarchical Bayesian cognitive model that makes predictions about how the strength of the repulsion effect should vary with statistical properties of the decision problem. This theory may help account for several documented phenomena linked to the repulsion effect across both economic and perceptual decision making, as well as new experimental data. Our results shed light on the key drivers of context-dependent judgment across multiple domains and sharpen our understanding of when decoys can be detrimental.
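A toy conjugate-Gaussian version of this inference illustrates the mechanism: if the target's and decoy's values share a latent category mean, then a low decoy observation drags the posterior estimate of the target's value down. The means and variances below are illustrative, and this is a sketch of the intuition only, not the paper's full hierarchical Bayesian cognitive model.

```python
# Toy hierarchical-Gaussian inference: target and decoy values share a latent
# category mean, so the decoy observation informs the target's estimated value.
import numpy as np

def posterior_target_value(obs_target, obs_decoy, mu0=5.0,
                           tau2=4.0, sigma_v2=1.0, sigma_o2=1.0):
    # mu ~ N(mu0, tau2); value_k = mu + N(0, sigma_v2); obs_k = value_k + N(0, sigma_o2)
    cov_obs = np.array([[tau2 + sigma_v2 + sigma_o2, tau2],
                        [tau2, tau2 + sigma_v2 + sigma_o2]])
    cov_val_obs = np.array([tau2 + sigma_v2, tau2])
    obs = np.array([obs_target, obs_decoy])
    # conditional mean of the target's value given both observations
    return mu0 + cov_val_obs @ np.linalg.solve(cov_obs, obs - mu0)

# Same target observation, decoy observed low vs. high:
print(posterior_target_value(6.0, obs_decoy=2.0))  # low-value decoy pulls the estimate down
print(posterior_target_value(6.0, obs_decoy=8.0))  # high-value decoy pulls it up
```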


2021
Author(s): Ye Xiaoming

In measurement practice, the residuals in least-squares adjustment usually show various abnormal discrete distributions, including outliers, which hinders the optimization of the final measured values. Starting from the physical mechanism behind the dispersion and outliers of repeated observation errors, this paper puts forward the error-correction idea of using an approximate function model of the error to approach the actual function model of the error step by step, gives a new theoretical method for optimizing the final measured values, and demonstrates the effectiveness of the algorithm by its ability to recover the true values. This new idea is expected to be the ultimate answer to robust estimation theory.
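The stepwise refinement of the error model is described above only conceptually. For orientation, the sketch below shows a standard, generic scheme in a similar spirit, iteratively reweighted least squares with Huber weights; it is explicitly not the method proposed in the paper, just a familiar point of comparison for the idea of iteratively adjusting how residuals and outliers are treated.

```python
# Generic robust fitting via iteratively reweighted least squares (IRLS)
# with Huber weights; shown for comparison only, not the paper's method.
import numpy as np

def irls_huber(A, y, k=1.345, n_iter=20):
    x = np.linalg.lstsq(A, y, rcond=None)[0]           # ordinary LS start
    for _ in range(n_iter):
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
        u = np.abs(r) / (k * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)           # Huber weights downweight outliers
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted normal equations
    return x

rng = np.random.default_rng(2)
A = np.column_stack([np.ones(50), np.linspace(0, 10, 50)])
y = A @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(50)
y[::10] += 5.0                                         # inject outliers
print(irls_huber(A, y))                                # close to the true [1, 2]
```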


Author(s): Valerii Dmitrienko, Sergey Leonov, Mykola Mezentsev

The idea of Belknap's four-valued logic is that modern computers should function normally not only with true values of the input information, but also under conditions of inconsistent and incomplete information. Belknap's logic introduces four truth values: T (true), F (false), N (none: neither true nor false), B (both: both true and false). For convenience, these truth values are denoted (1, 0, n, b). Belknap's logic can be used to obtain estimates of proximity measures for discrete objects, for which the Jaccard-Needham, Russell-Rao, Sokal-Michener, Hamming and other functions are used. It then becomes possible to assess the proximity, recognition and classification of objects under uncertainty, when the truth values are taken from the set (1, 0, n, b). Based on the architecture of the Hamming neural network, neural networks have been developed that allow calculating the distances between objects described using the truth values (1, 0, n, b).
Keywords: four-valued Belknap logic, Belknap computer, proximity assessment, recognition and classification, proximity function, neural network.
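As an illustration of the kind of proximity computation described, the sketch below defines a Hamming-style distance over attribute vectors with values in (1, 0, n, b). The specific per-pair penalties are an assumption chosen for illustration, not the weighting realised in the authors' neural networks.

```python
# Hamming-style distance between objects with four-valued attributes (1, 0, n, b).
from itertools import product

VALUES = ("1", "0", "n", "b")

def pair_distance(a, b):
    """Distance between two four-valued attributes (illustrative weighting)."""
    if a == b:
        return 0.0
    if {a, b} == {"1", "0"}:
        return 1.0          # definite contradiction
    return 0.5              # any pair involving n (no information) or b (both)

def four_valued_hamming(x, y):
    """Hamming-style distance between two equal-length objects."""
    assert len(x) == len(y)
    return sum(pair_distance(a, b) for a, b in zip(x, y))

# Pairwise distance table over the four truth values, then an object example.
table = {(a, b): pair_distance(a, b) for a, b in product(VALUES, repeat=2)}
print(table[("1", "n")], table[("1", "0")])   # 0.5 1.0
print(four_valued_hamming("10nb1", "11nb0"))  # 2.0
```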


2021, Vol 3 (4), pp. 58-61
Author(s): Alesia Ivanovna Gurchenko

The aim of the research is to comprehend the folklore traditions embodied in art as an antithesis to the aesthetics of the postmodern era. The author puts forward a hypothesis according to which, against the background of a change in the sociocultural paradigm from modernity to postmodernism, folklorism acted as a tool for returning to genuine values based on centuries-old authentic traditions. Folklorism is considered in the article as a phenomenon that holds the potential of the collective memory of the people, allowing a person to recognize himself as part of a single nation with deep historical roots and centuries-old cultural traditions. Research methods: folklorism was comprehended as an antithesis to the dominant sociocultural paradigms using the scientific principle of historicism, as well as the methods of synchronous and diachronic analysis. Results: another surge of interest in authenticity against the background of the affirmation of postmodern ideas is presented as one example of a cyclical return to true values in art. It is concluded that the analyzed vivid and self-sufficient artistic phenomenon has, over more than two centuries of its existence, repeatedly acted as a mechanism for smoothing out contradictions in the sociocultural model of society's development, thereby creating stable axiological foundations in art.


2021, Vol 9 (12), pp. 1461
Author(s): Jose M. Gonzalez-Ondina, Lewis Sampson, Georgy I. Shapiro

Data assimilation methods are an invaluable tool for operational ocean models. These methods are often based on a variational approach and require knowledge of the spatial covariances of the background errors (differences between the numerical model and the true values) and the observation errors (differences between true and measured values). Since the true values are never known in practice, the error covariance matrices, which contain the values of the covariance functions at different locations, are estimated approximately. Several methods have been devised to compute these matrices; one of the most widely used is the one developed by Hollingsworth and Lönnberg (H-L). This method requires binning (combining) the data points separated by similar distances, computing covariances in each bin, and then finding a best-fit covariance function. While being a helpful tool, the H-L method has its limitations. We have developed a new mathematical method for computing the background and observation error covariance functions and therefore the error covariance matrices. The method uses functional analysis, which makes it possible to overcome some shortcomings of the H-L method, for example the assumption of statistical isotropy. It also eliminates the intermediate steps used in the H-L method, such as binning the innovations (differences between observations and the model) and computing innovation covariances for each bin, before the best-fit curve can be found. We show that the new method works in situations where the standard H-L method experiences difficulties, especially when observations are scarce. It gives a better estimate than the H-L method in a synthetic idealised case where the true covariance function is known. We also demonstrate that in many cases the new method allows the use of the separable convolution algorithm to increase the computational speed significantly, by up to an order of magnitude. The Projection Method (PROM) also allows computing 2D and 3D covariance functions in addition to the standard 1D case.
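For context, the classical H-L procedure that the new Projection Method replaces can be sketched in a few lines: bin innovation products by separation distance, fit an isotropic covariance model, and split the zero-separation variance into background and observation parts. Everything below (the synthetic innovations, the Gaussian-shaped covariance model, the bin widths) is an illustrative assumption; none of it is the new method itself.

```python
# Schematic of the classic Hollingsworth-Lönnberg (H-L) procedure.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Synthetic innovations at random locations (km): spatially correlated
# background error plus white observation error with known true parameters.
n = 300
xy = rng.uniform(0, 500, size=(n, 2))
dists = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
true_cov = 1.0 * np.exp(-(dists / 80.0) ** 2) + 0.25 * np.eye(n)
innov = rng.multivariate_normal(np.zeros(n), true_cov)

# H-L step 1: bin the innovation products by separation distance.
prod = np.outer(innov, innov)
iu = np.triu_indices(n, k=1)
bins = np.arange(0.0, 400.0, 25.0)
idx = np.digitize(dists[iu], bins)
bin_cov = np.array([prod[iu][idx == i].mean() for i in range(1, len(bins))])
bin_mid = bins[:-1] + 12.5

# H-L step 2: fit an isotropic (here Gaussian-shaped) covariance model.
model = lambda r, sb2, L: sb2 * np.exp(-(r / L) ** 2)
(sb2, L), _ = curve_fit(model, bin_mid, bin_cov, p0=[1.0, 100.0])

# H-L step 3: the part of the zero-separation variance not explained by the
# fitted background covariance is attributed to observation error.
obs_var = innov.var() - sb2
print(f"background var ~ {sb2:.2f}, obs-error var ~ {obs_var:.2f}, length scale ~ {L:.0f} km")
```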

