discrete sample
Recently Published Documents


TOTAL DOCUMENTS: 92 (FIVE YEARS: 13)
H-INDEX: 11 (FIVE YEARS: 2)

2021 · Vol 101 (1) · pp. 78-86
Author(s): V.P. Kvasnikov, S.V. Yehorov, T.Yu. Shkvarnytska, ...

The problem of determining the properties of an object by analyzing the numerical and qualitative characteristics of a discrete sample is considered. A method is developed for determining the probability of trouble-free operation of electronic systems when a different interpolation polynomial is used between successive interpolation nodes, and another for the case where a single interpolation polynomial covers the entire interpolation domain. It is shown that local interpolation methods give more accurate results than global ones, but that global interpolation allows the function to be evaluated outside the given nodes by extrapolation, which makes it possible to predict the probability of failure. It is also shown that using approximation methods to determine the probability of trouble-free operation reduces the probability of an error of the second kind. A method for analyzing the qualitative characteristics of functional dependences is developed, which allows the optimal interpolation polynomial to be chosen. With sufficient statistics, goodness-of-fit criteria can be used to build mathematical models for analyzing failure statistics of electronic equipment; when the volume of statistics is small, however, such statistics may not be sufficient and applying goodness-of-fit criteria leads to unsatisfactory results. An alternative is to apply an approximation method to the statistical material collected during testing or controlled operation. It is therefore important to develop a method for determining the reliability of electronic systems when the collected failure statistics of electronic equipment are insufficient.
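The contrast the abstract draws between local and global interpolation can be illustrated with a short sketch. This is not the authors' implementation: the test times and reliability values below are hypothetical, SciPy's piecewise cubic interpolator stands in for a generic local method, and a Lagrange polynomial stands in for the global one.

```python
# Hypothetical reliability data: probability of trouble-free operation P(t)
# observed at discrete test times t (hours).
import numpy as np
from scipy.interpolate import interp1d, lagrange

t = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
p = np.array([1.00, 0.97, 0.90, 0.78, 0.60, 0.35])

local_interp = interp1d(t, p, kind="cubic")  # piecewise (local) polynomials
global_poly = lagrange(t, p)                 # one polynomial over the domain

# Both agree at the nodes; between nodes the local method is typically
# the more accurate of the two, as the abstract notes.
print(local_interp(250.0), global_poly(250.0))

# Only the global polynomial can be evaluated beyond the last node, which
# is what permits extrapolating (predicting) the failure probability.
print(global_poly(550.0))
```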


2020 · Vol 224 (2) · pp. 1079-1095
Author(s): Norbert R Nowaczyk, Jiabo Liu, Helge W Arz

SUMMARY Magnetostratigraphic investigation of sediment cores from two different water depths in the SE Black Sea, based on discrete samples and on parallel U-channels in one of the cores, yielded high-resolution records of geomagnetic field variations spanning roughly the past 68 ka. Age constraints are provided by three tephra layers of known age, by accelerator mass spectrometry 14C dating, and by tuning element ratios obtained from X-ray fluorescence scanning to the oxygen isotope record from Greenland ice cores. Sedimentation rates vary from a minimum of ∼5 cm ka−1 in the Holocene to a maximum of ∼50 cm ka−1 in glacial marine isotope stage 4. Completely reversed inclinations and declinations, as well as pronounced lows in relative palaeointensity around 41 ka, provide evidence for the Laschamps geomagnetic polarity excursion. In one of the investigated cores, a fragmentary record of the Mono Lake excursion at 34.5 ka could also be recovered. However, the palaeomagnetic records are affected to varying degrees by greigite, a diagenetically formed magnetic iron sulphide. By defining an exclusion criterion based on the ratio of saturation magnetization over volume susceptibility, greigite-bearing samples were removed from the palaeomagnetic data; as a result, only 25–55 per cent of the samples were retained in the records obtained from sediments at the shallower coring site. The palaeomagnetic record from the deeper site, based on both discrete samples and U-channels, is much less affected by greigite. Comparison of the palaeomagnetic data shows that the major features of the Laschamps polarity excursion were recovered similarly by both sampling techniques, although several intervals had to be removed from the U-channel record because greigite there carried anomalous directions. Comparison with the discrete-sample data also reveals directional artefacts in the U-channel record caused by the low-pass filtering effect of the broad magnetometer response functions, which average across fast, large-amplitude directional changes. Therefore, high-resolution sampling with discrete samples should be the preferred technique when fast geomagnetic field variations, such as reversals and excursions, are to be studied in detail from sedimentary records.
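A minimal sketch of the greigite exclusion step described above, with hypothetical measurements: it assumes, for illustration, that greigite-bearing samples are flagged by anomalously high ratios of saturation magnetization to volume susceptibility and dropped before the directional record is interpreted. The cut-off value here is invented, not the one derived in the paper.

```python
import numpy as np

# Hypothetical per-sample rock-magnetic measurements (arbitrary units).
ms    = np.array([1.2, 8.5, 1.0, 9.1, 1.4])   # saturation magnetization
kappa = np.array([1.0, 1.1, 0.9, 1.0, 1.2])   # volume susceptibility
decl  = np.array([12., 185., 8., 190., 15.])  # declination (degrees)

RATIO_CUTOFF = 5.0  # assumed threshold; the paper derives its own value

keep = (ms / kappa) < RATIO_CUTOFF  # drop suspected greigite-bearing samples
print(decl[keep])                   # cleaned palaeomagnetic record
```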


Author(s): Montserrat Jiménez Partearroyo, Lucía Torres Rivera

Aims: This paper compares two similar studies, run on a sample of Spanish hotel establishments in two different years, 2003 and 2017. The objective was to observe how the two information system technologies most frequently used by companies have evolved over this period: enterprise resource planning (ERP) and e-business systems. Methodology: The ERP/e-business matrix model (Norris, Hurley, Hartley, Dunleavy and Balls, 2001) was adopted as the methodology, setting up a number of scenarios to explain the transformations that take place in companies when they apply these technologies at different stages of development. Results: The observed evolution was oriented towards developing both computer systems, prioritizing e-business over ERP and using vertical solutions. Constraints: The study was run on a discrete sample of Spanish hotels and hotel chains. Limitations: The constructs used are limited by the instrument and metrics implemented by the company under study for evaluating the quality of customer interaction when a customer reaches a customer service call center. Practical implications: ICT has become a key factor for the hotel sector, and those who manage the virtualisation of business need a business vision aligned with ICT, with continuous improvement of their systems.


2020 · Vol 17 (2) · pp. 115-133
Author(s): Zachary J. Smith, J. Eric Bickel

In this paper, we develop strictly proper scoring rules that may be used to evaluate the accuracy of a sequence of probabilistic forecasts. In practice, when forecasts are submitted for multiple uncertainties, competing forecasts are ranked by their cumulative or average score. Alternatively, one could score the implied joint distributions. We demonstrate that these measures of forecast accuracy disagree under some commonly used rules. Furthermore, and most importantly, we show that forecast rankings can depend on the selected scoring procedure. In other words, under some scoring rules, the relative ranking of probabilistic forecasts does not depend solely on the information content of those forecasts and the observed outcome. Instead, the relative ranking of forecasts is a function of the process by which those forecasts are evaluated. As an alternative, we describe additive and strongly additive strictly proper scoring rules, which have the property that the score for the joint distribution is equal to a sum of scores for the associated marginal and conditional distributions. We give methods for constructing additive rules and demonstrate that the logarithmic score is the only strongly additive rule. Finally, we connect the additive properties of scoring rules with analogous properties for a general class of entropy measures.
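The claimed strong additivity of the logarithmic score can be checked numerically. The sketch below uses a hypothetical joint forecast over two binary uncertainties: the log score of the joint distribution equals the sum of the marginal and conditional scores, while the (also strictly proper) quadratic/Brier score does not decompose this way.

```python
import numpy as np

# Hypothetical joint forecast over two binary variables; observed (x, y) = (0, 1).
joint = np.array([[0.3, 0.4],
                  [0.2, 0.1]])  # joint[x, y]
x, y = 0, 1

marg_x = joint.sum(axis=1)     # forecast for X
cond_y = joint[x] / marg_x[x]  # forecast for Y given the observed X

def log_score(p, i):
    return np.log(p[i])

def brier_score(p, i):
    return 2.0 * p[i] - np.sum(p ** 2)

# Strong additivity: joint log score == marginal + conditional log scores.
print(log_score(joint.ravel(), 2 * x + y),
      log_score(marg_x, x) + log_score(cond_y, y))     # both ~ -0.916

# The Brier score is strictly proper but not additive in this sense.
print(brier_score(joint.ravel(), 2 * x + y),
      brier_score(marg_x, x) + brier_score(cond_y, y)) # 0.50 vs ~1.45
```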


2020
Author(s): Antony Pearson, Manuel E. Lladser

Abstract: Data taking values on discrete sample spaces are the embodiment of modern biological research. "Omics" experiments produce millions of symbolic outcomes in the form of reads (i.e., DNA sequences of a few dozen to a few hundred nucleotides). Unfortunately, these intrinsically non-numerical datasets are often highly contaminated, and the possible sources of contamination are usually poorly characterized. This contrasts with numerical datasets, where Gaussian-type noise is often well justified. To overcome this hurdle, we introduce the notion of latent weight, which measures the largest expected fraction of samples from a contaminated probabilistic source that conform to a model in a well-structured class of desired models. We examine various properties of latent weights, which we specialize to the class of exchangeable probability distributions. As proof of concept, we analyze DNA methylation data from the 22 human autosome pairs. Contrary to what is usually assumed, we provide strong evidence that highly specific methylation patterns are overrepresented at some genomic locations when contamination is taken into account.
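As a toy illustration of the latent-weight idea (not the authors' estimator), suppose the class of desired models collapses to a single fully specified distribution Q. Writing the contaminated source as P = wQ + (1 − w)R for an arbitrary contaminant R, the largest feasible w is min over x of P(x)/Q(x); the distributions below are hypothetical.

```python
import numpy as np

# Hypothetical distributions on a four-point discrete sample space.
P = np.array([0.50, 0.30, 0.15, 0.05])  # contaminated source
Q = np.array([0.40, 0.40, 0.10, 0.10])  # desired model

# Largest w with P - w*Q >= 0 everywhere, i.e. P = w*Q + (1-w)*R feasible.
latent_weight = np.min(P / Q)
print(latent_weight)  # 0.5: at most half the samples conform to Q
```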

